DOD codified its DOD Executive Agent program in 2002 and issued a directive, DOD Directive 5101.1, that defines a DOD Executive Agent and establishes the roles and responsibilities governing DOD Executive Agent assignments and arrangements. DOD officials told us that the department issued a directive for its DOD Executive Agent program in part because the term DOD Executive Agent had been used to describe a variety of management arrangements, and DOD Directive 5101.1 was intended to clarify the term. For example, in 1998, DOD identified 401 Executive Agents within the military departments. However, after the directive was issued in 2002, ODCMO officials stated that they worked with the identified Executive Agents to determine which were to remain DOD Executive Agents under the directive. As a result, the number of activities and programs with the title of DOD Executive Agent was significantly reduced. For example, the Joint Interagency Task Force West was referred to as U.S. Pacific Command’s Executive Agent to support law enforcement for counterdrug efforts in the Asia-Pacific region. However, according to ODCMO officials, this task force was not considered to be an official DOD Executive Agent per DOD Directive 5101.1, and ODCMO officials removed its DOD Executive Agent designation. For issuances published before March 25, 2012, DOD policy is that directives are to be updated or cancelled after 10 years. ODCMO officials told us that they are in the process of updating DOD Directive 5101.1, which was certified current in 2003, but did not have a firm deadline for when the update will be completed. 
DOD Executive Agent designations are conferred when 1) the efforts of more than one DOD component need to be coordinated and no existing means exists to accomplish DOD objectives, 2) DOD resources need to be focused on a specific area or areas of responsibility in order to minimize duplication or redundancy, or 3) such designation is required by law, executive order, or government-wide regulation. Further, within the scope of its assigned responsibilities and functions, the authority of the DOD Executive Agent takes precedence over the authority of other DOD component officials performing related or collateral joint or multicomponent support responsibilities and functions. A DOD Executive Agent is the head of a DOD component. The DOD Executive Agent may delegate the authority to act to a subordinate designee within that official’s component. For example, the Secretary of the Army is the designated DOD Executive Agent for DOD Biometrics and has delegated that responsibility to the Army’s Provost Marshal. DOD Directive 5101.1 assigns ODCMO the overall program management of the DOD Executive Agent program. Specifically, ODCMO oversees the implementation of the DOD Executive Agent directive, develops policy on DOD Executive Agent designations, and issues guidelines as appropriate to further define responsibilities contained in DOD Directive 5101.1. An OSD Principal Staff Assistant oversees the activities of DOD Executive Agents in their functional areas of responsibility. In addition, DOD Directive 5101.1 states that the OSD Principal Staff Assistant should assess the DOD Executive Agents in their functional areas periodically, but not less than once every 3 years, to determine the DOD Executive Agent’s continued need, currency, and effectiveness and efficiency in satisfying end-user requirements. 
According to ODCMO officials, these OSD Principal Staff Assistants are the Under Secretaries of Defense, the Deputy Chief Management Officer, the General Counsel of DOD, the Inspector General of DOD, and those Assistant Secretaries of Defense, Assistants to the Secretary of Defense, and OSD Directors, and equivalents, who report directly to the Secretary or Deputy Secretary of Defense. Typically, the OSD Principal Staff Assistants assess DOD Executive Agents within their functional areas. For example, the Under Secretary of Defense for Acquisition, Technology and Logistics would assess DOD Executive Agents involved in acquisition- and logistics-related areas, such as the DOD Executive Agents for Medical Materiel, Subsistence, Construction and Barrier Materiel, and Bulk Petroleum that are tasked with managing the logistics of supplying these products across the department. Only the Secretary of Defense or the Deputy Secretary of Defense may designate a DOD Executive Agent, and the designation remains in effect until the Secretary of Defense or the Deputy Secretary of Defense revokes or supersedes it. According to ODCMO officials, the Secretary or Deputy Secretary of Defense designates a DOD Executive Agent after an evaluation of existing organizational and management arrangements and a determination that a DOD Executive Agent would most effectively, economically, or efficiently carry out a function or task. However, according to ODCMO officials, the head of a DOD component may volunteer as a DOD Executive Agent and may formally request that the Secretary or Deputy Secretary of Defense make the assignment, or an OSD Principal Staff Assistant may propose that the Secretary or Deputy Secretary of Defense assign a DOD component as a DOD Executive Agent. 
ODCMO officials stated that this typically happens when a military department, defense agency, or a combatant command has substantial responsibility or expertise to execute a task on behalf of DOD, or the function is particularly sensitive or complex as differentiated from its overall organic mission. ODCMO officials also stated that DOD Executive Agent designations are typically formalized in a Secretary or Deputy Secretary of Defense memorandum, with direction to establish a DOD issuance to codify the specifics of the DOD Executive Agent arrangement at a later date. ODCMO officials stated that the issuance is important, as the designation of the title of DOD Executive Agent by itself confers no specific responsibilities. The nature and scope of the authority delegated must be stated in the memorandum or DOD issuance designating the DOD Executive Agent. According to ODCMO officials, funding of specific DOD Executive Agent activities is not determined at the time of assignment. Rather, the designated DOD Executive Agent seeks resources through DOD’s planning and budgeting process. Further, according to ODCMO officials, the DOD Executive Agent often bears the major share of the cost to execute the assigned responsibilities. However, ODCMO officials explained that, as necessary, funding determinations between the DOD Executive Agent and other DOD stakeholders are negotiated through memorandums of agreement or understanding and DOD’s annual program and budget review process. We determined that, as of May 2017, DOD had 81 DOD Executive Agents focused on a variety of topics and designated to 12 different DOD components. Almost half (38 of 81, or 47 percent) of the DOD Executive Agents were designated to the Secretary of the Army, and 68 of 81 (84 percent) were designated to the Secretaries of the Army, Air Force, or Navy or the Commandant of the Marine Corps. In contrast, six DOD components had one DOD Executive Agent designation each. 
Additionally, 11 different OSD Principal Staff Assistants oversee the 81 DOD Executive Agents. This information is based on our analysis of ODCMO’s list of DOD Executive Agents. Figure 1 shows the DOD Executive Agent designations by DOD component and by OSD Principal Staff Assistant. According to ODCMO officials, a DOD Executive Agent designation is typically assigned to the DOD component that is already involved in the work related to the DOD Executive Agent. Below are several types of activities DOD Executive Agents perform, with an example of a DOD Executive Agent that performs each activity:

Administrative Support—The Secretary of the Army, as the designated DOD Executive Agent for the U.S. Military Entrance Processing Command, is responsible for programming, budgeting, and funding all Military Entrance Processing Command operations.

Developing Standards—The Director of the Defense Information Systems Agency, as the DOD Executive Agent for Information Technology Standards, is responsible for developing and maintaining information-technology standards.

Developing Training Programs—The Secretary of the Air Force, as the DOD Executive Agent for Military Working Dogs, is responsible for developing required training programs and curricula for military working-dog instructors, kennel masters, and handlers.

Technology Management—The Secretary of the Navy, as the DOD Executive Agent for Printed Circuit Board and Interconnect Technology, is responsible for developing and maintaining a technology roadmap to ensure that DOD has access to the manufacturing capabilities and technical expertise necessary to meet future military requirements regarding this technology.

Acquisition Support—The Commandant of the Marine Corps, as the DOD Executive Agent for Non-Lethal Weapons, is responsible for coordinating nonlethal weapon requirements across doctrine, organization, training, materiel, leadership and education, personnel, and facilities. 
Department-wide Visibility—The Secretary of the Army, as the DOD Executive Agent for the Unexploded Ordnance Center of Excellence, chairs the center and executes management oversight and funding responsibilities for the center.

As part of our questionnaire for DOD Executive Agents, we asked about the reasons why DOD conferred the designation. In response, 51 percent (36 of 70) of DOD Executive Agents responding to our questionnaire reported that their designation was conferred to minimize the duplication or redundancy of DOD resources. Thirty-six percent (25 of 70) reported that their designation was conferred because no other means existed for the department to accomplish its objective. Finally, 26 percent (18 of 70) reported that their designation was conferred because it was required by law, executive order, or government-wide regulation. A majority of the DOD Executive Agents have OSD Principal Staff Assistants from one of three Under Secretaries of Defense. Forty-three percent (35 of 81) of the OSD Principal Staff Assistants for DOD Executive Agents are assigned to the Under Secretary of Defense for Acquisition, Technology and Logistics, while another 40 percent (32 of 81) are assigned to the Under Secretary of Defense for Personnel and Readiness or the Under Secretary of Defense for Policy. According to DOD Directive 5101.1, an OSD Principal Staff Assistant is to oversee the activities of DOD Executive Agents in their functional areas of responsibility. In addition, the OSD Principal Staff Assistant is assigned to assess each DOD Executive Agent to determine the DOD Executive Agent’s continued need, currency, and effectiveness and efficiency in satisfying end-user requirements. Typically, the OSD Principal Staff Assistant is to assess DOD Executive Agents within their functional areas. 
For example:

The Under Secretary of Defense for Acquisition, Technology and Logistics oversees 35 DOD Executive Agents and typically assesses those involved in acquisition- and logistics-related areas, such as the Director of the Defense Logistics Agency, who serves as the DOD Executive Agent for Medical Materiel, Subsistence, Construction and Barrier Materiel, and Bulk Petroleum and is tasked with managing the logistics of supplying these products across the department. In addition, the Under Secretary of Defense for Acquisition, Technology and Logistics oversees two designations related to chemical and biological weapons and two designations related to the safety and security of biological toxins and hazards.

The Under Secretary of Defense for Personnel and Readiness’s portfolio includes readiness; health affairs; training; and personnel requirements and management, including equal opportunity, morale, welfare, recreation, and quality-of-life matters. The Under Secretary of Defense for Personnel and Readiness oversees 20 DOD Executive Agents, including three designations related to language training or foreign language contracts; two designations related to recruitment and entrance processing; and the Armed Services Entertainment program.

The Under Secretary of Defense for Policy’s portfolio includes all matters pertaining to the formulation of national security and defense policy. The office oversees 12 DOD Executive Agents, including two designations related to security cooperation activities and two designations related to multinational organizations.

We found that DOD has weaknesses in its approach to tracking its DOD Executive Agents, resulting in ODCMO not having an accurate accounting of the number of DOD Executive Agents. According to DOD Directive 5101.1, ODCMO is responsible for developing, maintaining, monitoring, revising, and making available the list of DOD Executive Agent designations. 
However, we found that ODCMO did not maintain a list of DOD Executive Agents that was current or complete. For example, we found 10 designations on DOD’s list of DOD Executive Agents that were not accurate, including the following:

Disestablished DOD Executive Agents: Three DOD Executive Agent designations that were on ODCMO’s list had been disestablished; however, they had not been removed from the list. For example, in October 2015, a Deputy Secretary of Defense memorandum disestablished the DOD Executive Agent for Space by redesignating it as the Principal DOD Space Advisor. ODCMO officials stated that they were aware that it had been disestablished but had not removed it from the list until a directive, issued in June 2017, cancelled the designation for the DOD Executive Agent for Space. In another example, the DOD Executive Agent for Global Command and Control Systems should have been removed from ODCMO’s list in 2013.

Inactive DOD Executive Agents: Two DOD Executive Agent designations were no longer considered active, meaning that while the designations have not been cancelled, the DOD Executive Agents are no longer performing their responsibilities. DOD Directive 5101.1 states that designations are to remain in effect until the Secretary of Defense or the Deputy Secretary of Defense revokes or supersedes them. However, the Secretary of Defense or Deputy Secretary of Defense has not issued any documentation to disestablish these DOD Executive Agents. Specifically, Army officials from the Chemical Demilitarization Program stated that the responsibilities of the DOD Executive Agent had been completed in 2012 and thus the designation was no longer active. 
In the other example, officials from the DOD Executive Agent for DOD Civilian Police Officers and Security Guards Physical Fitness Standards Program stated that the directive for this program was updated in 2012 and the reference to the DOD Executive Agent designation had been removed because the designation was no longer necessary. Officials stated that they intended to pursue the cancellation of the designation at a later date.

Unclear DOD Executive Agent designations: Three DOD Executive Agent designations were unclear, such that they were not considered actual DOD Executive Agents, or officials in the relevant component had no knowledge of the designation. For example, Navy officials stated that they could not find any organization currently carrying out any responsibility related to the DOD Executive Agent for High School News Service or for the Force Protection of Military Sealift Assets. ODCMO officials told us that these may have been considered DOD Executive Agents at one time, but the arrangements were never documented. In a third example, the status of the DOD Executive Agent for the Global Positioning System is unclear, since Air Force officials at the program stated that they do not use the term DOD Executive Agent to refer to the program and were unaware that the program was considered to be a DOD Executive Agent. ODCMO officials stated that a determination was likely made at some point to consider this organization a DOD Executive Agent, and therefore the organization was included on ODCMO’s list, but no official documentation was issued. Air Force officials who track the Air Force’s DOD Executive Agents stated that the Global Positioning System program may have been considered a DOD Executive Agent at one time.

Missing DOD Executive Agent designation: One DOD Executive Agent designation was missing from ODCMO’s list. 
ODCMO’s list included a DOD Executive Agent for Weapons of Mass Destruction and Delivery Vehicle Elimination Operations in Libya, with the Defense Threat Reduction Agency as the designated DOD Executive Agent. However, Defense Threat Reduction Agency officials stated that there are actually two separate designations, one for such operations in Libya and one for Iraq. Both ODCMO and the Defense Threat Reduction Agency lost track of the designation for Iraq, and it was not included in ODCMO’s list of DOD Executive Agents.

Not a DOD Executive Agent: One DOD Executive Agent designation was on ODCMO’s list that ODCMO and Army officials agree should not have been considered a DOD Executive Agent. According to ODCMO officials, the DOD Executive Agent designation for the Joint Center for International Security Force Assistance was inappropriately applied to the organization. Officials explained that the center is actually a Chairman’s Controlled Activity, which is another type of management arrangement the department uses. Per DOD policy, only the Secretary of Defense or Deputy Secretary of Defense may cancel a designation. Thus, Army officials stated that until official action is taken to document that the center is not a DOD Executive Agent, it will remain on ODCMO’s list and the Army will consider it a valid DOD Executive Agent.

We also identified seven other designations that ODCMO may need to revisit. ODCMO officials stated that our review highlighted several designations that may no longer be considered active and require resolution. Specifically:

Army officials with whom we spoke told us that 5 of the Army’s 38 designations may no longer be necessary and could be disestablished. 
Officials from the DOD Executive Agent for Weapons of Mass Destruction Elimination Operations and Delivery Vehicle Elimination Operations in Libya stated in their response to our questionnaire that the DOD Executive Agent’s 2004 designation is no longer needed, as considerable time has passed and the nature of U.S. government engagement and policies toward Libya have changed significantly.

Officials from both the DOD Executive Agent and the OSD Principal Staff Assistant for the DOD Executive Agent for the Regional Centers for Security Studies stated that the designation may no longer be necessary, as the functions and responsibilities of this DOD Executive Agent are operating in a routine manner.

According to ODCMO officials, a number of different circumstances may prompt the cancellation of a DOD Executive Agent designation, including circumstances when the responsibilities of a DOD Executive Agent have become institutionalized as part of an office or agency. ODCMO controls its updates to the DOD Executive Agent list to ensure that any changes are vetted through the appropriate offices. However, according to ODCMO officials, to maintain the list they rely on representatives from DOD Executive Agents to self-report any modifications to the DOD Executive Agent or contact information for relevant officials, which has resulted in some of the discrepancies described above. Aside from DOD Executive Agents self-reporting any changes, ODCMO officials stated that there is no process to ensure that all information on the list is current or complete. Furthermore, ODCMO officials stated that they have not issued guidance instructing DOD Executive Agent officials under what circumstances they should self-report changes. Moreover, we found that ODCMO does not have a process for being notified when a new DOD Executive Agent is established or when one is cancelled. 
ODCMO officials stated that they provide consultation upon request to other DOD components that are considering establishing a new DOD Executive Agent. However, officials stated they are not always consulted and may not become aware of a new DOD Executive Agent designation until after its establishment. For example, ODCMO officials stated that they were not involved in the issuance of the January 2017 Deputy Secretary of Defense memorandum that announced the designation of the Secretary of the Army as the DOD Executive Agent for the DOD Biological Select Agent and Toxin Biosecurity Program. ODCMO officials told us they have, on at least one occasion, learned about interest in establishing a DOD Executive Agent for a function that another DOD Executive Agent was already addressing, and advised against its establishment. Furthermore, ODCMO officials said that a DOD Executive Agent designation can be removed from the list of DOD Executive Agents by cancelling or updating the DOD issuance that established the DOD Executive Agent. Even though ODCMO coordinates all issuances for the department, ODCMO officials stated that they are not informed of all changes in issuances related to DOD Executive Agent designations, such as when a designation is updated or cancelled. For example, as noted earlier, officials from the DOD Executive Agent for DOD Civilian Police Officers and Security Guards Physical Fitness Standards Program stated that the reference to the DOD Executive Agent designation was removed as part of the 2012 update to the DOD directive for the DOD Executive Agent. However, ODCMO officials were not aware that the updated directive no longer included a reference to the DOD Executive Agent designation, and therefore ODCMO still had this DOD Executive Agent on its list. DOD Executive Agent officials stated that they intended to pursue the cancellation of the designation at a later date. 
When consulted on DOD issuances related to the establishment, disestablishment, or modification of a DOD Executive Agent, ODCMO officials stated they advise the OSD Principal Staff Assistants, among others, to discretely identify the actions related to the DOD Executive Agent designation to facilitate their tracking. According to DOD Directive 5101.1, ODCMO is to issue guidelines, as appropriate, to further define the policies, responsibilities and functions, and authorities contained in the directive. This could include the process for notifying ODCMO when a change is made to a DOD Executive Agent, such as when one is established, removed, or modified. Standards for Internal Control in the Federal Government states that management should use high-quality information to achieve the entity’s objectives. Specifically, management obtains relevant data from reliable internal and external sources in a timely manner, based on identified information requirements. ODCMO officials agreed that they need to improve their tracking of DOD Executive Agents; however, they have not developed an approach for doing so. Without taking steps to ensure that it is accurately tracking its Executive Agents, ODCMO will not be able to effectively oversee the DOD Executive Agent program, and DOD’s list of Executive Agents will continue to be outdated and incomplete. An accurate list is an important tool to help ODCMO manage its DOD Executive Agent program, including ensuring that there is no overlap in efforts across the DOD Executive Agent designations. According to the 70 DOD Executive Agents responding to our questionnaire, OSD Principal Staff Assistants responsible for assessing DOD Executive Agents have not conducted assessments of about half (37 of 70) of the DOD Executive Agents in the past 3 years, as required by DOD guidance. Of the remaining 33 DOD Executive Agents, 28 responded that their OSD Principal Staff Assistant assessed them. 
Moreover, of those 28 DOD Executive Agents, almost half (13 of 28) said their assessment was not documented or that they did not know whether documentation existed. Finally, 3 DOD Executive Agents responded that they did not know whether OSD Principal Staff Assistants had assessed them. (See fig. 2.) Among the DOD Executive Agents that indicated they were assessed and provided documentation of the assessment, we found that many did not meet all of the requirements for assessments as prescribed in DOD Directive 5101.1. Specifically, the OSD Principal Staff Assistants either did not conduct the assessment at all or did not conduct it within the past 3 years. Of the 15 respondents who indicated that the assessment was documented, 12 provided either the documentation itself, the text of the document in their response but not the document itself, or a citation to a DOD issuance related to the DOD Executive Agent that we were able to find independently. The documentation provided included, for example, minutes of annual meetings reviewing DOD Executive Agent programs, assessments the DOD Executive Agent directed independent consultants to conduct, or delegations of authority from the head of the component designated to be the DOD Executive Agent to other officials. Our review of these documents found that for half (6 of 12) of the DOD Executive Agents that provided documentation, the OSD Principal Staff Assistant did not conduct the assessment, and 3 of the 6 did not conduct it within the past 3 years, as shown in table 1 below. For example, the OSD Principal Staff Assistant did not conduct the assessments of the four DOD Executive Agents assigned to the Defense Logistics Agency (Subsistence, Bulk Petroleum, Construction/Barrier Materiel, and Medical Materiel). 
According to an official from the OSD Principal Staff Assistant’s office, the OSD Principal Staff Assistant delegated the responsibility to conduct the assessment directly to the DOD Executive Agent in one case, and in the other three cases the OSD Principal Staff Assistant approved the DOD Executive Agent’s decision to direct an independent consultant to conduct the assessments. In addition, according to Army officials, 2 of the 12 documented assessments should not be considered assessments. Specifically, Army officials from the Office of the Administrative Assistant to the Secretary of the Army, the office that manages the Army’s DOD Executive Agents, did not agree that the documentation submitted by two Army DOD Executive Agents should be considered assessments. The DOD Executive Agent for the Chemical and Biological Defense Program and the DOD Executive Agent for the Contract Linguist Program submitted Army memorandums stating that the Secretary of the Army was delegating the responsibilities of the DOD Executive Agent to other offices within the Army. According to Army officials who prepared the memorandums, the Army did not conduct any review or assessment of the DOD Executive Agents while generating these memorandums. DOD Directive 5101.1 states that the OSD Principal Staff Assistant shall assess DOD Executive Agent assignments and the arrangements associated with such assignments under their cognizance, as noted previously. The directive further states that the assessments shall occur periodically, but not less than once every 3 years, to determine the DOD Executive Agent’s continued need, currency, and effectiveness and efficiency in satisfying end-user requirements. In addition, Standards for Internal Control in the Federal Government states that documentation is a necessary part of an effective internal control system and is required for the effective design, implementation, and operating effectiveness of an entity’s internal control system. 
The directive also assigns ODCMO the responsibility for overseeing the implementation of the directive. ODCMO officials told us that they did not know whether the assessments were occurring, and neither requested nor received assessments. The officials stated that they have not ensured the completion of DOD Executive Agent assessments because they narrowly interpreted their responsibility to oversee the implementation of DOD Directive 5101.1. Specifically, ODCMO officials stated that their responsibilities were limited to providing advice to other DOD components that expressed interest in using the DOD Executive Agent designation and to maintaining a list of DOD Executive Agent designations. Further, we found that even when assessments were completed, according to the officials, they were not always documented. While DOD Directive 5101.1 does not require the assessments to be documented, in the absence of such documentation the OSD Principal Staff Assistant cannot demonstrate that it has conducted an assessment in the past 3 years or that the assessment reviewed the DOD Executive Agent’s continued need, currency, and effectiveness and efficiency in satisfying end-user requirements. According to DOD Directive 5101.1, ODCMO shall issue implementing guidance, which may include clarifying the responsibility of OSD Principal Staff Assistants in conducting assessments of DOD Executive Agents. ODCMO officials told us that they have not issued implementing guidance because they do not want to be prescriptive in how OSD Principal Staff Assistants should assess DOD Executive Agents, as each DOD Executive Agent designation is unique. Therefore, ODCMO wants to provide flexibility in how OSD Principal Staff Assistants conduct the assessments, including how they define the terms continued need, currency, and effectiveness and efficiency in satisfying end-user requirements. 
However, ODCMO could issue implementing guidance that ensures that the assessments are completed and documented. Several OSD Principal Staff Assistants with whom we spoke also told us that additional ODCMO guidance could help clarify the assessment requirement. Without verifying that the OSD Principal Staff Assistants for all DOD Executive Agents have completed required assessments and providing implementing guidance requiring the documentation of the assessments, the department does not have reasonable assurance that OSD Principal Staff Assistants are assessing DOD Executive Agents or that DOD Executive Agents—as a management arrangement—are accomplishing department objectives. According to DOD officials, conducting these periodic assessments would assist the department in reviewing DOD Executive Agent designations to ensure that the department is managing its resources efficiently and effectively. DOD Executive Agents can help the department achieve its range of objectives more efficiently and effectively when additional coordination is needed to focus DOD resources and minimize duplication or redundancy of activities, among other things. However, ODCMO faces challenges in overseeing DOD Executive Agents. For example, ODCMO has weaknesses in its approach to tracking its DOD Executive Agents, making it difficult to determine how effectively the office is carrying out its responsibilities. Further, ODCMO does not ensure that OSD Principal Staff Assistants are conducting required assessments or that these assessments are documented in a manner that demonstrates that DOD Executive Agents were assessed for continued need, currency, and effectiveness and efficiency in meeting end-user needs. Given its oversight responsibility for the DOD Executive Agent program, ODCMO should take action to ensure that the requirements in DOD Directive 5101.1 are being met and that the program is being effectively implemented. 
Without this action, DOD does not know whether its Executive Agents are effective in meeting their intended purpose and may be missing opportunities to better manage its resources and activities department-wide. We recommend that DOD’s Deputy Chief Management Officer take the following three actions:

strengthen its approach to tracking DOD Executive Agents to ensure that its list and contact information are current and complete;

verify that the OSD Principal Staff Assistants for all DOD Executive Agents have completed their required assessments every 3 years; and

issue implementing guidance that OSD Principal Staff Assistants should document the assessments of DOD Executive Agents, including documenting how the assessments address the DOD Executive Agents’ continued need, currency, and effectiveness and efficiency in meeting end-user needs.

We provided a draft of this report to DOD for review and comment. In written comments, which are summarized below and reprinted in appendix II, DOD concurred with our recommendations. In addition, DOD provided technical comments, which we have incorporated into the report as appropriate. In its written comments, DOD stated that it plans to take several actions to implement the recommendations by the end of the first quarter of fiscal year 2018. Specifically, DOD stated that it will task the OSD Principal Staff Assistants to review the DOD Executive Agents under their cognizance, validate existing information, identify inaccuracies, and provide updated points of contact. In addition, DOD plans to issue guidance to the OSD Principal Staff Assistants to provide documentation of assessments completed in the last 3 years, and to direct the OSD Principal Staff Assistants to initiate an assessment if one has not been completed in the last 3 years. Furthermore, this guidance will task OSD Principal Staff Assistants to conduct, document, and provide copies of these assessments for each DOD Executive Agent. 
Finally, DOD stated that the Deputy Chief Management Officer, once informed by the completed assessments of DOD Executive Agents, will take the necessary actions to enhance DOD Executive Agent oversight. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Deputy Chief Management Officer. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (213) 830-1011 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The Office of the Deputy Chief Management Officer (ODCMO) maintains a list of DOD Executive Agents. The list includes information about each DOD Executive Agent, such as the title of the DOD Executive Agent assignment, the office assigned as the Office of the Secretary of Defense (OSD) Principal Staff Assistant, the department official who designated the DOD Executive Agent, and the date of the DOD Executive Agent assignment. To describe the number of DOD Executive Agents, we analyzed DOD’s list and DOD issuances designating the DOD Executive Agent assignment, and contacted department officials of each DOD Executive Agent. Below are four tables listing the DOD Executive Agent responsibilities assigned to the Secretary of the Army (see table 2), the Secretary of the Air Force (see table 3), the Secretary of the Navy, including the Marine Corps (see table 4), and the heads of other DOD components (see table 5). In addition to the individual named above, key contributors to this report were Tina Won Sherman (Assistant Director), Angeline Bickner, Carolynn Cavanaugh, Tim DiNapoli, Mae Frances Jones, Lori Kmetz, Kirsten Lauber, Shari Nikoo, Daniel Ramsey, Michael Silver, and Matthew Ullengren.
DOD maintains military forces with unparalleled capabilities. However, the department continues to confront weaknesses in the management of its business functions that support these forces. DOD uses Executive Agents, which are intended to facilitate collaboration, to achieve critical department objectives. Senate Report 114-255, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2017, included a provision that GAO review DOD Executive Agents. This report (1) describes the number and focus of DOD Executive Agents; and evaluates the extent to which DOD (2) tracks its Executive Agents and (3) conducts periodic assessments of its Executive Agents. GAO reviewed relevant DOD directives and the list of Executive Agents; developed and administered a questionnaire to DOD's Executive Agents; and interviewed relevant DOD officials. Based on GAO's analysis, the Department of Defense (DOD) has 81 Executive Agents—management arrangements where the head of a DOD component is designated specific roles and responsibilities to accomplish objectives when more than one component is involved. These Executive Agents are assigned to 12 DOD components and support a range of activities, including managing technology and developing training programs. The Secretary of the Army is designated as the Executive Agent for almost half of them (38 of 81). DOD's Executive Agent directive requires that the Office of the Deputy Chief Management Officer (ODCMO) maintain a list of Executive Agent designations and oversee their assessments, among other things. Office of the Secretary of Defense (OSD) Principal Staff Assistants are required to assess their respective Executive Agents every 3 years to determine their continued need, currency, efficiency, and effectiveness. GAO found weaknesses in DOD's approach to tracking its Executive Agents, resulting in inaccuracies regarding 10 Executive Agents.
For example, DOD's list of Executive Agents included several that are not currently active. While ODCMO is required to maintain a list of Executive Agents, ODCMO officials rely on self-reported information from DOD Executive Agents and OSD Principal Staff Assistants. Without taking steps to accurately track DOD Executive Agents, DOD's list will continue to be outdated and ODCMO cannot effectively oversee DOD Executive Agents. Principal Staff Assistants had not periodically assessed more than half (37 of 70) of DOD Executive Agents that responded to GAO's questionnaire (see figure). ODCMO is responsible for overseeing the implementation of DOD's Executive Agents directive, which requires that Principal Staff Assistants conduct assessments; however, ODCMO officials told GAO they do not ensure that Principal Staff Assistants have conducted these assessments. GAO also found that Principal Staff Assistants are not required to document these assessments. Without verifying the completion of these assessments and issuing guidance requiring their documentation, DOD does not have reasonable assurance that DOD Executive Agents are accomplishing department objectives. GAO recommends that ODCMO strengthen its approach to track DOD Executive Agents; verify assessments are conducted; and issue implementing guidance for documenting assessments. DOD concurred with the recommendations.
In the United States, there are essentially two categories of drugs for distribution: prescription and nonprescription. Nonprescription drugs are often referred to as over-the-counter (OTC) medications (the terms are used interchangeably in this report). The term “prescription” has several meanings but generally refers to the order of a physician to a pharmacist for the delivery of certain medications to a patient. A prescription drug may be dispensed to a patient only on the basis of such an order. Nonprescription drugs are available for general sale without a prescription by self-service in pharmacies and in nonpharmacy outlets such as grocery stores, mass merchandisers, gas stations, and restaurants. The principal factors used to determine the prescription or nonprescription status of drugs are the margin of safety, method of use and collateral measures necessary to use, benefit-to-risk ratio, and adequacy of labeling for self-medication. Nonprescription drug sales were over $13 billion in 1992 and may reach $18 billion by the end of 1995 or 1996 (Covington, 1993, p. xxv). The importance of these medicines is growing, partly as a result of the reclassification of some commonly used drugs from prescription to nonprescription status. The two-tier system in the United States is unusual. Other countries typically have either more or different categories. There can be limitations on where and by whom a nonprescription drug can be sold. In some countries, the sale of some or all nonprescription drugs is restricted to pharmacies. Additionally, in some countries, certain nonprescription products have to be dispensed personally by a pharmacist. The 1951 Durham-Humphrey Amendment to the Federal Food, Drug, and Cosmetic Act of 1938 provided the statutory basis for the two-tier drug classification system in the United States. Since that time, there have been a number of proposals to introduce a third category of drugs in the United States. 
These proposals have been called by a number of names, including pharmacist-legend, pharmacist-only, third class of drugs, and transition class. Although there is some variation among them, the basic idea is the same: a class of drugs would be established that would be available only in pharmacies, but no prescription would be needed. One variation is that the pharmacist would have to be personally involved in the sale of a drug in this class; a sales clerk could not sell the drug without the permission of the pharmacist. (For additional information on the history of this issue in the United States, see appendix I.) There are two general views on how an additional class of drugs would be used in the United States. The first, and the one advocated in the past by various pharmacist organizations such as the American Pharmaceutical Association (APhA) and the California Pharmacists Association, sees it as a permanent class. It would be similar to the current classes in that drugs would be placed in the class with no expectation that they would eventually be moved to the prescription or nonprescription class. Drugs in the new class would be those thought inappropriate for use without some supervision by a health professional, though a physician’s oversight would not be necessary. Drugs in this middle class could come from either the prescription or nonprescription classes, although it is generally believed that they should come from the prescription class. Opponents of this proposal have included the Nonprescription Drug Manufacturers Association (NDMA) and the American Medical Association (AMA). The second, advocated first in 1982 by the National Association of Retail Druggists (NARD) and currently supported by such groups as APhA and the National Consumers League, sees the intermediate class as a transition class.
A drug that was being switched from prescription to nonprescription status would spend a period of time in the transition class, during which the suitability of the drug for general sale could be assessed. The assessment could be based not only on experiences with the drug as a prescription product (as is currently done) but also on experiences with the drug in the transition class, where it would not be limited to prescription sale. The argument is that this would give a better picture of how the drug would be used if it were available for general sale (that is, without a prescription and outside of pharmacies). Information that could be gathered while the drug was in the transition class includes types and levels of misuse among the general public, incidents of adverse drug reactions, and interactions with other medications. At the end of a specified period, the Food and Drug Administration (FDA) would decide to switch the drug to the general sale class, return the drug to prescription status, or keep the drug in the transition class for further study. This proposal has also been opposed by, among others, NDMA and AMA. The effect of an intermediate class of drugs in the United States would depend on whether the drugs in it would come from the prescription or general sale class. Figure 1.1 illustrates the several ways an intermediate drug class might function in the United States. Arguments for and against an intermediate class of drugs fall into two general (but sometimes related) categories: health and economic. Table 1.1 lists some of the arguments that have been put forth in support of and opposition to an intermediate class of drugs. Most of the arguments are relevant for both a fixed and a transition class. The principal difference between a fixed and a transition class is not the benefits and costs that would ensue but their goals. 
The goal of a transition class in the United States would be to facilitate the movement of drugs into the general sale category. The goal of a fixed class would be to place drugs permanently in the class. Many of the arguments for an intermediate class of drugs suggest that the quality of health care would improve if pharmacists’ involvement were greater. Proponents such as APhA argue that pharmacists are well trained in pharmacology and that their expertise is underused. They could play an important role in improving drug use. It is argued further that making use of this expertise is especially important for recently switched drugs whose potential for widespread abuse and toxicity is great. In the case of a transition class, Penna (1985) writes that pharmacists would be in a position to aid FDA in its switch decisions by maintaining records of the medications they dispense and by giving researchers assessing the safety and efficacy of these drugs access to those records. They might also be encouraged or required to report adverse drug reactions and be involved in postmarketing evaluation studies. Currently, FDA derives this information only from the use of drugs as prescription products. Some arguments against an intermediate class of drugs come from industry officials who have argued that while pharmacists have useful information to pass on to consumers, an intermediate class is not necessary for tapping into it. If customers are interested in getting advice from pharmacists, they can go to a pharmacy and ask for it but are not forced to do so. They also note some difficulties with an increased role for pharmacists. Counseling for nonprescription products is infrequent and sometimes inappropriate, and they argue that this would not change with the establishment of an intermediate class of drugs. In addition, consumers use nonprescription drugs responsibly. They read and understand drug labels. There is nothing for the pharmacist to add.
NDMA agrees that pharmacists are well trained in pharmaceuticals but believes that they are not trained in other roles—in particular, diagnosing illnesses (NDMA, 1992). Only physicians have this training and should be performing this role. Improper diagnosis could lead to treating symptoms rather than the underlying cause of an illness. Finally, opponents argue that the current two-tier system works well (NDMA, 1992). It is simple and effective. Either a drug is safe enough to be taken without medical supervision or it is not. There is no need for an intermediate class of drugs. To find out whether there would be significant advantages to creating an additional class of drugs, the Ranking Minority Member of the House Committee on Commerce asked us to examine the operation of drug distribution systems in 10 countries that have a pharmacist or pharmacy class of drugs and to compare these systems with that in the United States. To respond to this request, we posed specific evaluation questions:

1. What conclusions can be drawn from studies or reports on the development, operation, and consequences of different multiple-classification drug distribution systems?
2. What are the drug distribution systems for the 10 countries?
3. What drug distribution system will be implemented in the European Union?
4. How does access to nonprescription drugs vary between the study countries and the United States?
5. How do pharmacists ensure the proper use of nonprescription drugs?
6. What is the U.S. experience with dispensing drugs without a physician’s prescription but only by pharmacists?

Our purpose was to learn generally about factors that affect drug distribution in other countries and, in particular, about the perceived costs and benefits of a pharmacist or pharmacy class of drugs. This can raise important issues about the desirability or usefulness of such a class of drugs in the United States. By studying other countries, it is possible to bring empirical data to the debate.
We examined the drug distribution systems in Australia, Canada, Denmark, France, Germany, Italy, the Netherlands, Sweden, Switzerland, and the United Kingdom. (See appendix II.) As requested, we also studied the harmonized system for the members of the European Union (EU). We examined the classification of the following 14 drugs: aspirin, cimetidine, codeine, diclofenac, diflunisal, ibuprofen, indomethacin, naproxen, phenylpropanolamine, promethazine, ranitidine, sulindac, terfenadine, and theophylline. (See appendixes III and IV.) We chose these drugs because they either are past switches or have been suggested as candidates for switching in the United States or another country. We focused on an intermediate class of drugs as it has generally been discussed in the United States and practiced in other countries—that is, a class of nonprescription drugs available only in pharmacies or from a pharmacist. We did not assess the more general notion of pharmaceutical care, although we discuss it briefly in chapters 4 and 5. An intermediate class of drugs might be considered one form of pharmaceutical care. While some arguments and evidence regarding pharmaceutical care are therefore relevant for an intermediate class of drugs, a complete evaluation of pharmaceutical care was beyond our scope. To determine what is known about the operation of drug distribution systems that include a pharmacist or pharmacy class of drugs, we examined extant information and gathered expert opinion on five general issues. (1) The findings of studies on the health and economic effects of a pharmacist or pharmacy class. (2) The experiences of other countries and the European Union with a pharmacist or pharmacy class, including its use to move a drug to a general sale class, its usefulness in preventing drug abuse, and its effect on drug expenditures. (3) The effect on consumers’ access to nonprescription drugs of restricting their sale to pharmacies or personal sale by pharmacists.
(4) The role of pharmacists in the study countries and the United States and the findings of studies on pharmacist counseling for nonprescription drugs. (5) The limited experience in the United States of pharmacists prescribing drugs without a physician’s involvement and of restricting some nonprescription drugs to sale only by pharmacists. We gathered information from a number of sources and used several data collection methods. We did not do independent analyses of data bases. We conducted computerized literature searches on the following topics: (1) drug distribution systems in the study countries, (2) the behavior of pharmacists, (3) the classification of the 14 drugs, (4) the advantages and disadvantages of an intermediate class of drugs, and (5) assessments of the health and economic effects of different drug distribution systems. We conducted interviews with officials of FDA involved in the regulation of prescription and nonprescription drugs, pharmacy associations, drug manufacturers, consumer groups, and drug manufacturer associations. We also interviewed academics who have written on this subject. In addition, we met with officials and academics in Florida to discuss their experiences with the Florida Pharmacist Self-Care Consultant Law (see appendix V). We requested information from government and pharmacy association officials in the 10 study countries. Because Canada’s individual provinces have a great deal of power over drug distribution, we also requested information from officials in Ontario. We sought to gather descriptive information on the drug distribution system in each country, including criteria for drug classification, the classification of the 14 drugs, requirements for pharmacist counseling, and liability issues. To obtain more in-depth information about the systems and experiences of particular countries, we traveled to Australia, Canada, Germany, the Netherlands, Switzerland, and the United Kingdom. 
We chose these countries because each allows the sale of some drugs outside pharmacies. The extensiveness of this general sale class varies greatly between countries; however, it was important to assess the experiences of countries where at least some drugs are available in the same manner as in the United States. We met with government officials, industry and pharmacy representatives, and other individuals knowledgeable about drug distribution in each country. The trips also allowed us to gather the views of a wider range of people than we contacted by mail, such as consumer groups, physicians’ associations, drug manufacturers, and academics. We also visited officials in Brussels, Belgium, to understand the rationale behind the decisions of the European Union regarding drug distribution in the member countries. We conducted our evaluation between February 1993 and December 1994 in accordance with generally accepted government auditing standards. Although studies have examined individual drug distribution systems, we found that little effort has been made to systematically compare systems. Our study brings together information about the drug distribution systems in 11 countries (including the United States), the Canadian province of Ontario, and the European Union. In addition to describing the systems, we examine the accessibility of nonprescription drugs in the study countries and the United States, describe the role of pharmacists in the countries, and assess evidence for implementing a class of nonprescription drugs available only from pharmacies (or personally from a pharmacist) in the United States. This information allows an assessment of the operation of a pharmacist or pharmacy class of drugs in the study countries and raises issues that would have to be addressed if such a class of drugs were considered in the United States. One important difference between the United States and the other countries limits the lessons that can be learned.
In all the countries other than the United States, there is some government provision of health care to the general public or universal health insurance through the private sector but regulated by the government. Thus, the context in which drugs are acquired, sold, and paid for can be quite different in these countries from that in the United States. If the barriers to obtaining a prescription drug in these countries are smaller than in the United States because individuals do not pay directly for physician visits or the drugs prescribed during them, there may be less incentive to purchase nonprescription products. Another limitation is that the available data did not allow us to directly assess the effect of a pharmacy or pharmacist class on adverse drug effects, quality of care, and cost of drugs to the consumer and health care system. Instead, we had to rely on the assessments of government officials, association representatives, and other experts in each country. We also did not examine in great detail the individual drug classification decisions made in each country. That is, we did not examine the documentation that supports particular classification decisions to assess how decisionmaking varies between countries. Additionally, because of cost and resource limitations, we did not visit every country included in the study. (We did not travel to Denmark, France, Italy, and Sweden.) Finally, because our focus is on the experiences of other countries and what can be learned from them, we did not assess the principal reason FDA has given for not establishing an intermediate class of drugs—namely, that a public health need for such a class in the United States has not been demonstrated. Consequently, we did not address issues such as the frequency of adverse effects for nonprescription drugs in general and, more specifically, for recently switched drugs in the United States.
Officials from FDA reviewed a draft of this report and provided written comments (see appendix VI). They stated that the report does not consider certain additional requirements that establishing an intermediate class of drugs would impose upon FDA, drug manufacturers, or pharmacists, such as new FDA labeling requirements and additional training of pharmacists. The report discusses other potential additional requirements for pharmacists in chapters 2, 3, and 5. However, we did not attempt to address all additional requirements because a comprehensive assessment was beyond the scope and objectives of our report. An assessment of the additional requirements for FDA and drug manufacturers was also beyond our scope. The following chapters address each of the six evaluation questions. Chapter 2 summarizes studies that have assessed the effects of different drug distribution systems and describes the drug distribution systems in the 10 countries as well as officials’ views on the operation of their systems. It also describes the system in the European Union. Chapter 3 presents information on access to nonprescription drugs in the study countries, including the classification of the 14 drugs. Chapter 4 summarizes the role of pharmacists in each country and examines studies of pharmacists’ behavior in the study countries and the United States. Chapter 5 examines the U.S. experience, focusing on Florida with its Pharmacist Self-Care Consultant Law. Chapter 6 summarizes our findings and presents conclusions. Drug distribution systems differ from country to country. In this chapter, we summarize information from studies on the consequences of the different systems. To show how the United States differs, we describe the drug distribution systems for the 10 countries and the European Union.
Our purpose is to identify the countries that have a pharmacist or pharmacy class of drugs and to examine the possible benefits these countries receive from such a class that the United States, lacking one, does not. Specifically, we answer the following questions:

1. What conclusions can be drawn from studies or reports on the development, operation, and consequences of different drug distribution systems?
2. What is the structure of the drug distribution system in each country?
3. What are the criteria for the initial classification, and subsequent classification changes, of a given drug product in each country?
4. To what extent is the pharmacist or pharmacy drug class used as a transition class for drugs being moved from prescription to general sale?
5. How effective is a pharmacist or pharmacy class in preventing the abuse of drugs?
6. What is the effect on expenditures on a drug when the drug is switched from prescription to nonprescription status?
7. What drug distribution system will be implemented in the European Union?

Little or no analysis has been done to show the advantages and disadvantages of different drug distribution systems. For example, as of March 1995, researchers had not attempted to determine how differences in drug distribution systems may affect health care costs. A number of studies have found significant differences in prescription drug prices across countries, both at the retail and manufacturers’ level. However, as the costs of production and distribution make up only a small share of the total cost of any prescription drug, it is unlikely that differences in distribution systems are major sources of country-by-country differences in drug prices (GAO, 1994a, p. 29). The effect of different drug distribution systems on nonprescription drug prices has not been assessed.
Similarly, no studies have attempted to link the type of drug distribution system in a country to the frequency of adverse drug reactions or to relate different drug distribution systems to the quality of health care. The studies that have been done focus on the experiences of a single country when switching specific drugs and do not attempt to assess the merits of alternative drug distribution systems (Andersen and Schou, 1993; Bytzer, Hansen, and Schaffalitzky de Muckadell, 1991; Halpern, Fitzpatrick, and Volans, 1993; Hansen, Bytzer, and Schaffalitzky de Muckadell, 1991; Hopf, 1989; Perry, Streete, and Volans, 1987; Ryan and Yule, 1990; and Temin, 1992). While some researchers have found health and economic benefits to switching specific drugs in a particular country, no attempt has been made to determine what the effects would have been under a different drug classification system. For instance, would cough and cold remedies have been switched earlier in the United States if an intermediate drug class had been available? If so, what would the benefits have been? If not, are there costs (for instance, adverse drug reactions) that would have been avoided if they had been switched into an intermediate class? There are also no studies that explicitly attempt to link the drug distribution system with the switching of specific drugs. In sum, it is necessary to examine other data to assess how a new class of drugs in the United States might operate. Table 2.1 summarizes the drug classes in the 10 countries, Ontario, and the United States. Note that in the Netherlands and Switzerland, a distinction is made between pharmacies and drugstores. Pharmacies are run by professionals with university degrees in pharmacy. All nonprescription drugs can be sold in pharmacies and prescriptions can be dispensed. Conversely, in drugstores, the principal “drug expert” is the druggist.
Although some training is required to become a druggist, it is not university-based and is not as extensive as that for a pharmacist. In contrast to pharmacies, prescription drugs cannot be dispensed in drugstores, nor can all nonprescription drugs be sold there. In Australia, Canada, and Switzerland, some or all of the power for classifying drugs for distribution rests with the states, provinces, or cantons rather than the national government. For our purposes, it is sufficient to note that drug classification is rather uniform throughout Australia and Switzerland and, therefore, we categorize these systems as being national rather than local. In Canada, since the number of drug classes and classification decisions vary greatly between provinces, we present information on Ontario as well as the national government. As table 2.1 shows, the two-tier system in the United States is unique. All the other countries restrict the sale of at least some nonprescription drugs to pharmacies. France, Italy, and the Netherlands do not allow the sale of any drugs outside pharmacies or drugstores. Although some drugs are available for sale outside pharmacies in Denmark, Germany, Sweden, and Switzerland, this general sale class is quite small. In Australia, Canada (including Ontario), and the United Kingdom, the general sale class is larger than in these 4 countries but smaller than in the United States. The general rationale for restricting the sale of nonprescription drugs is the same in all the countries. Drugs are not typical consumer products; their dangerous aspects mean they should be treated accordingly. A pharmacist can help provide guidance to patients on the proper use of the drugs and, thereby, reduce the possibility of adverse effects. All 10 countries and the United States generally use a drug’s safety, efficacy, and quality as the criteria for approving it. Each country then uses related criteria for determining the drug’s distribution class.
For instance, among the criteria the United Kingdom uses when switching a drug from prescription to pharmacist class is that the medicine has an acceptable margin of safety during unsupervised use, including safety in overdose or following accidental misdiagnosis. Officials in the United Kingdom also told us that when making classification decisions, they take into account the role that pharmacists are expected to play. Among the criteria Denmark uses is that the drug should be available by prescription for 2 years without problems before it is switched. (A detailed comparison of the specific classification criteria was beyond our scope.) Over the last 15 years, the number of drugs switched from prescription to nonprescription status has increased in the United States. In fact, this is a worldwide trend. Despite arguments for a transition class in the United States, an intermediate class is not frequently used as a transition class in the study countries. It is operative only in Australia (and there was some support for it by government officials in Ontario and by Canadian national pharmacy association officials). In the Australian state of Victoria, after a drug is switched from prescription class to pharmacist class, officials watch for reports of adverse drug effects (they do not actively track users of the drug). If reports do not materialize, they consider switching the drug from pharmacist to pharmacy class. It is important to emphasize that even when the class is considered a transition class, the goal is not to allow the drug to be sold outside pharmacies. One Australian official told us that she could remember only paracetamol (acetaminophen in the United States) being moved into the general sale category. In Canada, although some government and pharmacy officials told us they support the general idea of a transition class, the intermediate class is not generally used in this manner.
Some manufacturers' officials were concerned that drugs could get "stuck" in a transition class. They said that ibuprofen was switched in the provinces in 1989 out of prescription class into pharmacist class, where it was supposed to remain for only a short time, but it remains there today, 6 years later. More generally, a Canadian official questioned whether a transition class would allow drugs to be switched from prescription status faster if the data package for switching remained the same. Only by altering the package could the process be made faster: either fewer or shorter tests would be required or drugs would have to be switched before the tests were completed. The same official raised the issue of the usefulness of the data that might be gathered through a transition class. There would be no controls in the studies. The official thought that because of the lack of controls, the studies would provide little useful information. A U.S. manufacturer echoed this idea and stated that FDA responds to randomized, double-blind studies in which the experimental drugs are compared to placebos. (In a double-blind medical experiment, neither the patients nor the persons administering the treatment and recording the results know which subjects are receiving the drug and which are receiving the placebo.) This allows the effectiveness and adverse effects to be accurately assessed. A transition class would not provide this type of study. An official in the United Kingdom stated that, theoretically, new adverse reactions could be found when a drug is switched to a pharmacist or pharmacy class but that, as a practical matter, the adverse-effect profile for a drug is established by the time a drug is switched. In the other countries we visited, the intermediate classes were not transition classes but permanent ones. There was no certainty that the drugs would be assessed for reclassification after a period of time.
Thus, little helpful information is available from other countries as to whether or how a transition class might speed the switching of drugs. If a transition class is to play a role in speeding approval of a change from prescription to nonprescription status, it must regularly employ a system to track adverse effects. Without this information, the class could not aid FDA in assessing a drug for general sale. Tracking studies would help link drug use (or at least purchases) to adverse effects. They could also give some indication of the pattern of use in the population. Two difficulties with such a recordkeeping requirement are the time burden it places on pharmacists and the likelihood of increased costs. Proponents of an intermediate class of nonprescription drugs argue that limiting the availability of certain drugs to pharmacies would impede abuse. For example, the pharmacist would be expected to intervene if a customer wanted to purchase inordinate amounts of a drug (either at one time or over a period of time) or if the customer appeared to have no medical need for it. The class could be used in two ways. First, for drugs being switched from prescription to nonprescription status, abuse could be studied and a decision made at a later time on appropriate classification. Second, nonprescription products that were being abused could be moved back to the intermediate class for some safeguards. The advantage of moving a drug from general sale to an intermediate class is that it would still be available to customers for legitimate uses. Although access would be restricted to pharmacies, the added impediment of a prescription would not be required. Currently, if access is to be restricted, the drug must be moved to prescription class. The usefulness of an intermediate class to prevent drug abuse has not been demonstrated. We identified no studies that addressed the general issue of using an intermediate class to deter drug abuse.
Few government and pharmacy officials whom we spoke with in the United States and abroad thought that an intermediate class would be completely successful in doing so. They agreed that it would be quite easy for an individual who wanted a large amount of a drug simply to visit several pharmacies and buy what appears to be a reasonable amount in each one, thereby avoiding potential surveillance. Having to deal with a pharmacist might be an impediment, as would the necessity of visiting several pharmacies; however, it would not be overly difficult to get around the system. The difficulties in using a pharmacist class to prevent abuse can be illustrated by experience in New South Wales, where Australian truck drivers were taking ephedrine to try to stay awake. At the time, the drug was restricted to sale by pharmacists. New South Wales officials decided to move the drug back to prescription status, and eventually the other Australian states followed their lead. In this case, since restricting the sale of ephedrine to pharmacists did not prevent abuse, officials thought it necessary to put tighter controls on the product. Similarly, a study in Germany indicated the difficulty of preventing the sale of nonprescription drugs even when they are restricted to pharmacies (Product Testing Foundation, 1991). Children between the ages of 10 and 14 were sent to pharmacies to see how easily they could purchase nonprescription medications containing alcohol. In all 54 pharmacies the children visited, they were allowed to purchase the drugs. In only one case was the purchaser questioned intensively. The consumer association that did the study criticized the pharmacists, and the pharmacy association called the results “lamentable.” Much of the discussion about the proposed roles for an intermediate drug class has centered on public health issues. 
For example, a primary concern has been the effect of an intermediate class on consumers’ ability to use pharmaceuticals safely and effectively. In addition, an intermediate class of drugs would also have an economic effect. Establishing a pharmacy or pharmacist class could affect the price and availability of drugs to consumers and might also alter the revenues or profits of both manufacturers and retailers. Pharmacy experts in the United States told us that drugs cost less as nonprescription than prescription medicines, although initially the nonprescription cost may be higher than was the prescription price. Ibuprofen is an example. However, the experiences of other countries do not clarify what the economic effect of establishing an intermediate class of drugs would be in the United States. The few studies that have been done focus on the switching of particular drugs in particular countries. The studies do not generalize beyond the study country and do not attempt to determine the effect of the presence or absence of a pharmacist or pharmacy class. Ryan and Yule (1990), examining the economic benefits of switching loperamide (an antidiarrheal) and topical hydrocortisone from “prescription only medicines” to “pharmacy medicines” in the United Kingdom, found that the costs of obtaining each drug decreased after the products were switched. However, in the United Kingdom (and all the study countries), prescription drug prices are controlled in some manner by the government. Nonprescription drug prices generally are not, although some are controlled if the drugs are purchased with a prescription. Therefore, a comparison of drug prices before and after a switch is not a comparison of two free markets. Because there is no U.S. government price control, a comparison of drug prices in the study countries before and after switching would not yield useful insights for the United States. 
(Thus, the Ryan and Yule findings do not necessarily indicate what would occur in the United States if a drug were switched to an intermediate class.) When Temin (1992) studied the costs and benefits of switching cough-and-cold medicines in the United States, he found that visits to doctors for common colds fell by 110,000 per year (from 4.4 million) from 1976 to 1989, coinciding with the switching of the medicines. After ruling out other possibilities, he concluded that the decrease in physicians' visits was attributable to the switching of these drugs. He estimated this to be a saving of $70 million per year. Although there is thus some evidence of cost savings from switching drugs, the effect of an intermediate class of drugs has not been assessed. Ryan and Yule did not assess what the savings would have been if loperamide and topical hydrocortisone had been sold outside pharmacies. Temin did not study how the savings would have been different if cough-and-cold medications had been restricted to sale by pharmacists. Therefore, while the studies do indicate potential savings from switching drugs, we cannot use them to assess empirically the relative savings from different drug distribution systems. Our interviews with officials in the study countries indicated that the cost savings from fewer physicians' visits may not be as great as expected. They said that many patients do not pay the full price for a prescribed drug. For instance, an insured patient might have only a $5.00 copayment for a prescription drug while having to pay the full price for a nonprescription product. Patients might thus have an incentive to go to doctors for a prescription. It could be for either a different but therapeutically equivalent product or the original drug if insurance covers it. The latter has occurred in Denmark with antiulcer medications that were switched in 1989.
Bytzer, Hansen, and Schaffalitzky de Muckadell (1991) estimated that only 3 percent of the sales of cimetidine and ranitidine were made without some medical assessment or control. In Germany, approximately half of nonprescription drug sales are prescribed and reimbursed. A somewhat similar situation exists in the Netherlands with respect to acetaminophen. This drug can be purchased without a prescription as a general pain reliever; however, it is also commonly used as a pain killer for cancer patients and, in fact, is the most prescribed drug in the country. When it is prescribed, it is reimbursable. An official told us that this results in consumers being able to get their headache remedy free of charge. The economic effects of an intermediate class of drugs depend on several different factors and the current literature does not provide a comprehensive analysis of them. A complete treatment of economic issues was outside our scope. In the remainder of this section, however, we briefly illustrate some of the unresolved economic issues in assessing proposals for an intermediate class of drugs. The economic effect of an intermediate class of drugs would largely depend on how this class were structured and used—that is, whether it was a transition or a permanent class and, if the latter, whether the drugs in this class were coming largely from the prescription or the nonprescription category. For example, if drugs were moved to pharmacy or pharmacist class from prescription status, then the drug choices available to consumers without a prescription would increase. However, if drugs were largely moved to the intermediate class from the general sale category, then these drugs would be less widely available to consumers because fewer retail outlets could sell them (although they would still be available without a prescription). A major unresolved question is how the availability of a pharmacist or pharmacy class would affect pharmaceutical prices. 
Depending on the structure of the new class, several factors might strengthen or soften its effect. The following four examples provide an illustrative, but not comprehensive, list of scenarios that could play out if the United States adopted an intermediate class of drugs. The availability of an intermediate class of drugs might prompt a change in manufacturers’ pricing patterns. For example, if the introduction of an intermediate class permitted a drug to be switched from prescription status, the price might decline. If drugs were switched from general sale to the intermediate class, they would be available in fewer retail outlets. It is possible that the decrease in the number of retailers selling these drugs could adversely affect retail competition and, as a result, drive up prices. However, the availability of mail-order pharmacies and other outlets (provided they sold the drugs in the intermediate class), and the likelihood of new pharmacies opening, could mitigate or eliminate this effect. If drugs were moved to the intermediate class from the general sale category, the greater role of pharmacists might lead to higher prices if a counseling fee were implemented. The effect of an intermediate class of drugs on consumers’ out-of-pocket drug expenses would depend on the behavior of third-party payers such as health insurers, which often pay all or most of the cost of prescription drugs but generally do not pay for over-the-counter products. If insurers elected not to reimburse consumers for drugs that were moved from prescription status to an intermediate class, consumers’ out-of-pocket expenditures would increase. However, if fewer drugs were reimbursed, health insurance costs might decrease and partially or fully offset consumers’ greater out-of-pocket drug expenditures. An intermediate class of drugs could also produce savings in other health care costs. 
The cost of obtaining a prescription drug includes not only the cost of the drug itself but also the cost of the visit to a physician. Patients would be saved the cost of the visit to the physician for a pharmacy- or pharmacist-class drug. While this is potentially true for new prescriptions, cost savings for refilling prescriptions is less clear, since refills are often ordered on the telephone. The 15 member countries of the European Union are moving toward the creation of a single international market, without barriers to the free movement of goods, services, persons, or capital. One aspect of this is the harmonization of requirements governing the manufacturing and marketing of pharmaceuticals. Regulatory authority rests with Directorate General III "Industry." Section III-E-3 deals with pharmaceutical products. Decisions of the European Union must be approved by a vote of the member countries. "Medicinal products shall be subject to medical prescription where they: —are likely to present a danger either directly or indirectly, even when used correctly, if utilized without medical supervision, or —are frequently and to a very wide extent used incorrectly, and as a result are likely to present a direct or indirect danger to human health, or —contain substances or preparations thereof the activity and/or side effects of which require further investigation, or —are normally prescribed by a doctor to be administered parenterally." The directive goes on to state that "Medicinal products not subject to prescription shall be those which do not meet the criteria established in Article 3." Despite this directive, the member countries will retain the authority for classifying drugs into prescription and nonprescription classes. This power will not be transferred to the European Union. Nonetheless, the expectation is that because of the EU classification criteria, drugs will increasingly be classified the same way throughout the union.
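The directive's classification rule quoted above can be read as a simple disjunction: a product meeting any one of the four criteria is subject to medical prescription, and a product meeting none of them is nonprescription. The following minimal sketch makes that logic explicit; the function and parameter names are ours, chosen for illustration, and are not terms from the directive itself.

```python
# Illustrative sketch of the EU prescription criteria quoted in the text.
# A medicinal product is subject to prescription if ANY of the four
# quoted conditions holds; otherwise it is nonprescription.
# All names below are hypothetical labels for the directive's criteria.

def subject_to_prescription(
    danger_even_when_used_correctly_without_supervision: bool,
    widely_misused_with_resulting_danger: bool,
    activity_or_side_effects_need_further_investigation: bool,
    normally_administered_parenterally: bool,
) -> bool:
    return any([
        danger_even_when_used_correctly_without_supervision,
        widely_misused_with_resulting_danger,
        activity_or_side_effects_need_further_investigation,
        normally_administered_parenterally,
    ])

# A product meeting none of the criteria is nonprescription.
print(subject_to_prescription(False, False, False, False))  # False
# Meeting any single criterion is sufficient for prescription status.
print(subject_to_prescription(True, False, False, False))   # True
```

Note that this rule only separates prescription from nonprescription products; as the text goes on to explain, it says nothing about how a member country subdivides its nonprescription drugs into pharmacist, pharmacy, or general sale classes.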
It is expected that classification into prescription and nonprescription classes will become harmonized throughout the European Union in the next 15 to 20 years. However, the European Union has decided not to impose a particular drug distribution system on the member countries. It will be up to each country to determine the number and nature of nonprescription drug classes in it. If a country decides that it wants to restrict the sale of nonprescription drugs to pharmacies, this will be allowed. Similarly, if a country wants to allow the sale of some or all nonprescription drugs outside pharmacies, it may do so. Thus, despite the European Union’s developing criteria to distinguish prescription from nonprescription products, member countries can have more than two drug distribution classes. An EU official with major responsibilities for and involvement in the directive told us that the reason the European Union decided not to require a particular drug distribution system was that sufficient evidence did not exist to recommend one system over another. EU officials were not convinced that restricting drug sales to pharmacies was a commercial barrier to trade. Conversely, they were not convinced that allowing the sale of drugs outside pharmacies would increase health concerns. We were told that as long as a country’s requirements are the same for both domestic and foreign entities, the European Union will accept its drug distribution system. Drug distribution systems are seen, in part, as a function of tradition. Member countries were unwilling to give up their current systems. In general, the northern European countries are less restrictive on the sale of medications than are the southern countries. The northern countries did not want to restrict the sale of all nonprescription drugs to pharmacies, while the southern countries did not want to allow their sale outside pharmacies. 
In the absence of sound evidence to support one system as superior to the other, the European Union decided to allow the countries to determine their own individual systems. While there will be no required changes in the number and type of drug classes in a country, officials in the Netherlands told us that they are planning to adapt to EU guidelines by moving from a three-tier to a two-tier system. Their plan is to combine the pharmacist and drugstore classes into one class and allow the drugs to be sold in both locations. It was noted that some nonprescription drugs currently restricted to sale by pharmacists will be moved back to prescription status. Officials of the Netherlands indicated that they perceived no major, consistent benefit from requiring that a large category of nonprescription drugs be available only from pharmacists. Proponents of an intermediate drug class argue that access to pharmaceuticals would increase in the United States if an additional nonprescription drug class, either fixed or transition, were established. Opponents argue that access would decrease. The actual change in access would depend on how the intermediate class were used. In general, access would decrease if (1) drugs that are currently available without a prescription were to be moved into the intermediate class or (2) drugs that would have been switched to general sale were instead placed into the intermediate class. However, access would increase if (1) drugs that would have been moved back to prescription status were placed in the intermediate category or (2) the effect of an intermediate class were to allow drugs to move into it that could not be moved into the general sale class. In that case, while the number of outlets (54,000 pharmacies) selling the product would not change, accessibility would increase because a prescription would no longer be necessary. Beyond these general observations, it is unclear exactly how access would change.
No studies have assessed this issue and, moreover, it would be very difficult to do so. A complete understanding of how access would be affected would require assessing a number of factors, including the number of drug outlets that would sell the drugs, how the class would be used, and the number and nature of drugs that would be placed in it. None of these can be precisely predicted. In this chapter, we report our comparison of access to nonprescription drugs in the United States with that in the study countries. Making this comparison helped us understand what the effects of an intermediate class might be in the United States regardless of whether a fixed or transition class were established. We focused on the following three aspects of access: the number of community pharmacies and drugstores in each country, the availability of nonprescription drugs by self-selection, and, more generally, the classification of particular drugs as either prescription or nonprescription products. In particular, we answer the following questions: 1. How many pharmacies and drugstores are there in each of the study countries and the United States? 2. In the study countries, can consumers select nonprescription drugs themselves, or must they request such drugs from a pharmacist? 3. How does the classification of the 14 drugs we selected vary between the study countries and the United States? The drugs include a number of pain relievers, antiulcer medications, and allergy medicines (see appendix IV). Their classification varies from country to country, and all have been either switched or mentioned as candidates for switching in the United States or another country. Private sector officials in the United States indicated that the 14 are a good list for getting a general indication of the access to nonprescription drugs in a country. 
However, it is not possible to generalize from this list about drug classification in a country—that is, the classification of these drugs does not necessarily indicate the overall availability of nonprescription drugs in a country. Instead, the drugs should be viewed as examples of differences between countries. The number of community pharmacies can give some indication of how available intermediate-class drugs would be in the United States. However, there are a number of other drug outlets that could increase the availability of these products, including government, managed care, and mail-order pharmacies. If these outlets were to sell intermediate-class drugs, consumers would not have to go to a community pharmacy to purchase them. However, we cannot be certain that all or any of these potential outlets would choose to sell the drugs. Thus, any analysis of how accessible intermediate-class drugs would be is limited by uncertainty over the number of outlets. Similarly, any comparison between countries of the number of drug outlets must note that in some countries physicians are permitted to dispense drugs where there is no convenient pharmacy. For example, in France and Italy physicians are allowed to dispense nonprescription drugs in rural areas where no pharmacy is available. The effect is to increase the number of drug outlets for nonprescription drugs and, hence, their accessibility. For countries that have more specialized drug outlets than the United States, physicians' dispensing would widen the difference from the United States; for countries that have fewer pharmacies than the United States, it would narrow that difference. If the United States were to allow physicians to dispense intermediate-class drugs where no pharmacy was available, this would also reduce inconvenience but negate one rationale (not having to visit a physician to receive the drug) for such a class of drugs. To get some indication of how many U.S.
outlets would be able to sell these drugs and how similar this is to other countries, we compared the number of community pharmacies per capita in the United States with a comparable measure in other countries. We found that the United States has considerably fewer community pharmacies or drugstores per capita than 6 of the countries. (See table 3.1.) However, only Denmark, with one pharmacy for every 17,500 residents, and Sweden, with one for every 10,200 residents, have substantially fewer pharmacies per capita than the United States, which has one for every 4,800 residents. The United Kingdom, Canada, and Ontario have a similar number per capita to the United States. This gives some indication that restricting nonprescription drugs to sale in pharmacies might be more of an inconvenience in the United States than it is in 6 of the countries we studied. If drugs in the intermediate class were to come from the general sale rather than prescription class, change in access to these products would depend on not only the number of community pharmacies but also their distribution. In some parts of the country, the nearest pharmacy can be 100 or more miles away. Even within a city, the number of pharmacies varies between neighborhoods, and nonpharmacy drug outlets generally sell fewer products than do pharmacies. Therefore, people with a nearby pharmacy already have an advantage in the number of nonprescription products readily available to them. Moving drugs from the general sale class to an intermediate class could make this difference somewhat larger. The number of outlets selling the drugs would decrease, and individuals with easy access to pharmacies would find these drugs readily available to them while those without accessible pharmacies would not. 
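The pharmacy-density comparison above can be restated as pharmacies per 100,000 residents, which makes the gap between the countries easier to see. The short sketch below uses only the three residents-per-pharmacy ratios quoted in the text (Denmark, Sweden, and the United States); the other countries' figures appear in table 3.1 and are not reproduced here.

```python
# Residents per community pharmacy, as quoted in the text.
residents_per_pharmacy = {
    "Denmark": 17_500,
    "Sweden": 10_200,
    "United States": 4_800,
}

# Convert each ratio to pharmacies per 100,000 residents.
per_100k = {c: 100_000 / r for c, r in residents_per_pharmacy.items()}

# List countries from least to most dense.
for country, rate in sorted(per_100k.items(), key=lambda kv: kv[1]):
    print(f"{country}: {rate:.1f} pharmacies per 100,000 residents")
```

On these figures, Denmark has roughly 5.7 and Sweden roughly 9.8 pharmacies per 100,000 residents, against roughly 20.8 in the United States, which is why only those two countries are described as having substantially fewer pharmacies per capita than the United States.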
However, moving drugs from the prescription class to an intermediate class would not change the number of outlets (that is, pharmacies) selling them (assuming noncommunity pharmacies chose to sell the products), and, therefore, the difference in access for individuals with readily available pharmacies and those without would remain the same. It would still be necessary to go to a pharmacy to purchase the drug. The difference would be that a prescription would not be required. Moreover, introducing an intermediate class of drugs in the United States would constitute a large change in nonprescription drug distribution since the more than 690,000 nonpharmacy drug outlets would not be allowed to sell these products. Consumers would have to learn that not all nonprescription drugs could be sold in all retail outlets. Individuals who wanted to purchase a drug in the intermediate class would need to know that it was necessary to purchase the drug at a pharmacy. This would affect all residents, regardless of location. In the United States, nonprescription products (except for controlled substances available without a prescription and insulin) are generally available by self-service—that is, consumers can select their nonprescription products from the shelves personally. Consumers have the power to choose their own nonprescription drug regimen by comparing different products on such items as dosing, side-effects, and price. Of course, if they are in a pharmacy, they can always ask the pharmacist for information or advice. In other countries, self-selection of pharmaceuticals is limited to certain drug classes or not allowed at all. Table 3.2 summarizes the direct availability of nonprescription drugs to consumers. The table shows that the ability to choose one’s own drugs is limited, except for drugs available outside pharmacies (in countries where this is allowed). 
Only in Australia, Canada (as determined by the individual provinces such as Ontario), and Sweden is self-service allowed for some or all pharmacist or pharmacy drugs. If the United States were to follow the general pattern in other countries of not permitting self-service for pharmacist- or pharmacy-class products (as is done for controlled substances available without a prescription and insulin in some states), purchasing these products would be much different from purchasing other nonprescription drugs. Consumers would not only be unable to buy the products in outlets such as convenience stores and gas stations but would also find it more difficult to compare products if they could not select pharmacist- or pharmacy-class drugs directly from the shelf. One of the principal benefits cited by proponents of an intermediate class is that the number of products available without a prescription would increase because FDA would have the option of putting drugs in a class that provides for consumer counseling (National Consumers League, 1991). To see if there is a pattern of greater nonprescription availability in countries that have a pharmacist or pharmacy class, we examined the classification of 14 drugs in the study countries and the United States. (See appendixes III and IV.) These drugs have either been switched or mentioned as candidates for switching in the United States or in another country, but they are only examples meant to illustrate differences between the countries. It is not possible to generalize from them to the entire drug classification system in a country. Our analysis shows only that the presence or absence of a pharmacist or pharmacy class has no consistent effect on drug classification. It is unclear what effect establishing an intermediate class in the United States would have on the classification of drugs as prescription or nonprescription products. Specific examples illustrate how classification varies between countries. 
Ibuprofen is available for general sale in the United States but, although it is a nonprescription product in 10 other countries, its distribution is limited to specialized drug outlets in all of them. Naproxen is also available for general sale in the United States but as a nonprescription product in only 2 of the 10 countries. For these two drugs, the United States clearly has the most open system, and the lack of an intermediate class has not prevented their being switched here. In fact, the United States was among the first to classify ibuprofen (1984) and naproxen (1994) as nonprescription products. However, for other drugs the United States is more restrictive. Of the 10 countries studied, only France, Italy, and Sweden, like the United States, do not allow nonprescription sale of the antihistamine terfenadine. Similarly, only Germany, Sweden, and the United States do not allow the nonprescription sale of promethazine, another antihistamine. In the countries that do allow nonprescription sale, these drugs are available in either a pharmacist or pharmacy class; the U.S. system is less open than theirs. It is unclear whether the theoretical safeguards associated with a pharmacist or pharmacy class would be sufficient for regulators to switch these drugs from prescription to nonprescription status. It is thus unclear whether establishing an intermediate class of drugs in the United States would allow more drugs to be switched, since the United States already classifies some drugs as nonprescription that other countries that have a pharmacist or pharmacy class still restrict to prescription class. What is clear is that other factors in addition to or instead of the existence of a pharmacist or pharmacy class account for differences in drug classification between the study countries.
An assessment of the relative openness of the current drug distribution system in the United States compared to the other countries studied depends on one’s definition of “access.” If access is defined by the availability of drugs for general sale, the United States appears to have the most open system, since more of the 14 drugs are available for sale outside pharmacies than in any of the other countries. However, if access is defined by the availability of drugs for nonprescription sale regardless of where they can be sold, the United States falls somewhere in the middle. Some countries have more of the 14 drugs available without a prescription than the United States does, but others have fewer. Many of the theoretical benefits associated with a pharmacist or pharmacy class of drugs (whether a fixed or transition class) involve improving drug use or, conversely, reducing misuse. The assumption is that pharmacists will pass on to consumers the information they need to take a drug properly. Critics of an intermediate class in the United States do not question the potential value of pharmacists’ relaying information to consumers but do not believe that an additional drug class is necessary to accomplish this. In this chapter, we describe the role pharmacists play in monitoring the use of pharmacist- and pharmacy-class drugs in the study countries. We focus on pharmacist practices that would have to be engaged in for a fixed or transition class to be effective. We also report on selected aspects of pharmacy practice in the United States, including counseling on nonprescription drugs. Specifically, we answer the following questions:

1. Are pharmacists in the 10 countries required by law to counsel consumers on the proper use of nonprescription drugs?
2. What are the legal sanctions for failing to provide counseling?
3. What do studies show about whether pharmacists in the study countries and the United States counsel purchasers of nonprescription drugs, and what is the quality of that counseling?
4. What are the requirements and practices of pharmacists in monitoring adverse drug reactions and maintaining patient profiles?
5. How might recent developments in the practice of pharmacy affect the counseling behavior of pharmacists in the United States?

One reason proponents commonly give for limiting nonprescription drugs to sale in pharmacies (even if no counseling is required) is that it allows customers to ask for advice if they want it. Table 4.1 summarizes the counseling requirements for nonprescription drugs in the 10 study countries and Ontario. Only in Australia, Denmark, Germany, and Italy are pharmacists required to provide information to patients on the use of nonprescription drugs. In Australia, these requirements vary by state: some states require counseling on pharmacist-class drugs but others do not. For instance, in Victoria, the pharmacist is required to speak with the patient every time a pharmacist-class drug is sold. In Denmark, Germany, and Italy, the pharmacist is required to provide information to patients on their medications; however, there are no specific counseling requirements. In Ontario and the United Kingdom, nothing is required beyond the pharmacists’ supervision of sales. In France, the Netherlands, and Switzerland, pharmacists need merely be physically present on the premises of the pharmacy. In Sweden, while the pharmacist is expected to promote proper drug usage, there is no requirement that a pharmacist be present when a nonprescription drug is sold. There are no national counseling requirements in Canada. In the 6 countries we visited—Australia, Canada, Germany, the Netherlands, Switzerland, and the United Kingdom—and Ontario, there is some enforcement of the requirements for pharmacists selling nonprescription drugs, but it is somewhat limited. 
Enforcement is sometimes carried out by a professional association and is sometimes focused on the physical aspects of the pharmacy rather than on the counseling of patients. The number of inspectors is sometimes small, and nonprescription drugs can receive less emphasis than prescription products. Counseling requirements are set by the states in Australia. Officials in the state of Victoria told us that enforcement is done primarily through three pharmacy inspectors of the Pharmacy Board of Victoria on the basis of professional standards. One reason the board, rather than the state, enforces the law is that the board’s standard of proof is less stringent, making it easier to discipline recalcitrant pharmacists: the state’s standard of proof, “beyond a reasonable doubt,” is replaced by the less strict “balance of probabilities.” The pharmacy board brings its case before pharmacy representatives, who may impose penalties ranging from letters of admonition and fines to temporary suspension or permanent cancellation of a pharmacist’s registration. There are three or four suspensions or cancellations per year. We were told that generally there is not a great deal of enforcement in Australia unless there are complaints or drug abuse concerns. Enforcement of pharmacist requirements in Germany is done at the state and regional level and focuses on the physical aspects of the pharmacy rather than on the behavior of pharmacists. Inspectors check such items as the cleanliness of the pharmacy, proper storage of medicines, size of the laboratory, availability of instruments, and orderliness of records. In the Netherlands, the State Public Health Inspectorate supervises all matters relating to the sale of drugs. Pharmacists must give inspectors access at any time to examine the pharmacy and everything in it. If inspectors find that the pharmacy is not operating in accordance with the law, they inform the pharmacist and stipulate a time within which the problem must be corrected. 
We were unable to determine the amount of effort put forth in identifying violations of counseling requirements for nonprescription products. In Switzerland, each canton has a pharmacist organization that conducts inspections. Inspectors examine the shop and laboratory to determine whether they are in accordance with regulations. They also check to see whether the pharmacist is present when the pharmacy is open, as required by law. In the United Kingdom, pharmacy medicines are to be sold only under the supervision of a pharmacist. This is normally defined as being present, aware of the transaction, and in a position to intervene. Enforcement of the law is not by the government but by the Royal Pharmaceutical Society of Great Britain. The society has 18 pharmacy inspectors and 2 inspectors for nonpharmacy drug outlets. This works out to about 650 to 700 pharmacies per inspector. We were told that a large number of cases are brought to the attention of the Royal Pharmaceutical Society every year by competitors and consumers. After the society visits the pharmacy to meet with the pharmacist, it decides whether to handle the case informally or to take formal evidence. Often it sends only a warning letter. Approximately 15 cases a year are prosecuted. Additional cases (6 in 1993) are dealt with through the pharmacy code of ethics. However, we were told that the society is unlikely to take action over the sale of pharmacy-class drugs (for instance, the sale of a pharmacy medicine without appropriate counseling). Overall, Royal Pharmaceutical Society officials thought that a great deal of effort was put into identifying violations of laws and regulations concerning purchases of nonprescription drugs. Government officials told us that enforcement of pharmacy practice requirements is successful mainly as a deterrent: pharmacists are aware of the law and try to stay within it. In Ontario, a pharmacist (or an intern) must make the “decision to sell” a pharmacist-class drug. 
This is generally defined as the pharmacist’s being “aware of the sale.” There is no requirement that pharmacists actually speak with the patients. Enforcement is done by the Ontario College of Pharmacists, a professional and regulatory association. Officials told us that compliance with the law is minimal. There is no method for monitoring pharmacist interventions other than through consumer complaints to the college, which are then investigated. We asked pharmacy officials in the countries we did not visit how much effort is put forth in enforcing nonprescription drug counseling requirements. Officials in France and Denmark told us that “moderate” effort is put into enforcing counseling requirements in those countries; Swedish officials said that there is “some” effort. In Italy, there are no sanctions against pharmacists who do not counsel patients on the use of nonprescription drugs. Officials noted that the enforcement of counseling requirements can be problematic. It is difficult to determine what is or is not appropriate counseling behavior. Appropriateness needs to be assessed case by case. What appears to be a lack of counseling might reflect a legitimate judgment by the pharmacist, such as that a particular customer regularly uses the drug and does not need counseling on it. This makes enforcement of counseling requirements quite difficult. Various academics, consumer groups, and pharmacy associations have conducted studies of the behavior of pharmacists when they sell nonprescription drugs. Typically, participants in a study go to a pharmacy and attempt to purchase a particular nonprescription product or describe their symptoms (or those of the person for whom they are buying the product), seeking advice from the pharmacist on what drug to purchase. Each shopper has been trained by the investigators to act in accordance with a script developed for the study. 
The pharmacist’s advice is recorded and compared to what the pharmacist should have done according to criteria determined by a group of experts. We refer to these investigations as trained shopper studies. Other common study designs are investigators’ observation of pharmacists’ behavior and pharmacists’ completion of a questionnaire on their counseling activities. Table 4.2 lists the pharmacist counseling studies, their methodologies, and what they assessed. Studies have not been conducted in all the countries. While the studies vary considerably in design and objective, a number of common themes are evident. Despite differences in the law and regulations across countries, counseling is generally incomplete and infrequent. Estimates of the frequency of pharmacists’ counseling on nonprescription products (that is, the percentage of patients receiving advice) ranged from 11.1 percent in Sweden (Marklund, Karlsson, and Bengtsson, 1990) and 12.3 percent in Canada (Taylor and Suveges, 1992a) to 93.75 percent in Germany (Product Testing Foundation, 1991). Germany’s was by far the highest estimate. The second highest, based on self-reports of pharmacists, was 37.6 percent in the United Kingdom for single proprietor pharmacies (Phelan and Jepson, 1980). (The lowest estimate for the United Kingdom was 21 percent for chain pharmacies, also found by Phelan and Jepson.) However, even in Germany, the researchers generally thought that too little counseling was being done. In one third of the cases in Germany, only one piece of information was being passed to the consumer. An Australian study found that the vast majority of pharmacists thought that they should counsel for both prescription and nonprescription medications (Ortiz et al., 1984b). However, pharmacists gave a number of reasons for not counseling. The three most important were lack of adequate medical histories, lack of feedback from the person counseled, and the belief that counseling may not be necessary. 
Another reason counseling may not occur is that customers may not want it. In Canada, Taylor and Suveges (1992a) found that 195 of 207 customers who did not receive advice on a nonprescription product indicated that they did not want counseling. The main reasons they gave were that they had “used medicine before with good results” and “had already received advice elsewhere on what to buy.” Regarding the quantity of counseling (that is, the availability of pharmacists to counsel, the number of counseling events per day, and the time spent counseling), a Canadian study found that pharmacists responded to requests from patients for advice on nonprescription drugs an average of 2.8 times a day (Poston, Kennedy, and Waruszynski, 1994). The range between pharmacies was from 0.07 to 38.64 counseling events per day. A study in Australia found that 23 percent of pharmacist counseling activities involved OTC medications (Ortiz, Thomas, and Walker, 1989). (This was the second most frequent counseling activity behind giving advice on prescribed medications.) Patients initiated the counseling in 259 of 438 cases. In 394 of the cases, counseling took 2 minutes or less. The quality of counseling was somewhat mixed. Recommended products and advice (when given) were generally found to be appropriate. Willison and Muzzin (1992) found that in Canada the quality of advice varied by ailment, with patients receiving better advice on less complex problems. For three of four scenarios in which the use of a prescription medication was not involved, the percentage of patients receiving totally safe and appropriate advice ranged from 62 to 77 percent. For the fourth scenario, only 17 percent received such advice. In Germany, the Product Testing Foundation (1991) found that pharmacists’ explanations tended to be accurate for preparations requiring special explanations (for instance, appetite suppressants and iron preparations) and that performance had improved since 1984. 
There are also examples of inappropriate advice being given. For instance, Goodburn et al. (1991) found that pharmacists in the United Kingdom gave inappropriate advice for the treatment of childhood diarrhea 70 percent of the time. In Germany, Glaeske (1989) found that 61 percent of all nonprescription products sold were ineffective or presented dangers to the uninformed user. In all the countries where studies have been conducted, researchers found that information-gathering and advice were often incomplete (that is, the information given was appropriate but not everything that should have been covered was discussed). In Australia, Feehan (1981) found a lack of information-gathering on patients’ characteristics. For instance, 25 of 43 pharmacists were prepared to sell a weight-reduction product without checking on the patient’s health or to see whether she was taking other medications. Glaeske (1989) reported that in Germany no pharmacist asked all the questions considered to be essential. For instance, not one trained shopper who was a woman was asked if she was pregnant or lactating. Consultation on side effects was unsatisfactory—for example, such simplistic statements as “every medication has side effects” and “there are no side effects” were sometimes made. In a 1991 study, the Consumers Association (1991) of the United Kingdom found that customers were not adequately questioned. Only 10 percent of pharmacists asked the trained shoppers what other medications they were taking. Studies in Australia (Harris et al., 1985), Canada (Willison and Muzzin, 1992), and the United Kingdom (Smith, Salkind, and Jolly, 1990) found a wide range of skills and performance between pharmacists. Feehan (1981) in Australia and Willison and Muzzin (1992) in Canada thought that this could indicate a shortcoming in pharmacists’ education for dealing with patients and that there is a need to strengthen their clinical interviewing skills. 
Interestingly, Smith, Salkind, and Jolly (1990) in the United Kingdom found that pharmacists’ counseling was either very good or very poor; few pharmacists were in the middle. The studies generally found that pharmacy practice has improved, with more and better counseling being given over time. This held both when the same organization collected the same data at different times (Product Testing Foundation, 1984 and 1991) and when the results of different studies over time were compared (Willison and Muzzin, 1992). The results of studies in the United States of pharmacist counseling on nonprescription drug use are quite similar to the findings in other countries. However, no studies in the United States have assessed the frequency of pharmacist counseling on these products. Three studies assessed some aspect of the quantity of counseling. In a mail survey, Carroll and Gagnon (1983) found that 96 percent of households said the pharmacist was available to answer questions about nonprescription medications half the time or more. Meade (1992), reporting on a study conducted for APhA, noted that 69 percent of pharmacists said they counsel patients 10 or more times per day on nonprescription products, well within the range reported in Canadian pharmacies. Another survey conducted for APhA (Market Facts, 1994) indicates that pharmacist counseling for nonprescription drugs is increasing. The 1993 National Prescription Buyers Survey found that the percentage of respondents who had ever asked a pharmacist for advice about a nonprescription drug had increased from 37 percent in 1979 to 64 percent in 1993. (There was evidence that interactions with pharmacists for prescription advice had increased as well.) The other U.S. studies in table 4.2 examined the quality of counseling. In the 1960’s and early 1970’s, two studies examined pharmacists’ counseling regarding nonprescription drugs in U.S. pharmacies (Knapp et al., 1969, and Wertheimer, Shefter, and Cooper, 1973). 
The conclusions of both studies were generally negative. Insufficient inquiries of patients were made, counseling was infrequent, and inappropriate drugs were sold. Jang, Knapp, and Knapp (1975), while finding some positive aspects of pharmacists’ counseling, also had criticisms, including poor performance on drug monitoring and controlling OTC drug use. The Wertheimer, Shefter, and Cooper (1973) study was replicated by Vanderveen and colleagues (Vanderveen, Adams, and Sanborn, 1978; Vanderveen and Jirak, 1990). In the 1978 study, the authors concluded that the pharmacy “profession has not made any great strides in the area of OTC product counseling.” The only question asked by more than one fourth of the pharmacists was the age of the child for whom the medicine was being purchased. The 1990 study found some improvement, with a majority of pharmacists asking about both the age of the child and the duration of the illness. However, no other issue was raised by more than half the pharmacists. The general conclusion was that while pharmacists’ counseling had improved, it could still be better. Barnett, Nykamp, and Hopkins (1992) found that the majority of pharmacists questioned customers before making OTC recommendations and gave directions on their use. For one scenario, an average of 2.81 out of 5 pertinent questions were asked; for a second, an average of 1.58 questions out of 5 were asked. Combining results from the two scenarios, they found that 68.2 percent of product recommendations by pharmacists younger than 30 were appropriate while 42.4 percent by pharmacists 30 and older were appropriate. Overall, the authors concluded in 1992 that pharmacists had made strides in OTC counseling since the earlier studies. In a study of pharmacist counseling for prescription drugs in Wisconsin, where there is a requirement that pharmacists provide appropriate consultation for a prescription, Pitting and Hammel (1983) sent trained shoppers to 84 pharmacies. 
(The number of trained shoppers and the selection method for the pharmacies were not given.) They found that 61.5 percent of pharmacists did not consult with the patient when a prefabricated drug was dispensed, although 87.5 percent did consult on compounded products. Thus, even when pharmacists were legally required to counsel patients, not all did so. The results of the studies in the United States are rather similar to those in countries where the sale of at least some nonprescription drugs is restricted to pharmacies. In general, the theory of pharmacy practice diverges from the reality. The advice of pharmacists is often appropriate but not universally given. In addition, it is often incomplete, with little information being given to customers on such items as possible side effects. In other words, what information is given is accurate, but not enough is passed on to consumers. Researchers consistently found a lack of information-gathering on the part of pharmacists; for instance, information is often not gathered on symptoms and other medications. More positively, within a range of pharmacists’ behavior, many pharmacists do a good job. In addition, pharmacists’ performance, while still often deficient, has improved over time. One argument for an intermediate class of drugs is that pharmacists would be in a position to monitor patients for adverse drug reactions to medications in this class. In the case of a transition class, this information could be passed on to FDA and aid in its decision whether to allow the sale of a drug outside pharmacies. However, in Italy and the United Kingdom, adverse drug reaction reports from pharmacists are not accepted. In the other countries, reports from pharmacists are accepted but not required. This is the same as in the United States. Government, pharmacy, and manufacturers’ officials stated that pharmacists rarely submit adverse drug reaction reports. 
Thus, the experiences of the 10 other countries do not allow us to assess the benefits from or costs of requiring pharmacists to report adverse drug reactions. However, there is some limited information from the United States that suggests that community pharmacists can, at least in some situations, successfully monitor patients for adverse drug reactions. Meade (1994a and b) gives examples of pharmacists who have successfully done this. She reported on a pharmacist in Minnesota who, through consultation with a patient, detected that a prescription drug was causing the patient dizziness, chest pain, and swelling and tingling in the hands. When the prescribing physician took the patient off the drug, the symptoms disappeared. Meade also reported on a pharmacist in Tennessee who discovered from a patient’s reaction to a prescribed drug that the patient had diabetes. One potential role for pharmacists is to record prescription and nonprescription drug sales in patient profiles. This information could help link drug use with adverse drug reactions and other complications. Other uses for profiles would be prospective. For instance, a patient profile could alert a pharmacist to medical conditions that might be affected by a prescribed drug’s side effects. The pharmacist could alert the physician to the problem and, if it were appropriate, the physician or pharmacist could select a different drug without these side effects. Similarly, a profile could alert the pharmacist to possible adverse interactions with other drugs that a patient was currently taking. It is not possible to judge the usefulness of such a procedure for nonprescription products. Only in Australia are pharmacists ever required to include nonprescription drug use in patient profiles. These requirements are determined by the individual Australian states and exist only in certain states and for particular pharmacist-class drugs. The drugs for which sales must be recorded vary from state to state. 
There are no requirements in any of the states for recording sales of pharmacy-class drugs or drugs available outside pharmacies. Officials in Victoria told us that there has been some difficulty in getting pharmacists to comply with recording requirements. They attributed this to the requirements’ covering too many drugs; consequently, they have reduced the list of nonprescription drugs whose sale must be recorded to those for which they believe recording is most important. The situation in Victoria is similar to one in the state of Washington in the United States for prescription drugs. Washington has mandatory regulations governing pharmacy practice that include a requirement that pharmacists maintain and use patient profiles. In a trained shopper study, Campbell et al. (1989) found that 67 percent of pharmacists maintained these profiles. While this was an increase from 54 percent in 1974 (when the law was enacted), it was considerably below the 100 percent the law requires. The authors speculated that it was doubtful that maintaining and using patient profiles was significantly more common in Washington than in states without the same requirements. In 1987, the National Association of Retail Druggists surveyed pharmacists through the NARD Newsletter (The NARD Survey, 1988). More than 1,300 pharmacists responded. While 92 percent of the pharmacists reported that they maintain patient profiles, only 15 percent said that they record OTC drug sales in them. The views of many of the government officials in the countries we visited (Australia, Canada, Germany, the Netherlands, Switzerland, and the United Kingdom) were consistent with the results of the studies discussed above. There was agreement that pharmacists have done a rather poor job of passing their knowledge on to consumers. Many officials questioned the frequency of pharmacists’ counseling and thought that not enough counseling was being done. 
Pharmacists were selling drugs and providing little or no advice on their use. Officials gave several possible explanations for this, including time constraints and a lack of counseling skills. Nonetheless, the officials thought that pharmacists had the potential to improve drug use if they passed their knowledge on to patients. There was general agreement that pharmacists are knowledgeable and have a great deal to offer patients on the proper use of medications. This position was held even by those who opposed or questioned the usefulness of restricting the sale of some nonprescription drugs to pharmacies. Pharmacists could ask key questions about other drugs a patient is currently taking and about underlying medical conditions and could monitor compliance and report adverse drug reactions. Professional pharmacist associations in these countries are taking criticisms seriously, and many have initiated various programs to address them. They have instituted continuing education courses to give pharmacists the skills necessary to better perform their counseling role. A number of officials noted that pharmacy education has changed a great deal in the past 10 or so years. There is currently more of an emphasis on clinical pharmacy with its focus on patient service. Pharmacists who received their training before this change are often described as not having the “people skills” to be good counselors. In this section, we briefly describe some recent developments in the practice of pharmacy that are relevant to our assessment of an intermediate class of drugs. Our purpose is not to evaluate these changes but to make the reader aware of them. The idea of pharmaceutical care constitutes a major change in the practice of pharmacy. It moves pharmacists away from their traditional role of dispensing drug products and involves them more in selecting and monitoring drug therapies. 
The idea has been advocated in the United States by academics in university-based pharmacy schools and pharmacy organizations and has spread to other countries (the initiatives mentioned above have often been undertaken under the name of pharmaceutical care). Hepler defined pharmaceutical care as “the responsible provision of drug therapy for the purpose of achieving definite outcomes that increase a patient’s quality of life” (1991, pp. 141-42). It involves “designing, implementing, and monitoring a therapeutic plan, in cooperation with the patient and other health professionals, that will produce specific therapeutic outcomes” (Klein-Schwartz and Hoopes, 1993, p. 11). The proponents of pharmaceutical care point to various studies (most of them in institutional settings where complete patient information exists) that show the benefits that pharmacists can have on health care. For instance, one hospital study showed shorter length of stay, smaller total cost per admission, and smaller pharmacy cost per admission for patients who received either of two programs involving pharmaceutical care (Clapham et al., 1988). In another study, elderly apartment residents were instructed in drug use and given access to drug counseling by pharmacists (Hammarlund, Ostrom, and Kethley, 1985). After 1 year, the residents who initially had the greatest number of medication problems (and were available for follow-up interviews) were found to have had an 11-percent decrease in the number of prescriptions taken and a 39-percent decrease in the number of medication problems. There is some evidence of the value of pharmaceutical care in community pharmacies. McKenney et al. (1973) examined the effect of a clinical pharmacist’s counseling hypertensive patients in three community pharmacies. Throughout the study, the pharmacist maintained close contact with the patients’ physicians. 
The patients who received the counseling were more likely than those who did not to show increased knowledge of hypertension and its treatment, to comply with their prescribed therapy, and to maintain their blood pressure within the normal range. In a later study, pharmacists in six community pharmacies in Virginia were trained to provide similar services (McKenney et al., 1978). Results showed improved compliance and better blood pressure control in patients receiving counseling than in those not receiving it. Pharmacists also detected 38 instances of adverse drug reactions. Rupp (1992) estimated the value of community pharmacists’ intervening to correct prescribing errors. Of 33,011 prescriptions examined, 623 (1.9 percent) were found to be problematic. The estimated value of these interventions was $76,615. Nichols et al. (1992) examined the effect of counseling on nonprescription drug purchasing decisions. They found that 25.4 percent of patients purchased a different product than they had intended after receiving counseling, 13.4 percent did not purchase a drug, and 1.3 percent were referred to their physician. However, the study did not measure the importance of these decisions (for instance, how much of an improvement resulted from changing medications). More research is being conducted on the effect of pharmaceutical care in community pharmacies. Studies are focusing on the effect of drug use reviews by pharmacists, the use of protocols by pharmacists in managing and monitoring diseases, and a pharmaceutical care program for pediatric and adolescent patients with asthma. In addition, there appears to be at least some movement among community pharmacists to implement pharmaceutical care. Training courses are offered on how to implement pharmaceutical care (Martin, 1994), and articles have been written on pharmacies where it has been established (Meade, 1994a and b). 
For our purposes, it is important to note that the methods and goals of pharmaceutical care are consistent with those of an intermediate drug class. The general idea of both is that pharmacists would be more involved in a patient’s drug therapy through such actions as consulting with patients. The evaluation of pharmaceutical care in community pharmacies would give some indication of the potential value of a greater role for pharmacists and, consequently, would provide some information on the value of an intermediate class of drugs. However, even if a positive value were established, or at least indicated, a number of the difficulties we have identified in this report would still have to be addressed. For nonprescription drugs, pharmacists would need to counsel patients, monitor and report adverse drug reactions, refer patients to physicians when necessary, and perform many other activities. This has not been the norm. Other issues would also need to be addressed. For instance, pharmaceutical care can take a great deal of time, so pharmacists would probably have to delegate more responsibility to technicians, and the appropriate role for technicians would have to be determined. Pharmacists’ compensation for pharmaceutical care activities may be especially important. Many pharmacies now charge a fee for pharmaceutical care services. (Some pharmacies have different fees depending on the level of services offered.) However, some insurance companies have been reluctant to pay for the services (Martin, 1994). It should be clear that pharmaceutical care regarding nonprescription drugs can be given without an intermediate class of drugs. If and when pharmaceutical care becomes established in community pharmacies, the need for an intermediate class will still have to be demonstrated; it will remain unclear what benefits would accrue from creating such a class of drugs. 
Arguments such as those we hear now will still be heard (for instance, that more drugs would be switched and health care costs would be reduced). The difference would be that, at least in some areas, pharmacists would be doing what is necessary for an intermediate drug class to be successful. How much, if anything, would be gained by establishing an intermediate class of drugs, even under these circumstances, is unclear. The Omnibus Budget Reconciliation Act of 1990 contains new requirements for the practice of pharmacy that went into effect on January 1, 1993, and that mandate prospective drug use reviews, counseling of patients, and maintenance of patient profiles for Medicaid recipients. Although these requirements cover only Medicaid beneficiaries, most (44) state boards of pharmacy have extended them to cover other patients receiving prescriptions. The goal, of course, is to improve health care by helping patients understand and follow medication directions better. Success is being evaluated by several studies funded by the Health Care Financing Administration. The applicable regulations require prospective drug use reviews before each Medicaid prescription is filled. Prescriptions are to be screened for potential problems from therapeutic duplication, drug-disease interactions, drug-drug interactions, incorrect dosage or duration of treatment, drug-allergy interactions, and clinical abuse or misuse. The pharmacist is to intervene, if necessary, before the prescription is dispensed. Additionally, in drug use reviews pharmacists must offer to counsel patients about their prescription medications. Exact counseling requirements are defined by each state. Information that might be passed on includes the name and description of the medication, the dosage, special directions and precautions, common severe side or adverse effects, interactions, therapeutic contraindications, and proper storage. 
Pharmacists must also make a “reasonable effort” to obtain, record, and maintain at least the following information: the patient’s name, address, telephone number, date of birth or age, and gender; the patient’s individual history, where significant, including disease states, known allergies and drug reactions, and a comprehensive list of medications and relevant devices; and the pharmacist’s comments relevant to the patient’s drug therapy. The reaction of practicing pharmacists to the new requirements has been mixed. Some see the requirements as an opportunity while others are wary. While the law requires pharmacists to perform additional duties, it does not stipulate that they should be compensated for them, despite some pharmacies’ having had to hire new employees and buy new computer software. Pharmacists are also concerned that lawsuits against them will increase. A 1994 survey conducted for the National Association of Boards of Pharmacy found that only 38 percent of all customers stated that someone in the pharmacy offered to have a pharmacist discuss their prescription medications with them. The president of the association stated that the results “clearly indicate that too few patients and caregivers are being counseled on their prescription medications.” However, the same study found that pharmacist counseling is perceived positively by the public. Seventy-one percent of offers to counsel were accepted, and the same percentage of patients thought that counseling was very important. The counseling that was done also appears to have been of a high quality, with 99 percent of respondents believing that the pharmacist had clearly presented the information and with pharmacists telling patients how and how often to use their medications at least 93 percent of the time. A large majority of patients were also told the dosage amount, the name (along with a description) of the medication, how long it should be taken, special directions or precautions, and any side effects.
However, less than half of the pharmacists told patients how to monitor the effects of their medications and what they should do in the event of a missed dose.

Pharmacists’ liability is becoming a concern throughout the United States. Data from the Chicago Insurance Company show that claims against pharmacists rose 22 percent from 1987 to 1990. Recent court rulings have expanded a pharmacist’s liability under some circumstances. Pharmacists in some states may now be held liable if they fail to instruct a patient about the maximum safe dosage or fail to identify a potential adverse drug interaction for a prescribed drug. (Chapter 5 discusses pharmacists’ liability in prescribing drugs in Florida.) A 1994 ruling by an Arizona appellate court also indicates that pharmacists’ liability might be increasing. According to one source, a majority of court decisions involving pharmacy liability between 1986 and 1994 had concluded that pharmacists generally did not have a responsibility to warn patients of potential adverse effects of their drug regimen. However, in Lasley v. Shrake (880 P.2d 1129 (1994)), the appellate court ruled that pharmacists have a general duty of “reasonable care” that could include a duty to warn. The case was sent back to the trial court to determine what constitutes reasonable care. In addition, some pharmacists have speculated that requirements of the Omnibus Budget Reconciliation Act of 1990 will also increase pharmacists’ potential liability, as could pharmaceutical care.

While the United States has essentially only two classes of drugs (prescription and general sale, the latter commonly referred to as OTCs), there are situations in which a pharmacist may supply a prescription drug to a patient without a physician’s prescription and instances in which nonprescription drugs are not available for general sale.
These include dispensing a small number of controlled substances (for instance, particular amounts of codeine) regulated under the Controlled Substances Act (Public Law 91-513, title II) and insulin. Similarly, in Florida pharmacists have been given the independent authority to dispense a limited number of prescription drugs without a doctor’s prescription. Federal law requires that prescriptions be dispensed by “practitioners” but allows individual states to determine who is a “practitioner.” In Florida, this group includes pharmacists. Finally, in some states pharmacists have been given dependent prescribing authority—that is, they may prescribe under the supervision of a physician. In this chapter, we describe these situations. The lessons that can be learned from them are relevant for both a fixed and a transition class since, as with an intermediate class, pharmacists are expected to do more than simply dispense medications.

The Controlled Substances Act of 1970 regulates the manufacturing, distribution, and dispensing of controlled substances (that is, psychoactive drugs). The act’s purpose, among other things, is to prevent drug abuse and dependence and to strengthen law enforcement authority in the field of drug abuse. These drugs are placed into one of five categories (referred to as schedules) based on three criteria: currently accepted medical use, abuse potential, and human safety. Schedule V drugs have the fewest restrictions and may be made available by FDA without a prescription. They are defined as drugs having a low abuse potential relative to drugs or other substances in schedule IV, having a currently accepted medical use in treatment in the United States, and leading to limited physical or psychological dependence when abused relative to drugs or other substances in schedule IV.
Schedule V drugs are classified as prescription or nonprescription products as determined under the Durham-Humphrey Amendment to the Federal Food, Drug, and Cosmetic Act of 1938. Some schedule V drugs classified as nonprescription under this act are available without a prescription in some states but not all. However, even when a prescription is not required, schedule V drugs are still available only from a pharmacist. Schedule V products are few. They are the narcotic buprenorphine, the stimulant pyrovalerone, and products containing specific amounts of the narcotics codeine, dihydrocodeine, ethylmorphine, diphenoxylate with atropine sulfate, opium, or difenoxin with atropine sulfate. Larger doses of these products (when available) are in a more restricted schedule. Sellers of schedule V products must follow federal and state requirements. For instance, in Connecticut the seller must keep a record containing “the full name and address of the person purchasing the medicinal preparation, in the handwriting of the purchaser, the name and quantity of the preparation sold and the time and date of sale.” Federal regulations state more generally that the purchaser must be 18 years old or older and furnish suitable identification and that all transactions must be recorded by the dispensing pharmacist. While one purpose of the Controlled Substances Act is to improve public health, the requirements for selling a product differ from what is typically discussed for an intermediate class of drugs. Under the act, the focus is on recordkeeping; in an intermediate class of drugs, activities such as counseling and monitoring patients would be stressed. Nonetheless, the two are somewhat similar in that the pharmacist is involved in the sale and that reducing drug abuse is a goal. Any information on how successful the establishment of schedule V has been in reducing drug abuse would be helpful in evaluating the potential value of an intermediate class of drugs. 
However, we were unable to locate any studies evaluating the usefulness of schedule V in preventing abuse or monitoring the use of the products. Therefore, while it would be useful to know how successful schedule V has been, we have no data with which to make that determination.

Insulin is also available without a prescription but restricted to dispensing by pharmacists in most states. However, a physician must first determine the patient’s insulin needs and provide instructions for controlling diabetes. As with schedule V products, we located no studies that evaluated the effect of this restriction.

The Florida Pharmacist Self-Care Consultant Law (sometimes referred to as the Florida Pharmacist Prescribing Law), which went into effect on October 1, 1985, is unique in the United States. It allows pharmacists to independently prescribe specific categories of medications that under federal law may be dispensed only upon the prescription of a licensed practitioner, a group that in Florida includes pharmacists. Perhaps the most important point about the law is that pharmacists are able to prescribe these medicines independently—that is, they are not operating under the supervision of a physician. Despite this independence, the law does limit what pharmacists can do. Pharmacists are not allowed to order injectable products, treat a pregnant patient or nursing mother, order more than a 34-day supply of the drug, prescribe refills unless specifically authorized to do so in the formulary, or order and dispense anyplace but in a pharmacy. Pharmacists recommending a drug must advise patients to see a physician if their condition does not improve at the end of the drug regimen. When the law went into effect, there were 35 drugs in the formulary. Since then, 7 drugs have been added, bringing the total to 42. Responsibility for the original list, as well as for adding and deleting drugs, rests with a seven-member committee.
The law states that any drug sold as an OTC product under federal law may not be included in the formulary. Among the categories of drugs in the formulary are oral analgesics, antinausea preparations, and antihistamines and decongestants. Under the law, pharmacists are not required to perform the prescribing role. However, if they choose to do so, a number of requirements pertain, including labeling products, creating prescriptions, and maintaining patient profiles. (More detail on the products in the formulary and the requirements for pharmacists is in appendix V.)

In 1990, a group of researchers from the College of Pharmacy at the University of Florida reported on the effect of Florida’s Pharmacist Self-Care Consultant Law during its second and third years of operation (Eng et al., 1990). Four methods were used in the study: a survey of pharmacists, pharmacy audits, shopper visits, and a survey of consumers. The following four subsections summarize the results that are most relevant to our report. In a mail survey of pharmacists, Eng and colleagues found that pharmacists infrequently prescribed drugs from the formulary. Thirty-three percent of community pharmacists had prescribed a drug at least once. Of this group, 60 percent had prescribed less than one drug per month. The principal reasons given for not prescribing were that drugs in the formulary offered no advantages over nonprescription drugs, prescribing increased the risk of liability, and time was too short. Conversely, the main reasons for prescribing were that it helped the patient maximize self-care, used the pharmacist’s knowledge, and saved the patient money. No differences were found between the prescribers and nonprescribers with respect to gender, professional degree, position (for instance, prescription department manager and pharmacy owner), and prescription volume.
The study authors did find that pharmacists with fewer years of practice were more likely to prescribe than those with more years of practice, and independent community pharmacists were more likely to prescribe than chain pharmacists. The law requires that if a pharmacist prescribes a drug, the pharmacy must maintain a profile of the patient. Of 19 pharmacies that reported that their pharmacists prescribed drugs, only 9 maintained the required profiles. The audits showed that pharmacists’ prescriptions made up a small proportion of the total number of prescriptions: less than 0.25 percent of all the medications that were prescribed in the 9 pharmacies. These prescriptions were primarily limited to topical pediculicides (lindane shampoo), oral analgesics, and otic (ear) analgesics. These categories constituted 82 percent of all pharmacists’ prescriptions. Trained shoppers found that the quality of the pharmacists’ performance in 21 community pharmacies was high in two areas: (1) following the law’s labeling and quantity limitation requirements and (2) practicing the art of communication. In more than 70 percent of the cases, the shoppers found that the pharmacist was friendly, provided some privacy, and appeared to be interested. However, the pharmacists spent very little time in assessing and responding to medical complaints presented by patients. Less than 17 percent of the 62 pharmacists asked about chronic medical conditions, medication allergies, and current prescription and nonprescription drugs that the patients were taking. Only 5 percent of the pharmacists asked about the onset, duration, and frequency of the medical problem, while 13 percent asked whether the patient had tried other medications. In less than 40 percent of the visits, pharmacists provided information on topics such as the number of doses to be taken per day, the duration of the treatment, and side effects.
The authors noted that when counseling was provided, the information was generally accurate but incomplete. The performance of the 21 pharmacists in three scenarios was mixed. In a scenario leading to the recommendation of an OTC product, all 21 pharmacists recommended the correct product. However, for a scenario that should have led to referral to a physician, only 1 pharmacist made the referral. In a scenario leading to the pharmacist’s prescribing a product, the patient asked for a specific shampoo that was in the formulary; only 5 pharmacists prescribed it. The four reasons given for not prescribing were that liability insurance did not cover the pharmacist’s prescribing, it is against company policy to prescribe, a prescription is needed, and the particular pharmacist does not prescribe. Consumers in the pharmacies were surveyed to determine their attitudes toward receiving advice from pharmacists. Three principal reasons were given for seeking advice from pharmacists: confidence in their abilities, convenience, and the problem’s not being serious enough to consult a physician. All 149 of the patients who answered the question on how pleased they were with the pharmacist’s actions indicated that they were satisfied. Ninety percent of consumers said that they would follow the pharmacists’ advice regarding seeing their physician or taking a recommended OTC product or pharmacist-prescribed drug. A small majority (52.3 percent) also indicated a willingness to pay a fee for a pharmacist’s services if a drug were prescribed by the pharmacist but not if the pharmacist only provided advice, recommended a nonprescription product, or referred the patient to a doctor. Of those willing to pay a separate fee, one-third were willing to pay more than $5.00. Officials we met with in Florida invariably thought that the effect of the law had been minimal because few pharmacists were using their prescribing authority.
One official who had previously done pharmacy inspections in Florida estimated that 1 in 50 pharmacists actually prescribed drugs. The officials’ reasons for the lack of prescribing mirrored those given by the pharmacists themselves. The first involved the drugs in the formulary. There is a belief that the drug categories available to the pharmacists and the specific drugs in them are not very useful because some OTC products work just as well. Therefore, there is no incentive for a pharmacist to use one of the drugs in the formulary to treat patients. The second explanation involved the liability issue. Individual pharmacists were concerned that they would increase their liability risk if they prescribed. Insurance companies did not want to insure individuals who prescribed drugs. The policies of some pharmacists who prescribed were canceled while others had riders attached. At one point, there was an insurance surcharge if a pharmacist wanted to prescribe. The third common reason given for pharmacists’ not prescribing was the presence of time constraints. As shown in appendix V, a number of recordkeeping requirements are associated with prescribing a drug. They take time (one official estimated 10 minutes per prescription). One official tied the recordkeeping requirements to the liability issue, noting that the paperwork involved with prescribing brings pharmacists into the spotlight and makes them more fearful of liability.

In chapter 4, we discussed the practice of pharmacy in the study countries, including reports on pharmacists’ counseling on nonprescription drugs. The experiences in Florida are generally similar to those in the other countries. For example, Florida is similar to Australia—the only study country where pharmacists are required to maintain patient profiles on nonprescription drug use—in that pharmacists often did not maintain the required profiles. Recordkeeping requirements were seen in both places as being excessive.
In Florida, this was attributed to the requirements taking too much time, while in Australia the requirements were viewed as covering too many drugs. Similarly, in counseling their patients, pharmacists in other countries and Florida did not gather sufficient information from them on such items as medical conditions and other medications being taken. In many cases, counseling was more incomplete than inappropriate. Consumers’ views toward pharmacist counseling were also quite similar. Customers in Florida were generally positive toward pharmacists’ counseling, but they were less willing to pay for advice from pharmacists if only a nonprescription drug was involved. A study in Canada also found that most customers did not want advice on nonprescription drugs.

While pharmacists in Florida have been given independent (although limited) prescribing authority, some pharmacists elsewhere in the United States have been given dependent prescribing authority. Typically, the pharmacists are constrained by protocols established by supervisory physicians. Dependent prescribing has not normally been discussed in terms of an intermediate class of drugs, but it does indicate roles that pharmacists have played in addition to the traditional one of dispensing medications. Because these activities are outside the scope of this report, we do not evaluate them here. Instead, we only describe alternative roles that pharmacists sometimes have in the United States.

The Indian Health Service (IHS), part of the U.S. Public Health Service in the Department of Health and Human Services, provides health services to American Indians and Alaskan Natives, including hospital and ambulatory medical care. IHS pharmacists are authorized to provide certain prescription drugs directly to patients without a physician’s authorization. At the outset of the program, the pharmacists could modify doses, dosage forms, and quantities of medicines and make therapeutic substitution of medicines.
Later, their responsibilities were expanded to include treating minor acute illness and monitoring patients receiving chronic drug therapy between physician visits. The activities of pharmacists are defined by approved protocols that indicate their functions, responsibilities, and prescribing privileges. The protocols are organized by disease and include such elements as the criteria for inclusion in pharmacy-based care, specific definitions of the role of the referring physician or nurse and the pharmacist, criteria for periodic visits by physicians to review a patient’s status and the quality of care the pharmacist delivers, and drug therapy.

In March 1995, the Department of Veterans Affairs (VA) issued a directive establishing medication prescribing authority for, among others, clinical pharmacy specialists. The directive defines inpatient and outpatient prescribing authority for clinical pharmacy specialists and other professionals, lists the requirements for pharmacists to be given prescription authority, and notes that each professional given prescription authority will be limited by “a locally-determined scope of practice” that indicates his or her authority. Prescriptions written within the scope of practice do not require a physician’s signature, but those outside the scope of practice do.

Nine states have established dependent prescribing privileges for pharmacists. In California, Nevada, and North Dakota, pharmacists are allowed to prescribe only in institutional settings; there are no such restrictions in the six other states. Only in New Mexico is special training required for pharmacists to prescribe. In these nine states, prescribing is done by a protocol that involves a voluntary agreement between the pharmacist and the physician. The pharmacist is responsible for initiating, monitoring, and modifying drug therapy while the physician supervises the process and overall patient care.
For example, in Washington, all practicing pharmacists are eligible to initiate and modify drug therapy by protocol, but a written agreement must be developed between the pharmacist and an authorized prescriber. The agreement must be sent to the Washington State Board of Pharmacy for review. It must include, among other items, the type of prescribing authority to be exercised (including types of medical conditions and drugs or drug categories), documentation of prescriptive activities to be performed, and a mechanism for communicating with the authorizing practitioner. North Dakota recently gave pharmacists the right to prescribe but only in institutional settings (a hospital, skilled nursing facility, or swing bed facility) in which a patient’s medical records are readily available to the physician. Following diagnosis and initial patient assessment by a licensed physician, pharmacists in these settings (under the supervision of the same licensed physician) can initiate or modify drug therapy.

The purpose of this report was to examine the structure and operation of drug distribution systems in other countries in order to better understand the potential advantages and disadvantages of establishing an intermediate class of drugs in the United States. The assumption is that while the experiences of other countries might not be models for the United States, they might provide a useful starting point for discussion. This chapter summarizes our findings and presents our conclusions. The two-tier U.S. drug distribution system with its prescription and general sale classes is unique among the 10 countries we studied. These countries restrict the sale of at least some nonprescription drugs to pharmacies or personal sale by a pharmacist. However, their drug distribution systems differ, and no efforts have been made to study systematically the consequences of the different systems.
We found no systematic evidence to support the superiority of one drug distribution system over another. It is unclear how some of the benefits of a transition class would be realized in the United States. The experiences of other countries cannot be used to assess its usefulness because their intermediate classes are not used in this manner. Instead, they are generally viewed as fixed classes into which drugs are placed permanently. The intermediate classes are used solely to restrict access to drugs, not to facilitate their movement to general sale. It is unclear whether a transition class could be effective in monitoring adverse drug reactions while a drug is being considered for general sale. Several officials, questioning the usefulness of the data that would be collected, argued that toxicity profiles are well established through clinical research and experience with drugs as prescription products. Additionally, the data that would be collected when a drug was in the transition class would not be from well-controlled studies. The conclusions that could be drawn from the data would not be as well supported as conclusions from other types of studies. If an intermediate class were used to increase knowledge to better assess drugs for switching, pharmacists would have to keep records on patients’ drug purchases. This would allow the purchase of a drug to be linked with adverse outcomes. Pharmacists would have to record symptoms, other medical conditions, the practitioners who recommended the product, and the amount purchased. They would also have to follow up, recording experiences with a product such as efficacy, side effects, and interactions with food, drugs, and medical conditions. These recordkeeping requirements would take time and add costs; much less demanding recordkeeping requirements deter pharmacists in Florida from prescribing such drugs. 
Similarly, in the Australian state of Victoria, we found that pharmacists often did not maintain records of patients’ use of pharmacist-class drugs, despite being required to do so. Officials in the United States and abroad thought that an intermediate class, whether fixed or transition, would do little to prevent drug abuse. While having to buy drugs in pharmacies rather than in other outlets would be a deterrent (for instance, a consumer would have to talk to the pharmacist or would be able to buy only a small amount of the drug), this safeguard would be relatively easy to circumvent. Consumers could visit the same pharmacy on numerous occasions or go to several pharmacies to purchase the drug. Experiences in Australia and Germany in which pharmacist-controlled nonprescription drugs were either used or purchased improperly are consistent with these conclusions.

All 10 countries control the prices of prescription drugs but not necessarily nonprescription products. Consequently, we could not draw useful lessons for the United States (where neither prescription nor nonprescription prices are controlled) on how prices change when a drug is switched. We did find some evidence from the United States and the United Kingdom that the price of a drug decreases when it is switched from prescription to nonprescription status. However, the effect on price of the presence or absence of an intermediate drug class has not been assessed. We also found that moving a drug to nonprescription status did not necessarily reduce health care costs. An incentive is created to obtain a drug with a prescription when such drugs remain reimbursable if they are prescribed but not if bought without a prescription. This can occur if patients have lower out-of-pocket costs (for instance, because of a small copayment) for a prescription drug than for a nonprescription product, even if the nonprescription product is less expensive.
The European Union has decided not to require that the member countries establish a particular drug distribution system, not being convinced of the superiority of any particular system. Each member country will be allowed to establish whatever drug distribution system it wants, provided the requirements for domestic producers and importers are the same. The European Union has also established criteria for distinguishing prescription from nonprescription drugs in the hope that drugs in these categories will become consistent from country to country.

There are approximately 54,000 community pharmacies in the United States. This is substantially fewer per capita than in 6 of the countries studied (if drugstores are included), while 2 other countries and the Canadian province of Ontario have approximately the same number per capita as the United States. Only Denmark and Sweden have many fewer community pharmacies per capita than the United States. This suggests that limiting the sale of some nonprescription drugs to pharmacies in the United States would create somewhat greater access problems than in those 6 countries. However, this is complicated by the number of other outlets, such as mail-order and managed care pharmacies, that might choose to sell these drugs. If such outlets chose to sell these products, the reduced access from limiting sales to pharmacies could be offset. While the number of pharmacies gives some indication of access, the distance to a pharmacy is also very important. The distance that people live from pharmacies varies greatly in the United States; the nearest pharmacy can be 100 or more miles away. Restricting the sale of some nonprescription drugs to pharmacies would give individuals who have ready access to a pharmacy a greater number of nonprescription drugs from which to choose.
However, if the drugs were to come from the prescription class, relative access between customers with and without ready access to a pharmacy would remain the same. The drugs would still be available for sale only in pharmacies; the difference would be that a prescription would not be required.

Of the countries studied, only the United States allows self-selection of all nonprescription drugs. Denmark, France, and Italy do not allow self-service for any drugs, while the remaining countries allow it for some but not all nonprescription products. If the United States were to establish an intermediate class of drugs (whether fixed or transition), it might not allow the self-selection of these products, since the theoretical benefits associated with the class would be difficult to achieve without some control on their distribution in pharmacies. This could change the way nonprescription drug purchases are made, since consumers, unable to select intermediate-class products from the shelf themselves, would find it more difficult to compare products.

Our examination of the classification of 14 selected drugs in the study countries indicated no clear pattern of increased nonprescription drug availability because of the existence of a pharmacist or pharmacy class. It appears that other factors in addition to or instead of the existence of a pharmacist or pharmacy class account for differences in drug classification between the countries. Despite the absence of an intermediate class, the United States allows the sale without a prescription of some of the 14 drugs that many other countries restrict to prescription sale. Conversely, the United States restricts to prescription sale some drugs that other countries allow to be sold without a prescription but only in a pharmacist or pharmacy class. It also appears that access in one country relative to another depends in part on how access is defined.
More of the 14 drugs were available for sale outside pharmacies in the United States than in any of the other countries. However, the United States restricts the sale of more of these drugs to prescription status than do 5 of the countries. In those countries, the drugs, while available for sale without a prescription, are restricted to a pharmacist class. Thus, if the criterion used for defining access is the number of drugs available for general sale, the United States has the most accessible system. However, if the criterion is the number of drugs available without a prescription, the United States is somewhere in the middle in terms of accessibility.

Officials in the countries we visited and the literature on pharmacist counseling generally agree that the theory of pharmacy practice diverges from the reality. The theory of pharmacy practice involves (and the success of a fixed intermediate or transition class requires), for example, the complete and appropriate counseling of patients on such issues as dosing instructions and potential adverse drug reactions, as well as maintaining patient profiles. However, pharmacists have often not performed these roles (especially for nonprescription drugs), either in the United States or abroad, even when doing so is expected and, in some cases, required. Pharmacist counseling, as practiced, is less frequent and less thorough than desired, although it has improved over time.

In efforts in the United States and elsewhere to increase the role of pharmacists, professional associations and academics are advocating the idea of “pharmaceutical care,” with its emphasis on monitoring a patient’s drug therapy rather than on dispensing the drugs. There is evidence that in institutional settings such as hospitals, there are benefits from pharmaceutical care. However, pharmaceutical care is only now being implemented in community pharmacies and its value has yet to be established.
Improved drug use is often cited as a justification for an intermediate drug class, and evidence for it gives support for expanding the role of pharmacists in general. Such an expansion does not necessitate creating an additional drug class. Indeed, the current system would benefit from an improvement in pharmacist counseling. The Florida Pharmacist Self-Care Consultant Law has had very little effect on the practice of pharmacy. Pharmacists rarely prescribe drugs in the formulary. This is attributed to (1) drugs being available without a prescription that are just as effective as the ones in the formulary, (2) the perception of increased liability, and (3) burdensome recordkeeping requirements. Other countries’ and Florida’s experiences do not support a fundamental change in the drug distribution system of the United States such as creating an intermediate class of drugs, whether fixed or transition, at this time. Its benefits are unclear. No evidence at this time shows the overall superiority of a drug distribution system that restricts the sale of at least some nonprescription drugs to pharmacies. However, it should also be clear that there is no evidence that systems that do this are necessarily inferior to drug distribution systems that allow some or all nonprescription drugs to be sold outside pharmacies. The evidence that does exist tends to undermine the contention that major benefits are being obtained in countries with a pharmacist or pharmacy class. Such a class is not being used to facilitate the movement of drugs to sale outside pharmacies. Also, pharmacist counseling as it is currently practiced does not support the goals of either a fixed or a transition class. Pharmacists are not regularly counseling patients, maintaining patient profiles, or monitoring for adverse drug effects. Thus, there is no evidence to show that the role that U.S. 
pharmacists would have to play to support the appropriate use of an intermediate class of drugs (either fixed or transition) would be fulfilled reliably and effectively. The evidence indicates that at this time major improvements in nonprescription drug use are unlikely to result from restricting the sale of some OTCs to pharmacies or by pharmacists, nor are the safeguards for pharmacy- or pharmacist-class drugs that would have otherwise remained in the prescription class likely to be sufficient.
Pursuant to a congressional request, GAO reviewed the creation of a drug class that would be available only through pharmacies, but would not require a physician's prescription, focusing on: (1) studies and reports on the development, operation, and consequences of different drug distribution systems; (2) the drug distribution systems in 10 selected countries; (3) how access to nonprescription drugs varies between the selected countries and the United States; (4) how pharmacists ensure the proper use of nonprescription drugs; and (5) the U.S. experience with pharmacists dispensing drugs without a prescription. GAO found that: (1) available evidence shows that there are no major benefits from establishing a class of pharmacist-controlled nonprescription drugs; (2) studies have not attempted to link different drug distribution systems with differences between the countries' health care costs, adverse drug reactions, and quality of care; (3) the two-tier system in the United States is unique, since all other countries have at least one intermediate class of drugs; (4) although all 10 countries restrict some or all sales of nonprescription drugs, they do not use the pharmacy or pharmacist drug class to assess the drugs' suitability for sale outside of pharmacies; (5) the European Union has decided not to impose any particular drug distribution system on its members, since no system has proved to be superior; (6) there is no clear pattern of increased or decreased access to nonprescription drugs where an intermediate class of drugs exists; (7) the countries' safeguards to prevent drug misuse and abuse are easily circumvented and pharmacist counseling is infrequent and incomplete; (8) pharmacists are rarely required to keep records on drug use and none are required to report adverse reactions; and (9) Florida's unsuccessful experience with a similar class of drugs was due to pharmacists' failure to regularly prescribe these drugs, give patients adequate counseling, or 
follow recordkeeping requirements.
DOD’s combat casualty care researchers focus their efforts on the major causes of injury and death on the battlefield, and on improving medical care in specific battlefield conditions. For example, DOD estimates that approximately 84 percent of potentially survivable battlefield deaths are caused by bleeding. Therefore, DOD focuses a significant amount of its research on ways to control bleeding on the battlefield. Other areas on which DOD researchers focus include extremity trauma, diagnosis and treatment of traumatic brain injury, and ways to improve the care provided to casualties prior to and during evacuation to a hospital. In order to improve medical care in these areas, DOD researchers use various means to apply findings from combat casualty care research to develop drugs or medical devices. For example, DOD researchers convene multidisciplinary teams to decide whether a research project is ready and feasible to support development of a drug or medical device, according to DOD officials. These teams consist of researchers and other DOD personnel who are involved in acquiring and maintaining drugs and medical devices. At multiple meetings, the teams make decisions on whether to allow the project to proceed. In addition, DOD researchers work with the FDA to understand and share general information about regulatory requirements for drugs and medical devices that DOD develops. DOD officials also told us that in some cases DOD researchers also share the results of DOD research with medical corporations, which develop these products. In addition to developing drugs or medical devices, DOD researchers apply findings from combat casualty care research by disseminating information on medical practices. For example, the Army Institute for Surgical Research publishes clinical-practice guidelines that clinical subject-matter experts develop in response to needs identified while providing care to combat casualties. 
These guidelines are based on the best existing clinical evidence and experience, are approved by senior DOD medical officials, and are available to all military medical practitioners. In addition, DOD researchers share new medical knowledge and best-practice information by publishing research results in medical journals and making presentations at conferences. In May 2008, then–Secretary of Defense Robert Gates publicly expressed his commitment to improving medical care and support for wounded servicemembers. In that same month, DOD completed a program assessment of its medical research and development investments, which became the basis for DOD’s June 2008 Guidance for the Development of the Force report. Among other matters, this assessment identified gaps in DOD’s capabilities to protect the health of servicemembers, including health care provided to servicemembers who are wounded on the battlefield. For example, the 2008 report identified a gap in DOD’s capability to diagnose, resuscitate, and stabilize casualties with survivable wounds. DOD used the capability gaps identified in the 2008 report as the justification for funding requests that DOD subsequently made for medical research and development, including for research to address gaps in DOD’s capability to provide combat casualty care. This assessment also concluded that a consolidated medical research and development budget structure with a centralized planning, programming, and budget authority and with centralized management would provide the most efficient and effective process and governance for DOD’s medical research and development investment. To address the gaps in its capability to provide combat casualty care, DOD has increased this research funding overall, as shown in figure 1. In fiscal year 2010, DOD’s funding for combat casualty care research increased to $537 million, and 2 years later it fell to $321 million. 
Health Affairs and the Army, with 82 percent of the funding in fiscal year 2012, were responsible for the majority of this research (see fig. 2). The Navy, the Air Force, and DARPA were responsible for the remainder. Multiple officials and organizations oversee DOD’s combat casualty care research and development. The Assistant Secretary of Defense for Research and Engineering—who reports to the Under Secretary of Defense for Acquisition, Technology and Logistics—is responsible for promoting coordination of all research and engineering within DOD, including health-related research such as combat casualty care research. In addition, the Assistant Secretary of Defense for Health Affairs serves as the principal advisor to the Under Secretary of Defense for Personnel and Readiness on a variety of health issues, including medical research, which includes research to improve combat casualty care. The Assistant Secretary of Defense for Research and Engineering and the Assistant Secretary of Defense for Health Affairs cochair the Armed Services Biomedical Research and Evaluation Management committee. Joint Technology Coordinating Groups support the committee in specific research areas, including combat casualty care. This committee’s charter states that it was established to facilitate coordination and prevent unnecessary duplication of effort within DOD’s biomedical research and development program. Joint Technology Coordinating Groups are responsible for coordinating plans for research in their areas and for submitting recommendations on the distribution of responsibility for program execution and resources. (See fig. 3 for organizations that oversee combat casualty care research and development.) With regard to planning, there are multiple DOD organizations specifically devoted to biomedical research, and these organizations plan research and development designed to improve the medical care provided to injured servicemembers. 
They include the Army MRMC, the Office of Naval Research, the Naval Medical Research Center, the Air Force Office of Scientific Research, the Air Force Medical Support Agency, and DARPA. In March 2011, Health Affairs signed an interagency support agreement with the Army MRMC to take advantage of existing Army MRMC staff and infrastructure. Under the agreement, the Army MRMC manages certain Health Affairs funds for medical research and development. To help manage these funds, the Army MRMC established Joint Program Committees for the major areas of medical research that DOD conducts, including combat casualty care, which is managed by the Joint Program Committee for Combat Casualty Care (JPC-6). The JPC-6 includes representatives from the DOD biomedical research organizations within each military department, including the Marine Corps, as well as from DARPA, NIH, VA, and other DOD organizations that use the results of combat casualty care research—such as DOD’s Special Operations Command. These organizations coordinate to prioritize how to spend the Health Affairs funding for combat casualty care research. Other DOD research organizations also conduct research that is at times related to combat casualty care. Typically these research organizations do not plan or conduct biomedical research, but sometimes they identify ways that applications of their research could improve combat casualty care. These organizations include the Army Research Laboratory and the Naval Postgraduate School. DOD’s biomedical research organizations use a coordinated approach to plan for combat casualty care research and development in a manner that is consistent with key collaboration practices. However, DOD research organizations that do not typically conduct biomedical research do not always share information early in the research process. DOD has also taken steps to coordinate with other federal agencies that are involved in combat casualty care research. 
DOD’s biomedical research organizations coordinate combat casualty care research and development planning in a manner that is consistent with key collaboration practices identified in prior GAO work to enhance and sustain coordination. These key practices include agreeing on roles and responsibilities and establishing a means to operate across organizational boundaries. DOD’s biomedical research organizations responsible for combat casualty care research and development have agreed on their roles and responsibilities, including establishing a key leadership position responsible for combat casualty care research. As we have previously reported, agreement on roles and responsibilities among coordinating organizations is important because it enables each organization to stay informed about the others’ individual and joint efforts, and it facilitates decision making. DOD’s biomedical research organizations have agreed on the roles and responsibilities for the organizations involved in planning, overseeing, and executing this type of research. First, Health Affairs and the Army MRMC—the two organizations that fund most combat casualty care research and development—have outlined their roles in an Interagency Support Agreement, which designates the Army MRMC as the organization responsible for managing the day-to-day use of Health Affairs funding for medical research, including research to improve combat casualty care. Second, the JPC-6 developed a draft charter in 2010 that explains the roles and responsibilities for all of the JPC-6 member organizations, including the non-DOD organizations, such as VA and NIH. The draft charter was finalized in early January 2013, while we were conducting our review. Health Affairs and Army MRMC officials told us that the JPC-6 began using the charter in 2010, but that they delayed finalizing it in part because they wanted to have the opportunity to incorporate lessons learned during the operation of the committee during its first 2 years. 
The charter states that JPC-6 members represent the interests of their member organizations as well as provide subject-matter expertise and advice to the JPC-6 chair on requirements, program management, transition planning, and planning and programming for future investments. In addition to establishing a JPC-6 charter, Health Affairs and Army MRMC have established a key leadership position responsible for combat casualty care research by having one official serve simultaneously in three complementary roles: JPC-6 chair, Director of the Army Combat Casualty Care Research Program, and chair of the Joint Technology Coordinating Group for Combat Casualty Care. As noted in the JPC-6 charter, the group’s chair is responsible for making recommendations to Health Affairs for planning, programming, budgeting, and executing research and development to improve medical care provided to combat casualties, and the chair is to make these recommendations with the advice and support of the JPC-6 members. Because the DOD official serving as JPC-6 chair also serves as Director of the Army Combat Casualty Care Research Program and chair of the Joint Technology Coordinating Group for Combat Casualty Care, this official oversees the majority of this research in DOD. From fiscal years 2008 through 2011, this official oversaw approximately 600 research projects, constituting over 80 percent of DOD’s funding for combat casualty care research. Health Affairs and Army MRMC officials told us they expect that one official will lead all three organizations in the future. DOD’s biomedical research organizations responsible for combat casualty care research and development have established mechanisms to facilitate working across organizational boundaries—a step that, as we have previously reported, helps to enhance and sustain coordination. 
For example, DOD located nearly all of the DOD biomedical research organizations that conduct combat casualty care research at the Joint Center of Excellence for Battlefield Health and Trauma Research at Fort Sam Houston, Texas. The center includes the U.S. Army Institute for Surgical Research and other principal DOD biomedical research organizations that conduct combat casualty care research, such as the combat casualty care research functions from the Naval Medical Research Center and from Walter Reed Army Institute of Research. DOD officials told us that being located in the same place is useful in enabling them to know what other DOD organizations are doing with their related research and development. Another example of a mechanism to facilitate working across organizational boundaries is the Military Health System Research Symposium, an annual conference that provides DOD researchers the opportunity to discuss and address multiple medical research topics, including combat casualty care, with researchers from other federal agencies, academia, and private industry. DOD officials told us that these annual conferences have led to interagency collaboration on research and development for combat casualty care. DOD organizations that typically do not conduct biomedical research are generally not involved in DOD’s efforts to coordinate combat casualty care research. When these nonmedical research organizations conduct research relevant to combat casualty care, they do not always share relevant information with appropriate officials early in the research process. We have previously reported that organizations involved in similar missions should coordinate and share relevant information early to avoid unnecessary duplication of work. 
The JPC-6 chair, who is the lead official responsible for coordinating combat casualty care research, told us that he periodically has identified cases in which researchers began conducting research relevant to combat casualty care, but did not coordinate with him early in the process. He stated that in these cases, the research typically had been underway for a period of 1 to 5 years before he learned about it. He stated that he coordinates with nonmedical research organizations when he becomes aware of research relevant to combat casualty care. However, he stated that he has not always been aware of relevant research, and that there may be similar ongoing research projects about which he is currently unaware. For example, the Army Research Laboratory, which typically conducts research in the physical, engineering, and environmental sciences, started developing a product in 2006 that had the potential to control the bleeding of wounded soldiers—the leading cause of preventable deaths on the battlefield—but did not inform the JPC-6 chair of this research until 2 years later. In addition, multiple DOD officials—including the JPC-6 chair and other officials responsible for health research—stated that other DOD research organizations, such as the Naval Postgraduate School, the Defense Threat Reduction Agency, and the Joint Improvised Explosive Device Defeat Organization, have conducted research related to combat casualty care in the past and have not always coordinated or shared information early in the research process. The JPC-6 chair also stated that some DOD researchers do not share information with him early in the research process because they are not aware of the need to coordinate early and may not fully understand medical research requirements, such as those that are necessary to support FDA processes for approval of new drugs and medical devices. 
He also stated that a lack of awareness and understanding can result in researchers duplicating each other’s work. As discussed above, Army Research Laboratory researchers did not inform the JPC-6 chair of their work for 2 years, and as a result they learned that some of their initial testing did not fully adhere to medical testing protocols associated with wounds and wound severity. Subsequently, the researchers had to redo some steps in their research. An Army Research Laboratory official responsible for the project told us that they could have avoided the inefficiency of duplicating these steps if they had shared information with the JPC-6 chair at an earlier point. The JPC-6 chair stated that, since this occurrence, the Army Research Laboratory and Army MRMC now coordinate with one another regularly to identify Army Research Laboratory projects with potential implications for combat casualty care. DOD coordinates medical research information with other federal agencies, including FDA, NIH, and VA. DOD coordinates with FDA with regard to drugs and medical devices it develops because FDA is responsible for overseeing the safety and effectiveness of these products—including those that are developed through DOD’s combat casualty care research—and DOD must obtain FDA’s regulatory review and approval or clearance to field medical products. FDA officials stated that they regularly meet with the commanding general of the Army MRMC to review DOD’s medical research priorities and to share general information about regulatory requirements. FDA officials also provide product-specific advice to DOD regarding regulatory requirements by meeting with DOD researchers throughout the development process. 
This coordination is consistent with FDA’s efforts, noted in previous GAO reports, to address concerns from industry and advocacy groups, including those related to the timeliness of the review process and the need to improve communication between FDA and stakeholders throughout the development process. DOD officials told us that FDA regulators were very responsive to their regulatory questions and concerns, and they reported that sometimes this communication helped to expedite the development process. Likewise, it is important for DOD, NIH, and VA to coordinate with each other because all of these agencies conduct research that is directly related to combat casualty care research. DOD, NIH, and VA conduct joint program reviews, prepare joint strategic documents, complete joint research projects, and attend joint symposiums and conferences to share their research. Our prior work identified some issues concerning the ability of DOD, NIH, and VA to readily access comprehensive medical research information funded by the other agencies. We found that the three agencies could improve their ability to efficiently identify potential duplication if they improved access to each others’ comprehensive electronic information on funded health research. DOD officials recently stated that DOD and the other two agencies are working together to address these concerns. Specifically, NIH has provided a DOD official with access to an NIH database that contains information about funded health research projects, and it has also provided training and support so that the DOD official can search the database for potential duplicated research. If this effort is successful, DOD plans to identify additional medical research officials who will be granted access to NIH’s health research database. Because VA’s medical research information resides in this database, DOD will also be able to identify VA research that is directly related to DOD’s combat casualty care research. 
Health Affairs and Army MRMC monitor and assess the progress of combat casualty care research and development projects, but they have not assessed the extent to which this research fills gaps in DOD’s capability to provide combat casualty care or achieves other goals for this research, including those related to improving DOD’s ability to control bleeding, which is the primary cause of death on the battlefield. Internal control standards for the federal government state that agencies should monitor and assess their performance over time to help ensure that they meet the agency’s missions, goals, and objectives. Using performance information such as performance metrics can aid agencies with monitoring results, developing approaches to improve results, and helping determine progress in meeting the goals of programs or operations. Health Affairs and Army MRMC monitor and assess the progress of combat casualty care research and development projects. For example, Health Affairs and Army MRMC monitor and assess cost, schedule, and performance metrics for individual research projects to determine whether to continue funding, make necessary corrections to, or terminate these projects. Senior leadership in these organizations reviews projects annually to determine whether they are meeting established cost, schedule, and performance baselines. In addition, these leaders assess technology readiness levels—which are measurements of maturity level—to determine whether findings from a research project are sufficiently mature to move to the next phase of development. Health Affairs and Army MRMC also monitor and assess some aspects of the progress of the overall combat casualty care research portfolio, such as the number of projects completed, ongoing, or canceled, as well as the number of products available to users in the field. These organizations have applied findings from combat casualty care research to field five such products between fiscal years 2008 and 2011. 
For example, Health Affairs and Army MRMC officials told us that DOD fielded a combat gauze product that was the result of combat casualty care research. This gauze includes a mineral to help form blood clots and is designed to stop severe bleeding in less than 4 minutes. Following the annual combat casualty care research portfolio review in September 2012, Health Affairs and Army MRMC reported that they plan to identify new performance metrics, such as data related to peer-reviewed publications and FDA-approved drugs and medical devices, that will provide additional information on the overall portfolio’s progress. However, Health Affairs and Army MRMC have not assessed the extent to which the results of combat casualty care research fill gaps in DOD’s capability to provide care to combat casualties. As we discussed earlier, DOD identified a number of gaps in its capability to provide combat casualty care in the 2008 Guidance for the Development of the Force analysis and report. Health Affairs and Army MRMC officials told us that since 2008 they have completed about 44 combat casualty care research projects that are each designed to address one or more of these capability gaps. Health Affairs and Army MRMC officials told us that in 2010 they attempted to measure the extent to which the 2008 capability gaps had been filled on the basis of the research results. However, they abandoned that effort because, according to officials, in 2010 researchers had not completed a sufficient amount of research designed to fill the 2008 capability gaps. In addition, these officials indicated that the capability gaps were not specific, were not organized to correspond with DOD’s research areas, and did not reflect the state of medical knowledge at the time. Health Affairs officials told us that they are currently revising these capability gaps and they expect to complete the revision in 2013. 
Following the Health Affairs revision, the Joint Staff—a group of senior military leaders in DOD—will then validate the capability gaps. Health Affairs and Army MRMC officials told us that they plan to assess whether the results of future research fill the revised capability gaps once the Joint Staff validates them. In addition, Health Affairs and Army MRMC have not developed an assessment of the extent to which the results of combat casualty care research have achieved other goals for this research. Both Health Affairs and Army MRMC have established goals for the combat casualty care research portfolio including several related to improving DOD’s ability to control bleeding, which is the primary cause of death on the battlefield. For example, Health Affairs set a goal for DOD to improve its ability to control bleeding in areas of the human body where it is not feasible to apply a tourniquet, such as on internal organs or the groin. Health Affairs and Army MRMC officials told us that they periodically review and discuss progress toward these research goals for certain research topics. However, these officials have not developed an assessment that comprehensively identifies each of the goals for the portfolio and includes information about the extent to which each goal has been met. They acknowledged that more work is needed to do this. Following a review and analysis of the combat casualty care research portfolio in September 2012, Health Affairs and Army MRMC officials reported to us that they intended to complete an overarching strategic roadmap for the portfolio by March 2013. They told us that they expect the roadmap could include specific project timelines and goals, among other things. 
However, on the basis of the information provided by DOD officials, we were unable to determine if the plan will clearly delineate how Health Affairs and Army MRMC will assess the extent to which results from combat casualty care projects fill capability gaps and achieve other goals. Until Health Affairs and Army MRMC assess the results of DOD’s research against revised capability gaps and other goals, DOD will not have reasonable assurance that the research it is conducting meets its needs. Coordination among the various organizations that plan and conduct combat casualty care research and development is important to effectively produce medical solutions to save or improve the lives of injured servicemembers. DOD has taken important steps to agree on roles and responsibilities and to establish the means for coordination and collaboration across organizational boundaries. However, DOD’s research organizations can only coordinate with each other when they become aware of relevant research. Without communicating to nonmedical research organizations about the importance of coordinating with the JPC-6 chair early in the research process, DOD research organizations may have to redo some steps of their research to address medical research requirements that they may not fully understand. Moreover, while DOD assesses the progress of combat casualty care research projects, it is also important that DOD monitor and assess the extent to which the results of its combat casualty care research fill the gaps in DOD’s capability to provide combat casualty care and achieve other goals that it established for the research. However, without a plan for monitoring and assessment, DOD runs the risk that it may not be producing results that most effectively improve combat casualty care to save lives on the battlefield. 
1. To ensure that nonmedical DOD research organizations coordinate with the Assistant Secretary of Defense for Health Affairs early in the research process to understand medical research requirements and avoid inefficiencies that may lead to duplicative work, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to communicate to DOD’s nonmedical research organizations the importance of coordination with the JPC-6 chair on combat casualty care issues, and require this coordination early in the research process when these organizations conduct research with implications for combat casualty care. 2. To improve DOD’s ability to assess the overall performance of its combat casualty care research portfolio, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to direct the Assistant Secretary of Defense for Health Affairs to develop and implement a plan to assess the extent to which combat casualty care research and development fills gaps in DOD’s capability to provide combat casualty care and achieves DOD’s other goals for this portfolio of research. We provided a draft of this report to DOD, VA, and the Department of Health and Human Services (HHS), which includes FDA and NIH. In response, we received written comments from DOD and HHS, which are reprinted in appendixes I and II, respectively. VA did not comment on this report. DOD and HHS also provided technical comments that we have incorporated as appropriate. In its written comments, DOD concurred with the recommendations we made to the department and also described steps it had taken or planned to take in response to our recommendations. Specifically, DOD concurred with our first recommendation to communicate to nonmedical research organizations the importance of coordination with the JPC-6 chair and require this coordination early in the research process. 
DOD also concurred with our second recommendation to develop and implement a plan to assess the extent to which combat casualty care research addresses DOD’s capability gaps and achieves its other goals. In its comments on our second recommendation, DOD stated that it planned to revise its process to better assess the extent to which each combat casualty care research project closes capability gaps. Moreover, when we sent our draft report to DOD for comment in December 2012, Health Affairs and Army MRMC had not yet finalized the JPC-6 charter. Therefore, we included a recommendation in our draft report that DOD issue the final charter. In early January 2013, after we sent the draft report to DOD, the commanding general of Army MRMC signed and issued the final JPC-6 charter. As a result, we did not include the recommendation to finalize the charter in our final report. In its written comments, HHS responded to a statement in the draft report that DOD, NIH, and VA could improve their ability to efficiently identify potentially duplicative research with improved access to each agency’s electronic health research information, as noted in a 2012 GAO report. HHS stated that DOD has access, to varying degrees, to NIH and VA medical research information. Consistent with our 2012 report, HHS stated that NIH and VA need access to DOD medical research information to reduce the risk of potentially duplicative research. HHS also stated that the agencies continue to evaluate the best approach to providing NIH and VA with access to DOD’s medical research information. 
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Deputy Under Secretary of Defense for Personnel and Readiness; the Deputy Under Secretary of Defense for Acquisition, Technology and Logistics; the Assistant Secretary of Defense for Health Affairs; the Secretaries of the Army, Navy, and Air Force and the Commandant of the Marine Corps; the Secretary of Health and Human Services; the Secretary of Veterans Affairs; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Linda Kohn at (202) 512-7114 or [email protected] or Brenda Farrell at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. In addition to the contacts named above, Will Simerl, Assistant Director; Steve Boyles; La Sherri Bush; James P. Klein; Monica Perez-Nelson; Michael Pose; Mike Silver; Sarah Veale; and Cheryl Weissman made key contributions to this report.
DOD estimates that about 24 percent of servicemembers who die in combat could have survived if improved and more timely medical care had been available. Because multiple DOD organizations conduct research to develop medical products and processes to improve combat casualty care, it is critical that these organizations coordinate their work. It is also important that agencies monitor and assess their performance to help achieve organizational goals, which for DOD include addressing gaps in its capability to provide combat casualty care. The National Defense Authorization Act for Fiscal Year 2012 directed GAO to review DOD's combat casualty care research and development programs. This report assesses whether DOD (1) uses a coordinated approach to plan this research; and (2) monitors and assesses this research to determine the extent to which it fills capability gaps and achieves other goals. GAO reviewed DOD's policies and documentation; interviewed officials from DOD and other federal agencies; and analyzed metrics DOD used to gauge the progress of its research. The biomedical research organizations of the Department of Defense (DOD) use a coordinated approach to plan combat casualty care research and development, but not all of DOD's nonmedical research organizations share information early in the research process. GAO has previously reported that federal agencies can enhance and sustain collaboration of efforts by using key practices, such as agreeing on roles and responsibilities and establishing the means to operate across organizational boundaries. In 2010, DOD established a planning committee to coordinate the efforts of organizations conducting combat casualty care research. The committee developed a draft charter in 2010 identifying members' respective roles and responsibilities. DOD issued the final charter in early January 2013, while GAO was conducting its review.
DOD also facilitated operation across organizational boundaries by colocating most of the organizations conducting combat casualty care research. However, DOD organizations that typically do not conduct biomedical research, such as the Army Research Laboratory, are not involved in DOD's efforts to coordinate this research. When these organizations conduct research relevant to combat casualty care, they do not always share information with appropriate officials early in the research process, as they are not aware of the need to coordinate early and may not fully understand medical research requirements. As a result, some researchers have had to repeat some work to adhere to these requirements. DOD has also taken steps to coordinate with other federal agencies that are involved in this research. The Office of the Assistant Secretary of Defense for Health Affairs (Health Affairs) and the Army Medical Research and Materiel Command (MRMC) assess the progress of combat casualty care research and development projects, but they have not assessed the extent to which this research fills gaps in DOD's capability to provide this care or achieves other DOD goals. Federal internal control standards state that agencies should assess their performance to ensure that they meet their objectives. Health Affairs and Army MRMC--the two organizations that fund most combat casualty care research and development--monitor research projects to determine whether to continue funding, make necessary corrections, or terminate these projects. However, in 2008 DOD identified gaps in its capability to provide combat casualty care, and although Health Affairs and Army MRMC have completed 44 research projects since then designed to address these gaps, they have not assessed whether the results of this research fill the gaps identified in 2008. In addition, Health Affairs and Army MRMC established other goals for this research portfolio to improve combat casualty care.
For example, in 2010, Health Affairs set goals to improve DOD's ability to control bleeding. However, neither organization has developed an assessment that comprehensively identifies each of the goals for the portfolio and includes information about the extent to which each goal has been met. Health Affairs and Army MRMC officials stated that they intend to complete a strategic roadmap for the portfolio, but GAO was unable to determine if the roadmap will include a plan for a comprehensive assessment of this portfolio. Without such a plan for a comprehensive assessment, these organizations cannot be sure the research they are conducting is producing results that most effectively improve combat casualty care to save lives on the battlefield. GAO recommends that DOD (1) communicate the importance of early coordination among DOD's nonmedical organizations and (2) develop and implement a plan to determine the extent to which research fills gaps and achieves other goals. DOD concurred with these recommendations.
The Army established the MWO program to enhance the capabilities of its fielded weapon systems and other equipment and correct any identified operational and safety problems. Modifications vary in size and complexity. For example, for a modification to the Bradley Fighting Vehicle, the Army is adding the driver’s thermal viewer to improve visibility during night-time and all-weather conditions, the battlefield combat identification system to reduce the potential for friendly fire casualties, and the global positioning receiver and digital compass system to improve navigation. In contrast to this major modification, the Army is adding updated seat belts to its fleet of High Mobility Multipurpose Wheeled Vehicles to improve safety. The Army is making a sizable investment to modify its fielded equipment. For fiscal years 1995-97, the Army received $5.1 billion for all of its modification programs, and the President has requested $6.7 billion for 208 modifications to the Army’s equipment for fiscal years 1998-2003. About 80 percent of that amount is for modifications to helicopters and other aviation items and to weapons and tracked combat vehicles. According to Army headquarters officials, as the Army’s budget has declined, less funding has been available for new systems. As a result, the Army will have to rely more heavily on the modification of its assets to correct deficiencies and enhance equipment capabilities. For example, to correct identified problems and add technological advances, the Army has approved 95 MWOs for its Apache helicopter since fielding this system in 1986. Management of the MWO program is shared by several Army headquarters organizations. Each organization has a wide range of decision-making responsibilities in developing and supporting weapon systems, which includes modifying weapon systems and equipment through the MWO program. 
The Army defined the roles and responsibilities of its headquarters organizations and MWO sponsors in its September 6, 1990, Interim Operating Instructions for Materiel Change Management, which superseded Army Regulation 750-10. One of the objectives cited in the instructions was to decentralize the management of each MWO and yet retain overall responsibility and oversight at the headquarters level. The instructions list numerous responsibilities for Army organizations; however, Army headquarters officials emphasized the following key duties for the organizations with primary responsibilities: The Deputy Chief of Staff for Operations has responsibility for prioritizing the required modifications for technical and safety issues, justifying and monitoring the overall budget, and allocating the approved funding. The Deputy Chief of Staff for Logistics has responsibility for overall supply and maintenance support and for knowing the status of MWOs. The Acquisition Executive has responsibility for modifications to correct or enhance the operations of weapon systems still being acquired. The Army Materiel Command has responsibility for modifications to correct or enhance the operations of weapon systems that are no longer being acquired and for other equipment items. In addition, the Army Materiel Command is executive agent for the headquarters and, as such, is responsible for knowing the status of MWOs and for ensuring that each MWO is complete and conforms with Army policy and procedures before the modification is done. Program sponsors for individual weapon systems and other equipment items are responsible for executing each MWO—acquiring the various components needed to modify the weapon systems and equipment, putting together the applicable MWO kit, ensuring logistical support items are addressed, and managing the modification process on a day-to-day basis.
The MWO program sponsors for systems still being acquired are managed under the Program Executive Office of the Army Acquisition Executive, and the program sponsors for systems no longer being acquired are managed under the commodity commands of the Army Materiel Command. In January 1997, the Army formed a process action team, including representatives from the organizations with program management responsibility, to study how the program could be improved. The Army also hired a contractor to assist in evaluating how automated information might be used to support program management. We coordinated with the process action team and have provided the team with information as our evaluation progressed. The process action team expects to provide its recommendations to the Army by October 1997. The Army does not currently maintain centralized information to track the status of equipment modifications. Instead, it relies on the individual program sponsors to capture the information they need to track the separate modifications for which they are responsible. As a result, Army headquarters and Army Materiel Command officials do not have the information they need to effectively oversee this highly decentralized modification program. Moreover, the information that Army headquarters officials and maintenance personnel have for tracking modifications may not be entirely accurate. Finally, field and depot maintenance personnel do not have ready access to the information they need to determine current equipment configurations, nor do they have ready access to the technical information they need to maintain the equipment once it is modified. Individual program sponsors decide how they will track the modifications for which they are responsible. Our review showed a variety of ways that system modifications are tracked. 
As a general rule, for high-cost systems such as M1 tanks, Bradley Fighting Vehicles, and helicopters, the command or program sponsors established databases showing systems that were modified and systems that were not. However, for high-density, widely dispersed systems such as M113 armored personnel carriers, trucks, and radios, program sponsors make very little or no attempt to track which systems were modified. To carry out its management functions, the Army Materiel Command had previously developed an integrated database to track the status of MWO installation and funding. However, the Command stopped using the system because the Army (1) discontinued funding to maintain the portion of the system used to track MWO installation and (2) canceled the remaining portion of the system because it was not chosen as a Department of Defense (DOD) standard system to track funding. As noted, a contractor is currently studying the automated data needs of the MWO program. The potential problems created by the lack of centralized information readily available to Army officials to track modifications were highlighted in a 1994 Army Audit Agency report. The report pointed out that the Army Materiel Command needed up-to-date equipment configuration information to satisfy requirements that pertain to readiness, safety, and compliance with laws. The report also noted that without a centralized information system, the Command’s current and future ability to plan for the sustainment of weapon systems was weakened. Furthermore, this could affect the Army’s current and future readiness position and adversely affect troop survivability. Army headquarters and Army Materiel Command officials responsible for formulating the MWO program budget and for ensuring that upgraded and enhanced equipment is available to satisfy the Army’s force structure have limited information about what MWO funds have been spent, what equipment has been modified, and what equipment still needs to be modified.
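The kind of centralized configuration tracking described above can be illustrated with a minimal sketch. This is purely hypothetical — the record fields, serial numbers, and MWO identifier below are invented for illustration and do not reflect any actual Army or DOD system — but it shows the basic idea: a registry that records which MWOs have been applied to each serialized item, so managers could query configuration status and identify equipment still awaiting an authorized modification.

```python
from dataclasses import dataclass, field

@dataclass
class EquipmentRecord:
    """One serialized end item (e.g., a tank or helicopter). Hypothetical schema."""
    serial: str
    system: str                      # e.g., "Apache", "M1 tank"
    applied_mwos: set = field(default_factory=set)

class MWORegistry:
    """Illustrative centralized registry of MWO installation status."""
    def __init__(self):
        self.items = {}              # serial number -> EquipmentRecord

    def add_item(self, serial, system):
        self.items[serial] = EquipmentRecord(serial, system)

    def record_mwo(self, serial, mwo_id):
        # Record that an MWO has been installed on a specific serialized item.
        self.items[serial].applied_mwos.add(mwo_id)

    def pending(self, system, mwo_id):
        """Serial numbers of a given system that still lack an authorized MWO."""
        return sorted(s for s, rec in self.items.items()
                      if rec.system == system and mwo_id not in rec.applied_mwos)

# Example: two aircraft of the same system; one has received a (made-up) MWO.
reg = MWORegistry()
reg.add_item("AH-001", "Apache")
reg.add_item("AH-002", "Apache")
reg.record_mwo("AH-001", "MWO-EXAMPLE-1")
print(reg.pending("Apache", "MWO-EXAMPLE-1"))   # the unmodified aircraft
```

With such a registry, questions the report says required time-consuming calls to individual program sponsors — which items in a deploying unit carry the latest configuration, and which still need a given modification — become simple queries.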
Due to the decentralized nature of the program, the Army budgets for MWOs through each program sponsor, who has discretion in spending and transferring funds. While the data available from program sponsors provide some information, Army headquarters officials told us they do not have ready access to this information and that it is insufficient to enable them to track budget expenditures. As previously stated, not all program sponsors track the status of their MWOs. While the information for tracked systems provides some degree of control over the configuration, such information is not available for all weapon systems and equipment. Moreover, headquarters officials maintain that these individual tracking systems do not have all the information they need to make informed decisions and are not readily accessible. The lack of timely information on equipment configuration could have potential adverse effects. For example, if the Army deployed a mechanized infantry division, it would need to know the latest configuration of the division’s tanks, Bradley Fighting Vehicles, helicopters, and trucks for mission considerations as well as to ensure that the appropriate parts needed for maintenance were on hand. To determine the latest configuration of this equipment, Army officials would have to contact the respective systems’ program sponsors to determine how many tanks, Bradleys, and helicopters of each configuration there were in the division—a time-consuming process. In addition, civilian aviation and Army ground maintenance personnel at Fort Hood, Texas, and Fort Carson, Colorado, told us that the accuracy of the databases may be suspect. For example, they said that in some instances modified parts had been removed from aircraft such as the Huey utility helicopter and nonmodified parts had been reinstalled. This occurred because either the unit did so intentionally or no modified parts were in stock when the new parts broke. 
As a result, the configuration of these aircraft and ground equipment is not always accurately portrayed in the database used by the maintenance personnel, and Army headquarters officials would not know the current configuration for these aircraft or ground equipment. Without the latest and most accurate configuration information, it is difficult to ensure that deploying units have the latest, most enhanced, and most survivable equipment. Logistics support is also complicated because planners do not know which types of spare parts, and how many, are needed to support the unit. Depot maintenance personnel at the Anniston Army Depot, Alabama, told us they need current and accurate configuration data to overhaul equipment but that they do not have such data. To overhaul equipment, they need to know whether any modifications or components are missing. Lack of good configuration data makes it difficult to accurately estimate the costs of overhauls and to have the proper kits and repair parts on hand. Officials said that, as a result, they expend additional labor for physical inspections and make allowances in their cost estimates to cover unanticipated problems. For example, depot personnel had to visually inspect 32 National Guard trucks in the depot for overhaul because they had no way of knowing whether two authorized modifications had been made when the vehicles arrived. When this happens, the overhaul program is delayed while depot personnel order the parts or kits. However, if MWO kits are not installed at the time the modification is made to the fleet, the kits are often no longer available. Field and support organization personnel also told us they have trouble identifying what the configuration of weapon systems and equipment should be and whether modifications have been made. They told us they need to know whether the configuration of weapon systems and equipment is up-to-date and what is required on the item in order to maintain it.
They said that this problem is especially acute for items that are transferred from other units. These officials said they had sometimes spent many hours inspecting equipment to determine its current configuration because determining whether modifications had been done was not easy. For example, during our visit to Fort Carson, Colorado, a maintenance chief said that all authorized modifications on two helicopters he had received from another geographic area were supposed to have been made, but in preparing them for deployment, a visual inspection showed some modifications had not been made. According to the chief, a contractor team had to make the necessary modifications before the aircraft could be deployed. For equipment with lower dollar values, such as trucks, no tracking information and no central list of the modification changes that should have been made are available. According to field personnel, the only way to determine the configuration of weapon systems or equipment is to do a physical inventory and compare the results to similar items that are already assigned to the unit. Maintenance personnel at several locations said that an information system that tracks both the completion of MWOs and any removal or transfer of major components would be useful. However, they would rather have this capability added to their existing maintenance information system than have an entirely new information system to maintain and use. We were told this tracking information will become especially critical in the future as more modifications involve software revisions. Without tracking all of the MWO changes, removal or transfer of major components, and software revisions, the configuration data recorded in the information system will be inaccurate. Field and support organization personnel told us that they also need up-to-date technical information to maintain equipment.
The Army’s interim guidance requires technical publications to be updated and distributed to field locations before modifications are made. However, maintenance personnel from Fort Hood, Texas, and Fort Campbell, Kentucky, told us that technical manual updates are published only on a yearly basis and that they do not receive updated technical publications in a timely manner. If the modification and resulting configuration change occur between updates, the unit may have to wait months before receiving the updated technical information. This delay not only prevents maintenance personnel from using the latest techniques to troubleshoot equipment but it may also result in wasted effort and impede supply personnel from ordering the correct repair parts. A division aviation maintenance officer at Fort Campbell cited several instances in which the lack of up-to-date technical manuals caused wasted effort or delayed the installation of the modification. For example, in July 1996, when division maintenance personnel modified the fuel subsystem on the Apache attack helicopter, they did not receive revisions to the supply parts manual. Subsequently, the aircraft was grounded and the maintenance team wasted many hours troubleshooting because the old manual did not identify the new fuel transfer valve. This new part would have been identified in the revised manual. In another instance, they had to delay the installation of the embedded global positioning system on the Apache by 2 weeks because the Apache program office did not provide changes to the maintenance test flight and operator manuals. The Army sometimes loses portions of its enhanced equipment capabilities achieved through equipment modifications because Army units cannot always obtain spare parts for its modified weapon systems and equipment. This occurs because program sponsors do not always order initial spare parts for the supply system when they procure MWO kits. 
Furthermore, they do not always modify the spare parts that are at the depot and unit level to the configuration of the new component. Army officials reviewing the MWO program believe that these problems occurred because Army regulations are not clear about whether program sponsors are supposed to provide initial spare parts when they acquire the MWO kits. As a result, Army units increase their efforts to keep equipment operational and ready. In addition, program sponsors and supply system personnel do not always follow policies and procedures to ensure that supply system records are updated to show the addition of new items and the deletion of replaced items. When the supply system records are inaccurate, the Army’s budget may not reflect accurate requirements for new spare parts to repair and maintain modified weapon systems and equipment. Some program sponsors have not used their limited funds to order initial spare parts for the supply system, according to Army officials responsible for the management of the MWO program. Ideally, initial spare parts would be provided to bridge the gap between the modification of equipment and the entrance of the replenishment spare parts into the Army’s supply system. Providing initial spare parts at the time of modification is needed because the supply system can take 18 to 24 months or more to provide replenishment spare parts, according to aviation supply representatives. According to Army civilian aviation maintenance personnel at Fort Hood and Army aviation and ground maintenance personnel at Fort Carson and Fort Campbell, program sponsors did not always modify spare parts at unit and depot locations when equipment was modified. For example, we were told that the Apache attack helicopters were being modified with an improved fuel subsystem, but at least four major components were not available in the depot supply system. 
As a result, aviation maintenance personnel had to take parts from five MWO kits intended for other aircraft. This MWO had been ongoing for 15 months. Aviation personnel said this occurred because at least some portion of the components stored at the depot had not been modified to the new configuration. One program sponsor told us his office was not required to buy initial spare parts or modify parts located at depots when they modified equipment in the field. However, the Army’s interim operating instructions require program sponsors to ensure all necessary integrated logistical support parts items are addressed. Furthermore, according to Army Regulation 700-18, ordering initial spare parts is part of the total integrated logistical support package for systems and end items. This regulation, which does not specifically refer to modifications, requires program sponsors to coordinate logistical support requirements with all agencies and activities concerned with initial materiel support for weapon systems and equipment. According to Army headquarters officials, both the interim guidance and the regulation require program sponsors to provide initial spare parts and to modify spare parts, but neither may be clear enough to ensure that all program sponsors do it for modifications. In addition, Army headquarters officials told us that when the Army Materiel Command used configuration control boards, composed of technical and administrative representatives, to ensure the MWOs were complete and conformed with Army policies and procedures, the need to buy spare parts was part of the approval process. The Army Materiel Command lost this quality control when the reviews were decentralized to the program sponsors. Army personnel at the four locations we visited told us that they had to take additional measures to support their equipment because they had experienced problems obtaining spare parts.
They stated that if spare parts were not available, they took components from MWO kits. For example, the only way to obtain spare parts for the new fuel control panels—part of the Apache attack helicopter fuel crossover modification—was to take them from kits that were needed to modify other Apache helicopters. In addition, they had obtained parts outside the normal supply system by fabricating parts locally and by buying parts directly from contractors with local funds. These activities have led to higher costs and reduced efficiencies at units we visited. In reviewing 73 MWO cases, we attempted to determine whether the Army had properly phased out old spare parts and added new items to its supply system to support newly modified equipment. Because the Army does not have an automated list of major components in MWOs, we encountered difficulties in trying to make this analysis and could not identify many of the major components. We compared information on those major components that we could identify with the Army’s budget justification report and inventory records and found many irregularities. For example, national stock numbers had not been assigned for some components; some items with national stock numbers could not be tracked into the supply system; and relationship codes, which show whether old items are to be phased out of the supply system, were not always assigned. We were unable to measure the impact of these irregularities from our relatively small sample of MWOs; however, we believe that they indicate long-standing weaknesses in the Army’s management of spare parts. For example, using a larger universe, we reported on similar errors in the Army’s budget justification report in December 1995. In that report, we noted that the Army’s budget justification report for spare parts contained numerous errors, including errors in the relationship codes and inaccurate records for items being repaired at maintenance facilities.
We reported that as a result of the errors, the Army lacks assurance that its budget requests represent its actual funding needs for spare parts. Field maintenance personnel cited numerous problems in modifying their weapon systems and equipment. For example, they stated that (1) the completion of multiple MWOs on the same piece of equipment is not always coordinated, or not all equipment is modified at the same time; (2) they do not always receive adequate notice of MWOs; and (3) modified equipment is not always compatible with other equipment. As a result, they believe some units are losing equipment capability or experiencing reduced reportable mission time, the cost to install MWOs is increasing, and the training of unit personnel may be adversely affected. Army headquarters and Army Materiel Command officials believe these problems are also occurring because of their loss of oversight and control over the program and the inconsistent implementation of policies and procedures by program sponsors, especially in negotiating fielding plans with the affected organizations. Maintenance personnel told us that the completion of multiple MWOs on the same equipment is not always coordinated. For example, the National Guard is testing a program to place some of its equipment in long-term preservation storage. Equipment in long-term storage testing at the Camp Shelby, Mississippi, mobilization and equipment training site has been taken out of storage several times so modifications can be made. As a result, the program was disrupted, and additional labor hours were expended, according to a National Guard official. The lack of coordination in the future could have even greater cost implications because the Guard is planning to place 25 percent of its equipment in preserved storage, and if it implements recommendations we are making in another report, the Guard will put an even larger percentage in storage.
In another example, an aviation maintenance chief told us that two labor-intensive modifications were planned for consecutive years on each of 33 Blackhawk utility aircraft belonging to two units at Fort Carson. He said that making both modifications concurrently made more sense. Since a modification causes an aircraft to be grounded, the additional downtime to install each modification consecutively would adversely affect the reportable mission time for each unit. Maintenance personnel also noted that inefficiencies had resulted when not all modifications were done at the same time. For example, when the Army upgraded the armament fire control system on the M1 tank at the Camp Shelby mobilization and training site, a contractor team installed new software cards in the fire control system and 2 months later, a team from the Anniston Army Depot made needed mechanical adjustments to the same tanks. According to Army officials, both functions could have been done at the same time, thereby reducing the time the unit was without its equipment. The direct support maintenance chiefs and general support maintenance personnel at Fort Hood and Fort Carson told us they did not always receive adequate notice of modifications. This situation disrupted their ability to meet training schedules that were set up 12 months in advance and interfered with their ability to maintain their equipment. After some modifications are done, some equipment does not always work together properly, according to aviation maintenance personnel at Fort Hood. For example, although civilian aviation personnel at Fort Hood modified the Blackhawk utility helicopters to work with night vision goggles, they could not get replacement radios from a different program sponsor that were compatible with the night vision goggle system, and night operational capability was lost. 
Army headquarters and Army Materiel Command officials believed these problems had occurred because of their loss of oversight and control over the program and the inconsistent implementation of policies and procedures by program sponsors. The Army’s Interim Operating Instructions for Materiel Change Management requires individual program sponsors to prepare a fielding plan for each modification. The fielding plan calls for coordination and adequate notice when a modification is to be done. The highly decentralized nature of the MWO program underscores the need for Army headquarters officials to have ready access to program data and information and adequate management controls to ensure that program implementation complies with policies and procedures. Although the database they used was discontinued in part because it was not accepted as a standard DOD system, Army headquarters officials told us that the resulting unavailability of information on the status of MWOs, the status of funding, and the configuration of weapon systems and equipment has made it difficult for managers at all levels to carry out their respective responsibilities effectively and make informed decisions on such matters as funding, deployment, and logistical support of weapon systems and equipment. The program sponsors have been inconsistent in providing initial spare parts, ensuring that spare parts are added to the supply system, and keeping technical information updated for the field maintainers. Furthermore, program sponsors have not always adequately coordinated the completion of MWOs with other sponsors and with the field maintainers. The Army guidance on these processes is not clear, and the headquarters’ ability to ensure that existing policies and procedures were complied with was diminished when the responsibilities of configuration control boards were transferred to program sponsors. 
As a result, field maintainers have experienced difficulty in obtaining spare parts and current technical information and have experienced inefficiencies in getting their weapon systems and equipment modified. Program sponsors have varying amounts of information on their MWOs, ranging from none to fairly complete, and do not have ready access to information needed to coordinate with other program sponsors. Those program sponsors without a database are limited in managing their own programs. Field maintainers do not have easy access to information on MWOs that should have been installed or scheduled for future installation. At the unit level, the lack of information has manifested itself in various inefficiencies related to the coordination and scheduling of the installation of MWOs and has sometimes prevented units from knowing the configuration of their equipment. It is important that these modifications be done as efficiently as possible to minimize the time that equipment is unavailable to units. The Army’s creation of a process action team to develop revised policies and procedures and its hiring of a contractor to examine automated information needs are steps toward correcting the weaknesses noted in this report. Improved management of this program would provide more assurance that improved capabilities are integrated into the Army’s equipment effectively, efficiently, and expeditiously. 
In considering the upcoming results of the MWO process action team, we recommend that the Secretary of the Army direct actions necessary to provide managers at all levels ready access to the information they need to oversee, manage, and implement the MWO program and to ensure compliance with Army policies and procedures; clarify regulations to ensure that program sponsors and supply system personnel provide proper logistical support for modified equipment, including (1) ordering appropriate initial spare parts when MWO kits are ordered, (2) updating technical information and providing it to units when MWO kits are installed, and (3) properly phasing out old spare parts and adding new items to its supply system; and establish an effective mechanism for program sponsors to coordinate and schedule their MWOs, among themselves and their customers, to reduce the amount of manpower and to minimize the reportable mission time required to complete the MWOs. In written comments on a draft of this report, DOD concurred with our findings and our recommendations (see app. I), acknowledging that improvements to the weapon system and equipment modification program were needed. Regarding our first recommendation, DOD agreed that managers at all levels need ready access to information to oversee, manage, and ensure compliance with Army policies and procedures. It noted that the process action team is developing a recommendation for an MWO integrated management information system that would obtain information from already established databases. DOD believes that such a system would provide a cost-efficient, nonlabor-intensive management tool to assist managers in tracking all facets of MWOs. Approval of a proposal for a new study effort to design and develop this system is pending. 
DOD also agreed with our recommendation that the Secretary of the Army clarify regulations to ensure that program sponsors and supply system personnel provide proper logistical support for modified equipment. DOD stated that Army Regulation 750-10 is being totally revised to clearly define roles and responsibilities, thereby making it a joint acquisition and logistics regulation that can be used by both communities. The revised regulation will adopt a modified materiel release process that would address the logistical support issues raised in our recommendation as well as other areas of concern identified by the process action team. Finally, DOD agreed with our recommendation that the Secretary of the Army establish an effective mechanism for program sponsors to coordinate and schedule their MWOs, among themselves and their customers. DOD stated that the revised Army Regulation 750-10 will address the issue of coordination between program sponsors and ensure that MWOs are completed at all units at one location at the same time where possible. We believe that these actions, if properly implemented, will help to further improve the effectiveness and efficiency of this program. We interviewed officials and reviewed program records at the Army Materiel Command, Alexandria, Virginia; the Army Aviation and Troop Command, St. Louis, Missouri; and the Army Tank-Automotive and Armament Command, Warren, Michigan, to identify how the MWO program works and to identify any problems. We also interviewed officials and reviewed records at the U.S. Army Materiel Command; the Assistant Secretary of the Army for Research, Development and Acquisition; the Deputy Chief of Staff for Logistics; and the Deputy Chief of Staff for Operations at Army headquarters to determine their role in the modification program and what information they need to manage funding, resource allocations, deployment decisions, and supportability. 
We also interviewed Directorate of Logistics personnel and general and direct support personnel, reviewed records, and made on-site observations at Fort Hood, Texas; Fort Campbell, Kentucky; and Fort Carson, Colorado, to determine whether they were having any difficulties with the completion, scheduling, or supply support obtained for MWOs. In addition, we interviewed civilian and contractor personnel who provided regional aviation maintenance support at Fort Hood and Fort Campbell and reviewed records to determine whether they were experiencing similar problems. Furthermore, we interviewed officials at Anniston Army Depot, Alabama, and Camp Shelby, Mississippi, to determine how the MWO programs affect maintenance and overhaul programs. To evaluate how well the Army integrates its MWO program with the supply support system, we judgmentally selected 73 recent MWOs for aviation systems; weapons and tracked combat vehicle systems; and small arms. The Army does not have a complete list of MWOs, MWO kits, or the major components in the kits. It has automated data only on MWOs for high-dollar weapon systems. For the MWOs selected, we attempted to manually identify the major components in the kits, enter them into a database, and compare them to the Army’s automated inventory (April-June 1997 master data record) and budget justification (Sept. 1996 budget stratification report) records. We were not able to quantify the problems with the supply system identified in this report because (1) we could not identify a significantly large universe of new replacement items and match them with the related item being phased out of the system and (2) for the items identified, we could not consistently trace them into the automated inventory and budget justification records. 
Furthermore, we could not determine the extent of some of the problems identified through our field visits because some of the newer MWOs in our sample have not been operational long enough for their parts to fail. We have used the automated budget justification records and automated inventory databases in prior evaluations and reported that they contain significant errors regarding the relationship codes between secondary inventory items being added to the system and the replaced items. These databases are, however, the only available information on inventory and budget justifications for Army secondary items. We performed our review between January 1996 and August 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense and the Army; the Director, Office of Management and Budget; and other interested parties. Please contact me on (202) 512-5140 if you have any questions concerning this report. Major contributors to this report are listed in appendix II: Gary Billen, Mark Amo, Leonard Hill, Robert Sommer, and Robert Spence.
Pursuant to a congressional request, GAO provided information on the Army's management of its modification work order (MWO) program, focusing on: (1) the availability of information needed by Army headquarters and field personnel to effectively oversee and manage the MWO program; (2) the availability of spare parts needed by personnel in the field to maintain modified equipment; and (3) field personnel's experiences in implementing the MWO program. GAO noted that: (1) Army headquarters officials and Army Materiel Command officials no longer have the information they need to effectively oversee and manage the MWO program; (2) this occurred because the centralized database to track installation and funding was discontinued; control over modification installation funding was transferred from the headquarters level to individual program sponsors; and the authority over configuration control boards, which ensured the completeness and compliance of MWOs with policy, was transferred to individual program sponsors; (3) as a result, Army officials do not have an adequate overview of the status of equipment modifications across the force, funding requirements, logistical support requirements, and information needed for deployment decisions; (4) the lack of information is also a problem at field units; (5) maintenance personnel have not always known which modifications should have been made to equipment or which modifications have actually been made; (6) in addition, maintainers of equipment have not always received the technical information they need in a timely manner to properly maintain modified equipment; (7) maintenance personnel in the field have had difficulty obtaining spare parts to maintain modified equipment because program sponsors frequently had not ordered initial spare parts when they acquired modification kits; (8) Army officials believe these problems occurred because they lost oversight and control of the program and policies and procedures were not being 
consistently applied by the individual program sponsors; (9) because spare parts have often not been available, maintenance personnel have made additional efforts to maintain modified equipment; (10) supply system personnel have not always followed policies and procedures to ensure that supply system records were updated to show the addition of new spare parts and the deletion of replaced spare parts; (11) as a result, the Army's budget for spare parts may not reflect accurate requirements for new components to repair and maintain modified weapon systems and equipment; (12) maintenance personnel in the field have also experienced a variety of problems in implementing MWOs; (13) maintainers have not always received adequate notice of pending modifications, and training schedules and equipment maintenance have been adversely affected; (14) GAO was told that various items of equipment did not always work together once some modifications were made; and (15) according to Army officials, these problems also occurred because of their loss of oversight and control.
Multimission stations, formerly referred to as small boat stations, are involved in all Coast Guard missions, including search and rescue, recreational and commercial fishing vessel safety, marine environmental response, and law enforcement activities such as drug and migrant interdiction. Search and rescue has traditionally been the stations’ top priority. However, after the terrorist attacks of September 11, 2001, the Coast Guard elevated the maritime homeland security mission to a level commensurate with the search and rescue mission. Congress’s action to provide the Coast Guard with an additional $15.7 million for these stations in fiscal year 2003 was part of a longer-standing effort to address readiness concerns. In 2001, Congress directed the Department of Transportation’s Office of Inspector General (OIG) to conduct a thorough review of the operational readiness capability of stations, following a series of accidents involving search and rescue efforts initiated at these stations. The OIG reported that readiness levels at stations had been deteriorating for more than 20 years and were continuing to decline. In response, Congress provided an earmarked appropriation in fiscal year 2002 and directed the Inspector General to review the use of the earmarked funds. The OIG found that the Coast Guard generally complied with the intent of the earmark but also concluded that improving operational readiness at stations would require a substantial and sustained investment. The OIG also recommended that, to improve congressional oversight of expenditures, the Coast Guard improve its accounting system to allow for the tracking of certain station expenditures. 
Since the additional funding efforts began, in fiscal year 2002, Coast Guard officials told us they have, among other actions, added approximately 1,100 personnel to stations, increased levels of personal protection equipment for station personnel, and started to replace old and nonstandard boats with new standard boats. In December 2002 the Coast Guard also developed, in response to a recommendation from the OIG in its 2001 report and at the direction of the Senate Appropriations Committee, a draft strategic plan to guide the recruiting and hiring of personnel. In its 2002 report, the OIG criticized the plan for being too general in nature, specifically regarding how and when the Coast Guard will increase staffing, training, equipment, and experience levels at stations. Because the Coast Guard’s automated databases are not set up in such a way that they can fully identify expenditure data at the station level, we were unable to fully determine expenditures for all four categories. However, through a combination of data runs and unit surveys performed at our request, the Coast Guard was able to estimate staffing and personnel retention expenditures, and develop actual expenditure data for personal protection equipment (PPE). Within these three categories, the Coast Guard estimates it spent at least $291 million in fiscal year 2003. The information available by category was as follows: Staffing: The Coast Guard incurred estimated costs of $277.6 million for 5,474 active duty personnel assigned to stations during fiscal year 2003. This figure does not include costs for the 1,657 reserve personnel assigned to stations, or an unknown number of auxiliary personnel. PPE: Reported expenditures for this category totaled $7.5 million. Personnel retention: Expenditure data for all aspects of this category are not available. 
However, in one specific category—reenlistment bonuses—the Coast Guard expended $5.9 million for bonuses to boatswain’s mates and machinists assigned to stations. Training: Coast Guard officials attempted to identify estimated costs of training station personnel at national training centers during fiscal year 2003 but could not provide reliable data for this category. Officials told us the Coast Guard has separate databases that track costs incurred by the national training centers, but do not have a database that can identify training costs expended on personnel after they have been assigned to stations. Further, expenditures incurred by stations in providing on-the-job training (a significant component of total training provided to station personnel) were not available because the Coast Guard, like many agencies, does not track time spent on this type of training. Using fiscal year 2002 data derived through similar analyses, we determined that estimated station expenditures for fiscal year 2003 exceeded fiscal year 2002 levels by at least $20.5 million—or $4.8 million more than the $15.7 million earmarked appropriation. Table 1 shows the differences in estimated expenditures (levels of effort) by fiscal year for the three categories that had available data. Only partial data were available on personnel retention, and no data were available on training expenditures. Although expenditure data for all personnel retention efforts were not available, the Coast Guard was able to provide annual expenditure data for reenlistment bonuses offered to selected multimission station personnel. Other information we gathered in discussions with Coast Guard personnel indicates that the Coast Guard’s levels of effort in station training also increased during fiscal year 2003. 
In fiscal year 2003, the Coast Guard increased the number of instructors and classrooms at two national training centers, which provide training to station and other personnel, in order to increase the number of total students graduated. Appendix I describes our methodology for developing these estimates, and appendix II contains a more detailed description of the data in each category. Because complete comparative data could not be identified for all four categories, we cannot say with certainty that Coast Guard expenditures for multimission stations in fiscal year 2003 were at least $15.7 million above fiscal year 2002 levels. However, we believe this is a reasonable conclusion based on the following: Although the staffing data provided to us are based on budget cost formulas, we determined that the data are sufficiently reliable for the purpose of demonstrating increases in staffing levels between the two years. Discussions with station officials indicate that station personnel have sufficient levels of PPE. In its fiscal year 2002 audit, the OIG reported that the Coast Guard did not provide PPE for 69 percent of the personnel added to stations during fiscal year 2002. Our visits to a limited number of stations—8 out of 188 stations—and discussions with station personnel, indicated that all active and reserve personnel assigned to these stations—even newly assigned personnel—had received what they considered to be an appropriate level of PPE (basic and cold weather). Although available quantitative data were limited for this category, over the past few years the Coast Guard has implemented a variety of financial incentives aimed at improving personnel retention. Training officers at the 8 stations we visited indicated that training for station personnel did not decrease in fiscal year 2003 compared with the prior year. 
In addition, in fiscal year 2003 the Coast Guard increased training resources in two areas—the boatswain’s mate training school increased its training output by over a third, and unit training provided by headquarters to station personnel also increased. The Coast Guard did not have adequate processes in place to sufficiently account for the expenditure of the entire $15.7 million earmarked fiscal year 2003 appropriation or to provide assurance that these earmarked funds were used appropriately, as set forth by federal management and internal control guidelines. The purpose of an earmark is to direct an agency to spend a certain amount of its appropriated funds for a specific purpose. Federal guidelines and government internal control standards indicate that agencies should account for the obligation and expenditure of earmarked appropriations both as a sound accounting practice and to demonstrate compliance in the event of an audit. The expectation that agencies will be able to effectively demonstrate compliance in their use of earmarked funds stems from the following: Office of Management and Budget Circulars: These circulars hold that agencies’ management controls should reasonably ensure that laws and regulations are followed. The Federal Managers’ Financial Integrity Act: This act establishes specific requirements regarding management controls and directs agency heads to establish controls to reasonably ensure that obligations and costs comply with applicable laws. Standards for Internal Control in the Federal Government: These standards specify that internal controls should provide reasonable assurance that an agency is in compliance with applicable laws and regulations. They also direct that internal controls and transactions should be clearly documented and the documentation should be readily available for examination. 
Further, the Department of Homeland Security (DHS), the parent agency for the Coast Guard, recently issued budget execution guidance that encourages component agencies to identify the obligation and expenditure of earmarked funds separately from other appropriated funds. (This guidance was issued in fiscal year 2004 after the Coast Guard had obligated the fiscal year 2003 earmark.) In response to a recommendation made in our recent report on the reprogramming of Federal Air Marshal Service funds, DHS has agreed to make this a requirement. The Coast Guard told us at the onset of our review that it did not have adequate processes in place to collect data with respect to earmarked expenditures. Although officials had taken steps to account for PPE expenditures (because purchase receipts could be easily tracked), they did not have adequate processes in place to account for earmarked funds spent on staffing and training needs at the station level. Consequently, the Coast Guard could not demonstrate conclusively that it was complying with the earmark. The Coast Guard’s databases were not designed for this purpose and would have to be modified to provide actual expenditure data for stations, according to Coast Guard officials. On the basis of lessons learned from the OIG’s audit in fiscal year 2002, which faulted the Coast Guard for not having cost accounting systems in place to allow for the tracking of certain multimission station expenditures, Coast Guard officials developed a plan to show how various allocations would add up to $15.7 million if expended. The plan, although useful as an indicator of the Coast Guard’s intentions, is not sufficient to show that the Coast Guard had expended the earmarked appropriation as directed. 
Coast Guard officials also told us that, in response to the OIG’s 2002 recommendation to allow for the tracking of certain station expenditures, they are assisting DHS in developing a new enterprise-wide financial system called “electronically Managing enterprise resources for government effectiveness and efficiency” (eMerge). As part of the overall system requirements, the Coast Guard expects that eMerge will allow for the tracking of such expenditures; however, the Coast Guard was unable to provide us with system specifications prior to the issuance of this report. On the basis of available data and other information, the Coast Guard appears to have met the Congress’s requirement to spend at least $15.7 million more on multimission stations in fiscal year 2003 than in fiscal year 2002. However, the Coast Guard does not have adequate processes in place to track actual expenditures related to earmarks. Rather, agency officials could provide only estimates for much of the station expenditures. Without the ability to accurately and completely account for these expenditures, the Coast Guard cannot provide assurance that it complied with the earmark. Moreover, Congress’s ability to hold the Coast Guard accountable for future earmarks is seriously diminished. In light of our recent recommendation to DHS on the need to track earmarks—and its subsequent concurrence—we believe the Coast Guard should take immediate steps to ensure that future accounting systems include the capability to track earmarks. To improve the Coast Guard’s ability to respond to congressional oversight and to provide greater assurance that earmarked funds are used appropriately, we recommend that the Secretary of Homeland Security direct the Commandant of the Coast Guard to develop, in accordance with the fiscal year 2004 departmental guidelines, processes to accurately and completely account for the obligation and expenditure of earmarked appropriations. We requested comments on a draft of this report from the Secretary of Homeland Security or his designee. 
On May 14, 2004, Coast Guard officials, including the Chief, Office of Budget and Programs, provided us with oral comments, with which the DHS GAO Liaison concurred. Coast Guard officials generally agreed with the facts and our recommendation to better track earmarked expenditures. We did not review the Coast Guard’s financial databases to determine if modifications to them would be necessary to better track earmarked expenditures (obligations). Coast Guard officials, however, expressed concern that developing better procedures to track some station expenditures (obligations), such as those for staffing or training, will prove challenging and could be costly due to the need to significantly modify their financial systems. Officials stated that accounts are centrally managed and specific expenditures would not be easily tracked at the station level. The Coast Guard officials said they plan to explore this issue more thoroughly and to examine how organizations with comparable activities have overcome similar obstacles to tracking earmarked funds. The officials also provided a number of technical clarifications, which we incorporated where appropriate. We will send copies of this report to interested congressional committees and subcommittees. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report or wish to discuss the matter further, please contact me at (415) 904-2200 or Randall B. Williamson at (206) 287-4860. Additional contacts and key contributors to this report are listed in appendix III. 
We used a variety of approaches in our work to determine the amount of the general appropriation the Coast Guard expended on multimission stations in fiscal year 2003 across the four areas covered by the earmark—staffing, personal protection equipment (PPE), personnel retention, and training—and whether this amount exceeded by $15.7 million the level of effort expended in fiscal year 2002. Because Congress directed that we review the amount of general appropriations expended on station readiness needs, we did not review expenditures of funds received through supplemental appropriations. We determined at the outset of our work that Coast Guard databases did not contain information that would allow us to fully report on station expenditures for the four earmark categories. To identify available information and possible limitations of the information, we worked extensively with Coast Guard headquarters officials from the Offices of Budget and Programs; Financial Analysis; Boat Forces; Resource Management; Workforce Management; Personnel Command; and Workforce Performance, Training and Development. We also obtained documentation from headquarters, stations, groups, and districts. After reviewing the reliability of available data and the feasibility of Coast Guard officials’ proposals for gathering additional data, we agreed on a combination of expenditure and allocation data, which would be collected through special data runs, analyses, and unit surveys. Coast Guard officials provided data for three of the four categories. Although officials attempted to develop information on training costs, they were not able to produce reliable data. Some of the information we needed was obtained not at headquarters but at specific Coast Guard sites, which we judgmentally selected according to size, location, and type. 
The specific data and analyses used to develop estimates on each of the four categories were as follows: Staffing: To determine the number and cost of personnel assigned to multimission stations, we requested Coast Guard personnel expenditure data for fiscal years 2002 and 2003, but we were told that expenditure data were not available at the station level. To develop estimated staffing costs, Coast Guard officials merged information from personnel and position databases to identify the number of personnel assigned to stations and then applied a personnel cost formula to arrive at total estimated costs. Developing estimates was complicated because the fiscal year 2002 data were developed from a different database than the fiscal year 2003 data, and because the Coast Guard has more personnel assigned to stations than actual authorized (or funded) positions, a variance that requires periodic adjustment of the databases. However, after discussing these factors at length with Coast Guard officials, we determined that the data developed by the Coast Guard were sufficiently reliable for the purpose of providing estimates of expenditures for fiscal years 2002 and 2003. The following Coast Guard offices contributed to the methodology and process for developing the data: Budget and Programs, Resource Management, Workforce Management, and Personnel Command. PPE: To obtain fiscal year 2003 expenditure data for this category, we asked the Coast Guard to survey all 188 stations and their oversight units. Each station and unit was asked to provide the total amount of fiscal year 2003 funds spent on PPE for personnel assigned to the station during the year. These totals included expenditures made for station personnel at the group and district levels as well. To verify the accuracy of these data, we reviewed original expenditure documentation for a judgmentally selected sample of 29 stations. 
On the basis of this documentation, we independently quantified PPE expenditures for each station. Our count of total PPE purchases at the 29 stations was 9 percent higher than the total provided by the Coast Guard (our count was 4 percent less than the Coast Guard's after removing expenditures for one outlier station). Coast Guard officials attributed the difference to errors made by station personnel when compiling the expenditure data. As a result of these differences, however, we refer to the total expenditure for fiscal year 2003 as an estimate. Because Coast Guard officials considered gathering expenditure data for fiscal year 2002 too labor intensive for station personnel, given their current workloads, we used the Coast Guard's data on planned PPE expenditures for fiscal year 2002. After reviewing possible limitations in the PPE data, we determined that the data provided were sufficiently reliable for the purpose of providing estimates of expenditures. The PPE planning data were provided to us by the Offices of Boat Forces and Budget and Programs.

Personnel retention: We were not able to determine total retention expenditures because the Coast Guard does not specifically track these costs, and retention efforts encompass a diverse array of direct and indirect activities. We were able to identify certain direct activities—selective reenlistment bonus expenditures for multimission stations and various financial incentives available to Coast Guard personnel—and some indirect incentives. After reviewing how data provided by the Personnel Services Center on selective reenlistment bonus expenditures were collected and maintained, we determined that the data were sufficiently reliable for the purposes of this report. The personnel retention expenditure data were provided to us by the Office of Budget and Programs.
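The outlier sensitivity in the PPE verification above is simple percentage arithmetic. A toy illustration (the station-level figures below are invented; only the comparison logic follows the text):

```python
# Hypothetical station totals: our independent tally vs. the Coast Guard-reported
# total. Figures are illustrative only, chosen to reproduce the +9%/-4% pattern.
gao_counts = {"station_a": 500, "station_b": 364, "outlier": 226}
cg_counts = {"station_a": 520, "station_b": 380, "outlier": 100}

def pct_diff(ours: float, theirs: float) -> float:
    """Percent difference of our independent count relative to the reported total."""
    return round((ours - theirs) / theirs * 100, 1)

with_outlier = pct_diff(sum(gao_counts.values()), sum(cg_counts.values()))  # +9.0
without_outlier = pct_diff(
    sum(v for k, v in gao_counts.items() if k != "outlier"),
    sum(v for k, v in cg_counts.items() if k != "outlier"),
)  # -4.0
```

A single station with a large discrepancy flips the sign of the aggregate difference, which is why the report treats the fiscal year 2003 total as an estimate.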
Training: The Coast Guard was unable to provide actual or estimated expenditure data for training multimission station personnel in fiscal years 2002 and 2003. Officials from the Office of Budget and Programs and the Office of Workforce Performance, Training, and Development told us at the outset of our review that they would not be able to identify total training costs because the Coast Guard does not track the amount of time station personnel devote to on-the-job training (which accounts for a significant amount of total training). Headquarters officials attempted to obtain data on the estimated annual costs for training station staff at the Coast Guard's national training centers by cross-referencing data from multiple databases and applying a cost formula. However, Coast Guard officials identified a number of serious anomalies in the data and concluded the data were too unreliable to be used.

To determine whether the Coast Guard had adequate processes in place to account for the expenditure of the $15.7 million earmarked appropriation, we interviewed and obtained documentation from stations, groups, and districts. We also interviewed and obtained documentation from officials in the following headquarters offices: Boat Forces, Budget and Programs, and Financial Analysis. Further, we studied the Coast Guard's funding plan, which showed how the earmark was intended to be spent. We also reviewed federal management guidelines and government internal control standards to identify earmark accountability requirements that apply to agencies. The $15.7 million earmark provided in the Coast Guard's fiscal year 2003 appropriation called for funds to be spent across four categories of multimission station needs—staffing, PPE, personnel retention, and training.
In determining the amount of funds spent by the Coast Guard in 2003 on station needs and whether this amount exceeded the fiscal year 2002 level of effort by $15.7 million, we also developed cost information for three of the four categories. Coast Guard officials attempted but were unable to develop reliable data on the cost of training station personnel during fiscal years 2002 and 2003. This appendix has two main sections. The first presents additional information about estimated station expenditures in the areas of staffing, PPE, and personnel retention in fiscal year 2003, and the second contains additional information about the changes that occurred between fiscal years 2002 and 2003. Using a combination of estimated and actual expenditure data, we determined that estimated fiscal year 2003 costs for staffing, PPE, and personnel retention efforts at stations amounted to at least $291 million. The Coast Guard could not provide us with the actual amount of fiscal year 2003 appropriation funds spent on station staffing because the agency’s automated databases do not fully identify personnel expenditures at the station level. However, using a combination of budget and personnel data, officials were able to estimate that in fiscal year 2003 the Coast Guard incurred costs of $277.6 million to support 5,474 active duty station personnel. This estimate does not include costs for the 1,657 reserve personnel assigned to stations in fiscal year 2003, nor does it include the costs of volunteer auxiliary personnel who assisted in station operations during the year. The Coast Guard did not calculate estimated expenditures for reservists because of the complex and labor-intensive nature of the analysis. Coast Guard officials determined that the agency spent approximately $7.5 million in fiscal year 2003 on PPE for station personnel. As shown in table 2, the cost of a total basic PPE outfit in fiscal year 2003 was $1,296. 
The cost of a cold weather PPE outfit, which is used by personnel working at stations where the outdoor temperature falls below 50 degrees Fahrenheit, was $1,431. (Figure 1 shows a station crew member in cold weather PPE.) A May 2002 Coast Guard Commandant directive emphasized the importance of proper supplies and use of PPE as one of the top priorities of Coast Guard management. In this directive, the Commandant cited an internal research report that attributed 20 percent of the total risk facing boat personnel to exposure to extreme weather conditions. The directive also states that the use of appropriately maintained PPE could improve the Coast Guard's operational capability.

The Coast Guard provided data demonstrating how it promotes personnel retention through a variety of direct and indirect incentives. Direct incentives are financial benefits paid directly to the individual, while indirect incentives include projects, such as facility improvements, that may indirectly contribute to retention by increasing staff morale. Coast Guard officials provided expenditure data for selected direct incentives provided to station personnel in fiscal year 2003 because officials could not quantify the total amount of funds expended on direct incentives. Likewise, the total amount expended on indirect incentives cannot be readily identified because of the numerous and varied nature of the efforts. Coast Guard's direct financial incentives include selective reenlistment bonuses. During fiscal year 2003, the Coast Guard spent $5.9 million on 312 selective reenlistment bonuses for station personnel—$4.2 million of this went to boatswain's mates while the remaining $1.7 million went to machinery technicians. A variety of other financial benefit improvements were also recently implemented: Between fiscal year 2003 and fiscal year 2004 the Coast Guard increased the surfman pay premium by 33 percent.
Since fiscal year 2000 the average portion of housing costs paid by personnel has decreased annually, going from 18.3 percent in fiscal year 2000 to 3.5 percent in 2004; in 2005 this expense will be reduced to zero. Since fiscal year 2002 enlisted personnel have been entitled to a basic allowance for food. Before fiscal year 2002 they received no funds for food purchased outside of a Coast Guard galley (kitchen). Since fiscal year 2002 first-term enlisted personnel have received a “dislocation allowance” that provides funds for rental deposits and other incidentals that may occur when personnel are required to move. Since fiscal year 2003 junior personnel have been able to ship greater weights of household goods when transferring stations. During fiscal year 2004 the death gratuity issued to assist survivors of deceased Coast Guard active personnel doubled. Multiple indirect Coast Guard efforts also serve as personnel retention tools by improving staff morale. At our request, Coast Guard officials asked 29 (15 percent) of the 188 multimission stations to provide data on estimated expenditures incurred for projects that indirectly contributed to staff retention. For the 24 stations that responded, infrastructure and lifestyle improvements totaled over $350,000 in fiscal year 2003. Improvements cited by multimission stations include such items as new furniture, sports equipment, televisions, satellite TV service, and entertainment systems. According to a Coast Guard official, the source of funds for these improvements can be station, group, or district operating budgets or donations by Coast Guard support groups. Table 3 shows examples of some of the projects cited by the 24 survey respondents. 
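The "at least $291 million" figure cited at the start of this section is the sum of the three category estimates (training could not be estimated, so the total is a floor). A quick check of the arithmetic:

```python
# Category estimates in millions of dollars, as reported in the text.
fy2003_station_estimates = {
    "staffing": 277.6,          # 5,474 active duty station personnel
    "ppe": 7.5,                 # personal protection equipment
    "retention_bonuses": 5.9,   # selective reenlistment bonuses only
}
total_floor = round(sum(fy2003_station_estimates.values()), 1)  # 291.0
```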
While financial system limitations prevented us from determining with certainty the difference in the estimated levels of effort expended on stations between fiscal years 2002 and 2003, the information available suggests that the difference amounted to at least $20.5 million. The following discusses estimated differences in fiscal year 2002 and 2003 staffing, PPE, and personnel retention costs for multimission stations. As shown in table 4, the Coast Guard increased staffing at multimission stations by an estimated 466 personnel (9.3 percent) in fiscal year 2003. The estimated cost of this staffing increase was $14.4 million above the level of effort expended for staffing in fiscal year 2002. The Coast Guard estimates that it spent approximately $5 million more for PPE than it planned to spend during fiscal year 2002. We used fiscal year 2002 planned allocation data for this expenditure comparison because Coast Guard officials considered a survey of stations to collect fiscal year 2002 expenditure data—similar to the survey conducted for the fiscal year 2003 expenditure data—too burdensome for station personnel, given their current workload. Coast Guard officials told us that historically the amount of funds allocated for station PPE at the beginning of a fiscal year is not enough to fund PPE for all station personnel estimated to need it during the year. The Coast Guard's method for allocating PPE funds to stations uses the number of positions authorized to stations as a primary factor in determining the amount of funds allocated to individual stations. Because Coast Guard stations have more personnel assigned to them than authorized positions, in the past personnel not assigned to an authorized position were typically not included in PPE allocation calculations. To address this shortfall, the Coast Guard initially planned to allocate $3 million of the earmarked funds in fiscal year 2003.
During fiscal year 2003 the Coast Guard added another $2.6 million of the earmarked funds, bringing the total to $5.6 million. Reenlistment bonuses issued to boatswain's mates and machinery technicians assigned to stations increased by $1.1 million from fiscal year 2002 to fiscal year 2003. During fiscal year 2002, the Coast Guard issued $4.8 million in bonuses to the two classes of station personnel; the amount issued in fiscal year 2003 rose to $5.9 million. Expenditures for other, more indirect, forms of retention activities, such as station infrastructure improvements, are not tracked annually and therefore are not available for comparative purposes. The Coast Guard was not able to identify training costs for multimission station personnel for fiscal year 2002 or fiscal year 2003 despite extensive efforts. Officials told us the Coast Guard has separate databases in place to track training costs by national training center, but it does not have a database that identifies costs for station personnel. The Coast Guard conducted several queries from available databases, but the resulting data were not accurate. The lack of available training cost data precluded us from making a comparison of annual expenditure data in this area. However, some information indicates that levels of effort expended on training station personnel increased in fiscal year 2003. For example, the Coast Guard's boatswain's mate training school increased its training output by over a third in fiscal year 2003.

In addition to those named above, Cathleen A. Berrick, Barbara A. Guffy, Dorian R. Dunbar, Ben Atwater, Joel Aldape, Marisela Perez, Stan G. Stenersen, Michele C. Fejfar, Casey L. Keplinger, Denise M. Fantone, and Shirley A. Jones made key contributions to this report.
The Coast Guard conducts homeland security and search and rescue operations from nearly 200 shoreside stations along the nation's coasts and waterways. After several rescue mishaps that resulted in the deaths of civilians and station personnel, Congress recognized a need to improve performance at stations and appropriated additional funds to increase stations' readiness levels. For fiscal year 2003, the Coast Guard received designated funds of $15.7 million specifically to increase spending for stations' staffing, personal protection equipment (such as life vests and cold weather protection suits), personnel retention, and training needs. Congress directed GAO to determine if the Coast Guard's fiscal year 2003 outlays for stations increased by this amount over fiscal year 2002 expenditure levels. GAO also assessed the adequacy of the processes used by the Coast Guard to account for the expenditure of designated funds. According to our analyses of available data, and anecdotal and other information, it appears that the Coast Guard spent at least $15.7 million more to improve readiness at its multimission stations in fiscal year 2003 than it did the previous year. However, this statement cannot be made with certainty, because the Coast Guard's databases do not fully identify expenditures at the station level. GAO worked with the Coast Guard to develop expenditure estimates for the stations, using budget plans and available expenditure data, and this effort produced full or partial estimates for three of the four categories--staffing, personal protection equipment, and personnel retention efforts. For these three categories, fiscal year 2003 expenditure estimates were at least $20.5 million more than the previous year, or about $4.8 million more than the $15.7 million designated appropriation. Although estimates could not be developed for training expenditures, other available information indicates that training levels increased in fiscal year 2003. 
Taken together, these results suggest that the Coast Guard complied with Congress' direction to increase spending for stations by $15.7 million. Federal management guidelines and internal control standards call for greater accountability for designated--earmarked--appropriations than was provided by the processes the Coast Guard had in place to track these funds. The purpose of an earmark is to ensure agencies spend a certain amount of their appropriated funds for a specific purpose. Guidelines and standards indicate that agencies should account for the obligation and expenditure of earmarked appropriations--a step the Coast Guard thoroughly implemented only for personal protection equipment. Coast Guard officials developed a plan showing how they planned to spend the earmark, but such a plan, while useful as an indication of an agency's intentions, is not sufficient to show that the earmark was expended in accordance with congressional direction.
Our prior report recommended that DOD immediately reverse the $615 million of illegal and otherwise improper closed account adjustments identified in the report and determine the correct accounting for these adjustments after the reversal. Of the $615 million of illegal and otherwise improper adjustments, DOD has agreed that $592 million, or about 96 percent, of the adjustments should not have been made and has reversed the adjustments. However, because of DOD’s long-standing accounting accuracy problems, in many cases, reversing the transactions brought to light additional accounting problems that will require detailed reviews to determine the accounting actions necessary to correct the reversed transactions. As a result, neither DOD nor we can determine how much remains to be corrected as a result of reversing the adjustments. Table 1 provides additional details on DOD’s reversal of the $615 million fiscal year 2000 illegal and otherwise improper closed account adjustments. For the remaining $23 million that has not been reversed, DOD provided us with additional documentation indicating that $8 million of the adjustments were proper and do not need to be reversed. We still consider the remaining $15 million to be unnecessary or unsupported adjustments since DOD has not provided sufficient support to show otherwise. The $592 million of illegal and otherwise improper closed account adjustments discussed in our earlier report that have now been reversed involved 45 contracts. For 30 of the 45 contracts, the reversals identified additional accounting errors that must also be corrected. The 30 contracts include over $457 million (77 percent) of the $592 million in reversed transactions. Because of the complexity of the contracts and time it takes to conduct a complete reaudit, officials at the Defense Finance and Accounting Service’s (DFAS) Columbus Center estimate that it will take over 21,000 hours to correct the accounting for the 30 contracts. 
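The disposition of the $615 million of flagged adjustments described above reduces to simple bookkeeping (figures in millions, from the text):

```python
flagged = 615          # illegal and otherwise improper adjustments identified
reversed_amt = 592     # DOD agreed these were improper and reversed them
shown_proper = 8       # later documentation showed these adjustments were proper
still_unsupported = flagged - reversed_amt - shown_proper  # 15 remains unsupported
share_reversed = round(reversed_amt / flagged * 100)       # about 96 percent
```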
For example, for one contract we found that DFAS Columbus had made $210 million of closed account adjustments that should not have been made because the initial disbursement was recorded against the correct accounting classification reference number (ACRN) on the contract. The reason given for the adjustment was that DFAS Columbus could not pay a November 1999 invoice from a contractor for $685,000 because the cited ACRN on the invoice did not have sufficient funds. The inability to pay the invoice prompted DFAS to conduct an audit of the contract that resulted in over $590 million of adjustments to closed appropriation accounts. Our earlier audit found that of the $590 million of adjustments, $210 million were unnecessary and should not have been made because the actual disbursements—some of which were made over 10 years earlier—had been recorded correctly. The $210 million was part of the $615 million of illegal or otherwise improper transactions we identified in our earlier audit. In response to our recommendation that DOD reverse and correct the $210 million of unnecessary adjustments, DFAS Columbus reversed all $590 million of the closed account adjustments. According to DFAS officials, when reversing adjustments of this size, they generally have to reverse all the transactions involved with an adjustment, not just the canceled ones. After the adjustments were reversed, other errors were created that must now be researched and corrected. For example, for this one contract, the reversal of the contract's accounting records showed that 63 contract ACRNs had negative unliquidated obligations (NULOs) totaling $85.4 million. DFAS Columbus estimates that it will take about 2,300 hours to reaudit and correct the contract.
Our earlier review of another contract found that DFAS Columbus had recorded an adjustment that illegally moved $79 million of disbursement charges from fiscal years 1993 through 1995 research and development appropriations to charges against a canceled fiscal year 1992 research and development appropriation. According to the contract files, the adjustment was made to redistribute the disbursement charges in accordance with the “pay oldest funds first” payment terms specified in the contract. However, we found that the redistribution was illegal because it moved disbursement charges back to an appropriation account that had closed several months before the initial disbursement was made. For example, the initial $79 million disbursement occurred in February 1999, but the adjustment resulted in a charge against an appropriation that canceled 4 months earlier on September 30, 1998. DOD agreed that the adjustment was illegal and reversed the $79 million. The reversal identified other accounting errors on the contract that now must be corrected. According to DFAS contract accounting records, as of April 2002, the contract had NULOs totaling over $100 million that will need to be researched and corrected. DFAS Columbus officials estimate that a reaudit of this contract will take over 1,850 hours to complete. DOD officials told us they plan to complete all 30 reaudits to correct the fiscal year 2000 illegal and otherwise improper adjustments by September 30, 2002. In addition to DFAS’s reaudit of the contract, Air Force officials have also initiated an investigation into the circumstances surrounding the initial $79 million illegal adjustment to determine if personnel responsible for monitoring and administering the contract acted improperly, including the possibility that the adjustments may have resulted in Antideficiency Act violations. Air Force officials told us that they plan to complete the investigation and issue their report before the end of fiscal year 2002. 
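The legality test applied to the $79 million adjustment reduces to a date comparison under the account closing law: a disbursement may not be charged to an appropriation that canceled before the disbursement was made. A simplified sketch (not the actual CRS or DFAS implementation):

```python
from datetime import date

def backward_adjustment_legal(disbursement_date: date, cancel_date: date) -> bool:
    """An adjustment may not move a disbursement charge back to an
    appropriation that had already canceled when the disbursement occurred."""
    return disbursement_date <= cancel_date

# The example from the text: disbursed February 1999, charged to an
# appropriation that canceled September 30, 1998 -> illegal.
is_legal = backward_adjustment_legal(date(1999, 2, 1), date(1998, 9, 30))  # False
```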
We previously reported that neither the DFAS contract reconciliation system (CRS) nor other necessary controls were in place to ensure that adjustments to closed appropriation accounts were proper. We noted that DOD was in the process of upgrading CRS and correcting other control problems that significantly contributed to many of the illegal or otherwise improper adjustments to closed accounts. However, because DOD did not complete many of these actions until the end of fiscal year 2001, controls were not in place to ensure that the $1.9 billion of closed account adjustments made during fiscal year 2001 were legal and proper. Our evaluation of $291 million (15 percent) of DOD's reported $1.9 billion fiscal year 2001 closed appropriation account adjustments found that $172 million (59 percent) were either illegal or otherwise improper. These adjustments should not have been made because the initial disbursements (1) occurred after the appropriation being charged had already canceled, (2) occurred before the appropriation charged was enacted, or (3) were charged to the correct appropriation in the first place and no adjustment was necessary. Also included in the $172 million of illegal or otherwise improper closed account adjustments were adjustments that were not sufficiently documented to establish that they were proper. These adjustments were considered improper because agencies must be able to provide documentation to show that the adjustments are legal and that they changed an incorrect charge to a correct one. Table 2 provides additional details on the $172 million of adjustments that should not have been made. DOD officials agreed to reverse and correct the $172 million of illegal and otherwise improper closed account adjustments. The remaining $119 million of the $291 million of adjustments was for adequately documented corrections of errors that DOD had made over the years and, therefore, was not in violation of appropriations law or otherwise improper.
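The sample figures above fit together as follows (dollars in millions, from the text):

```python
reported_total = 1900  # FY2001 closed account adjustments DOD reported
sample = 291           # amount we evaluated
improper = 172         # illegal or otherwise improper
proper = sample - improper                            # adequately documented: 119
sample_share = round(sample / reported_total * 100)   # 15 percent of the total
improper_share = round(improper / sample * 100)       # 59 percent of the sample
```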
DOD officials told us they plan to review another $1.1 billion of fiscal year 2001 closed account adjustments in addition to the $291 million of closed account adjustments that we already reviewed. According to the officials, the additional $1.1 billion of adjustments were selected based on various factors including large dollar amounts or indications that the adjustments may be illegal. The officials noted that completion of the review of additional adjustments would result in detailed reviews of $1.4 billion (about 74 percent) of the total $1.9 billion of the closed account adjustments made during fiscal year 2001. According to the officials, they estimate that the additional reviews will involve several hundred contracts and about 1,000 closed account adjustments. They plan to have the additional reviews and reversals of any illegal or otherwise improper adjustments completed by December 31, 2002. However, the officials told us that because there are so many contracts that may have to be reaudited to correct the accounting, they do not plan to have the reaudits and corrections for fiscal year 2001 closed account adjustments completed until September 2004. In our July 2001 testimony and report, we pointed out that DOD did not have adequate systems, controls, and managerial attention to ensure that the $2.7 billion of fiscal year 2000 adjustments affecting closed appropriation accounts were legal and otherwise proper. Our review disclosed that CRS routinely processed billions of dollars of closed appropriation account adjustments without regard to the requirements of the 1990 account closing law. Further compounding this system deficiency was the lack of DOD oversight on how contract modifications were written and processed, which changed the payment terms of some contracts to improperly make available current and expired funds. As discussed earlier, our follow-on review of fiscal year 2001 closed account adjustments found little improvement over fiscal year 2000. 
As a result, DOD still could not ensure that closed account adjustments made during fiscal year 2001 were legal and otherwise proper. However, once the controls were fully implemented at the beginning of fiscal year 2002, closed account adjustments during the first 6 months of fiscal year 2002 dropped by about 80 percent, to $200 million, compared with the same 6 months of fiscal year 2001. In May 2001, DOD began implementing a CRS control to identify and prevent illegal backward adjustments. This control compares the actual disbursement date with the cancellation date of the appropriation involved in the adjustment to ensure that the adjustment does not result in moving disbursement charges back to an appropriation that had been canceled before the actual disbursement was made. In September 2001, DFAS upgraded CRS to identify and prevent illegal adjustments that would move disbursement charges forward to an appropriation that had not yet been enacted at the time the initial disbursement was made. In addition to upgrading CRS to identify and prevent illegal closed account adjustments, DOD also changed the CRS default reallocation of adjusting payments from oldest funds first to proration. Under the oldest funds first reallocation method, CRS would change disbursements charged to current and expired appropriation accounts to charges against older appropriation accounts even if the initial disbursement charges were correct. Because the DFAS contract payment system, commonly known as MOCAS (Mechanization of Contract Administration Services), prorated payments across various fund cites in the contract if no payment terms were specified in the contract, this change was intended to reduce errors by making both MOCAS and CRS payment allocation defaults the same.
Previously, problems with payment reallocations arose during contract reconciliation when payments that MOCAS had initially allocated across various ACRNs on a pro rata basis were redistributed by CRS across ACRNs on an oldest funds first basis. When this occurred, the CRS payment redistributions would differ substantially from how MOCAS had originally applied the payments. As our previous audit showed, these situations created significant problems by moving payment charges from correct ACRNs to incorrect ACRNs on the contract. For example, in one case, DOD initiated a contract reconciliation because there were insufficient funds remaining on an ACRN to pay a $685,000 contractor invoice, and this redistribution process resulted in moving $210 million of correct payment charges to incorrect ACRNs. According to DFAS Columbus officials, supervisory personnel must now approve any deviation from the CRS default program before CRS controls can be overridden to reallocate disbursements in a manner other than proration. DOD’s reported closed account adjustments during the first 6 months of fiscal year 2002 totaled about $200 million, or about 80 percent less than the over $1 billion of closed account adjustments DOD reportedly made during the same 6-month period of fiscal year 2001. According to DFAS officials, they believe that the significant decline in closed account adjustments is a direct result of increased DOD management and employee emphasis on resolving the problems identified in our earlier report that contributed to illegal and otherwise improper closed account adjustments. While DFAS’s controls had greatly reduced closed account adjustments during the first 6 months of fiscal year 2002, our analysis of closed account transactions found that $253,212 of illegal closed account adjustments had been processed from October 1, 2001, through March 31, 2002. 
These illegal adjustments moved disbursement charges back to appropriations that had canceled before the initial disbursements occurred. We found that these adjustments had been processed through a DFAS Columbus computer terminal that did not properly identify and prevent these types of illegal adjustments. DFAS officials could not explain why the computer terminal was not operating properly but took immediate action to upgrade it with the appropriate controls. The officials agreed to reverse and correct the $253,212 of illegal adjustments. Our analysis of subsequent closed account adjustments reported after the upgrade did not identify any additional illegal closed account adjustments. Our earlier testimony and report pointed out that DOD's illegal and otherwise improper closed account adjustments resulted from the lack of basic controls and managerial attention required to properly account for its disbursements consistent with the 1990 account closing law. We also noted that DOD had been aware since 1996 that one of its major systems allowed disbursements to be charged in a way that was inconsistent with the law, but had done nothing to fix the problem. This lack of fundamental controls and management oversight fostered an atmosphere in which responsible DOD contracting and accounting personnel took it for granted that it was an acceptable practice to adjust the accounting records to use unspent canceled funds on a contract in order to maximize the use of appropriated funds—a practice that we concluded, and DOD agreed, was illegal. We stated that DOD would need to effect changes to its systems, policies, procedures, and the overall weak control environment that fostered the $615 million of illegal and otherwise improper adjustments made during fiscal year 2000. To do this, we pointed out that DOD top management must clearly demonstrate its commitment to adhering to the account closing law and eliminate the abuses of appropriations law.
The 80 percent reduction of closed account adjustments during the first 6 months of fiscal year 2002 is an indication that, in the short term, DOD policies, procedures, and management commitment aimed at reducing the amount of illegal and otherwise improper closed account adjustments are having the desired effect. However, DOD's inability to accurately account for and report on disbursements overall is a long-standing, major problem that is pervasive and complex in nature. For example, for fiscal year 1999, DFAS data showed that almost $1 of every $3 in contract payment transactions was for adjustments to previously recorded payments—$51 billion of adjustments out of $157 billion in transactions. Some of the key causes of these adjustments—for both closed and unclosed accounts—relate to the complex accounting for contracts along with frequent changes in payment allocation terms. Over the years, we have issued numerous reports discussing DOD's financial management problems, and we have designated DOD financial management as a high-risk area since 1995. The following discussion of DOD's use of ACRNs and changes in contract payment allocations illustrates the convoluted process that contributes to the need to adjust accounting records to correct errors. Contracts can be assigned anywhere from 1 to over 1,000 ACRNs to accumulate appropriation, budget, and management information. Our review of fiscal years 2000 and 2001 closed account adjustments found that, in many cases, the contracts had large numbers of ACRNs. According to DFAS Columbus officials, numerous ACRNs and changes in payment allocations create payment problems by increasing the amount of data that must be entered and the opportunities for errors. These problems also lead to costly and extensive contract reconciliations.
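The two payment allocation defaults discussed above, proration (the MOCAS default) and oldest funds first (the former CRS default), can be sketched as follows. This is a stylized model; the actual systems handle far more detail:

```python
def prorate(payment: float, balances: dict[str, float]) -> dict[str, float]:
    """Spread a payment across ACRNs in proportion to each ACRN's remaining
    obligated balance (the MOCAS default when a contract specifies no
    payment terms)."""
    total = sum(balances.values())
    return {acrn: round(payment * bal / total, 2) for acrn, bal in balances.items()}

def oldest_first(payment: float, balances_by_age: list[tuple[str, float]]) -> dict[str, float]:
    """Exhaust the ACRN citing the oldest appropriation before touching the
    next one (the former CRS default that redistributed even correctly
    recorded charges)."""
    allocation, remaining = {}, payment
    for acrn, balance in balances_by_age:  # ordered oldest appropriation first
        take = min(balance, remaining)
        allocation[acrn] = take
        remaining -= take
        if remaining == 0:
            break
    return allocation

# The same $500 payment lands very differently under the two defaults:
balances = {"AA": 600.0, "AB": 400.0}
prorated = prorate(500.0, balances)                           # {"AA": 300.0, "AB": 200.0}
oldest = oldest_first(500.0, [("AA", 600.0), ("AB", 400.0)])  # {"AA": 500.0}
```

When CRS reallocated on an oldest-funds-first basis payments that MOCAS had originally prorated, the two systems disagreed about where every dollar belonged, which is how correct charges ended up moved to incorrect ACRNs.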
For example, our review of fiscal year 2001 closed account adjustments on a Navy contract valued at about $38 million found that the contract contained 548 ACRNs and had been modified over 150 times. According to DFAS Columbus' reconciliation staff, the contract had been reconciled twice; one of those reconciliations, performed in 1998, produced 15,322 accounting adjustments. In total, we found that about $3 million of fiscal year 2001 closed account adjustments for this contract were not adequately supported and, thus, should not have been made. When we discussed the contract's improper closed account adjustments with DFAS Columbus officials, they agreed that the adjustments were improper and said they would reverse and correct them. Because of the large number of ACRNs and contract modifications involved, the officials estimated that it will take over 9,000 hours to complete the contract audit. Across the 101 contracts included in our detailed review of fiscal years 2000 and 2001 closed account adjustments, we found a total of 7,440 ACRNs—an average of about 74 ACRNs per contract. As table 3 shows, 38 of the 101 contracts (38 percent) had 51 or more ACRNs. We did not determine for each of these contracts why and for what purpose the numerous ACRNs were being used. However, it is clear that simplified contract accounting will be a key element of reforming DOD's financial management. For example, as we pointed out in our July 2001 report, even a simple purchase could cause extensive and costly rework if assigned numerous ACRNs. We noted that a $1,209 Navy contract for children's toys, candy, and holiday decorations for a child care center was written with most line items (e.g., bubble gum, tootsie rolls, and balloons) assigned separate ACRNs. A separate requisition number was generated for each item ordered, and a separate ACRN was assigned for each requisition.
In total, the contract was assigned 46 ACRNs to account for contract obligations against a single appropriation. To record the payment against the one appropriation, DFAS Columbus had to manually allocate the payment to all 46 ACRNs. In addition, the contract was modified three times—twice to correct funding data and once to delete (deobligate) the funding on the contract for out-of-stock items. The modification deleting funding did not cite all the affected ACRNs. DFAS Columbus made errors in both entering and allocating payment data, compounding the errors made in the modification. Consequently, DFAS Columbus allocated payment for the toy jewelry line item to the fruit chew, jump rope, and jack set ACRNs—all of which should have been deleted by the modification. Contract delivery was completed in March 1995, but payment was delayed until October 1995. DFAS Columbus officials acknowledged that this payment consumed an excessive amount of time and effort compared to the time needed to process a payment charged to only one ACRN. A single ACRN would also have significantly reduced the amount of data entered into the system and the opportunities for errors. Further compounding the problem of numerous ACRNs are changes in how payments are to be allocated across the various ACRNs on a contract. For example, our review of an Air Force contract with 50 ACRNs found about $126 million of closed account adjustments, of which about $100 million (79 percent) were illegal or otherwise improper. Further, the contract had been modified 292 times for various reasons, including changes to how payments were to be allocated across the various ACRNs. For example, the following instructions were included in contract modifications to specify payment instructions for special ACRN XB—one of several special ACRNs on the contract.
Contract modification 94, dated October 22, 1993, stated that, "During FY90 pay FY90 funds first until exhausted and during FY91 pay FY91 funds first until exhausted. After these funds are exhausted, pay from the oldest ACRNs first." Two years later, contract modification 126 added payment terms for special ACRN XB as follows: "During FY90 pay FY90 funds first until exhausted and during FY91 pay FY91 funds first until exhausted. During FY94 pay FY88 funds first until exhausted. After these funds are exhausted, pay from oldest ACRNs first." In June 2000, modification 160 provided more payment instructions for special ACRN XB. The modification noted that special ACRN XB consisted of funds from both the United States and the North Atlantic Treaty Organization (NATO). The payment instructions specified that payments were to be made using the oldest U.S. funds before using NATO funds. According to a July 2000 Air Force memorandum from the Air Force Materiel Command's Deputy Director of Contracting, the special ACRNs were not to be added to any existing contracts or used in new contracts. The Deputy Director noted that the Air Force still had over 1,300 special ACRNs in the system related to the older contracts, and that there was evidence that special ACRNs were still being created or used for new contract line items or subcontract line items. When we discussed this memorandum with responsible Air Force contracting officials, we were told that the Air Force no longer uses special ACRNs and that once all the contracts that currently contain special ACRNs are closed out, the errors and other accounting problems caused by this type of contract funding should no longer occur. DFAS Columbus officials acknowledged that the combination of numerous ACRNs and modifications that change contract payment allocation terms makes it difficult to maintain accurate payment records.
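The quoted modifications all layer exceptions onto the same base procedure: draw a payment down against fund balances in a stated priority order, falling back to "pay from the oldest ACRNs first." A minimal sketch of that fallback rule follows; the function and the balances are hypothetical, not taken from the contract.

```python
def allocate_oldest_first(amount, acrn_balances):
    """Charge a payment against ACRN fund balances oldest fiscal year
    first, moving to the next ACRN only when the prior one is exhausted.
    acrn_balances: list of (fiscal_year, available_balance) pairs."""
    charges = []
    for fiscal_year, balance in sorted(acrn_balances):
        if amount <= 0:
            break
        charge = min(amount, balance)
        if charge > 0:
            charges.append((fiscal_year, charge))
        amount -= charge
    if amount > 0:
        raise ValueError("payment exceeds available ACRN balances")
    return charges

# Hypothetical balances: FY88 holds $400, FY90 holds $600, FY91 holds $500.
print(allocate_oldest_first(700, [(1990, 600), (1988, 400), (1991, 500)]))
# prints [(1988, 400), (1990, 300)]
```

Each modification effectively changed which ordering applied in which fiscal year, so the paying office had to know not only the balances but also which version of the rule governed a given payment, which is one reason accurate payment records were so hard to maintain.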
They agreed that the $100 million of illegal and otherwise improper closed account adjustments for the Air Force contract discussed above should not have been made. They told us they plan to reverse and correct the illegal and otherwise improper closed account adjustments on the contract as part of their overall effort to correct fiscal year 2001 closed account adjustments. Because of the numerous ACRNs and contract modifications on the contract, DOD estimates that it will take over 1,500 hours to completely correct the accounting for this contract. In discussing the issues of payment errors caused by numerous ACRNs and changing contract payment allocation terms, military service contracting officials agreed that in the past their contracts contained numerous ACRNs and modifications to change payment allocations. They told us that during the last 2 or 3 years, they have started to write contracts to include more specific payment allocation terms, which should make it much easier for DFAS Columbus to pay contractors without making errors that require subsequent adjustments. Further, on October 1, 2001, the Under Secretary of Defense for Acquisition, Technology, and Logistics issued a memorandum in response to our recommendation that he issue a policy to prohibit the writing of contract modifications to change the payment terms of a contract if the change would result in illegal or otherwise improper adjustments. The memorandum instructed the military service secretaries and defense agency directors to make certain that all contracting activities have procedures in place that ensure compliance with the department’s financial management policies, which currently preclude the improper adjustments we identified in our report. 
It also required that all contract modifications that include adjustments to closed appropriation accounts be supported with contract file documentation sufficient to establish that the adjustments are legal and proper and that they received supervisory review. It further required that contract modifications involving closed accounts be approved in writing by the appropriate-level comptroller or financial resource manager. DFAS Columbus officials acknowledged that the change in contract writing policies and procedures should result in fewer payment errors and adjustments. While we agree that the changes in contract writing procedures and the additional policy requirements should help reduce errors that require subsequent correction, we found that there are still thousands of older contracts in MOCAS with one or more closed accounts, and these contracts will need to be monitored closely to ensure that illegal or otherwise improper adjustments do not occur. For example, at our request, DFAS Columbus analyzed the MOCAS database to identify contracts for which at least one of the appropriations was canceled. The results of the MOCAS inquiry showed that, as of April 2002, there were 15,421 active contracts valued at $519 billion for which at least one appropriation had been canceled. DFAS officials told us that these older contracts may contain errors that will not be discovered until a contract is completed and a final contract reconciliation is performed. As we have indicated, since we began our closed account work, and especially since our testimony and report on this issue in July 2001, DOD has taken actions to eliminate illegal or otherwise improper adjustments involving closed account records. As noted earlier in this report, these actions are beginning to produce positive short-term results while efforts to address the long-term problems are still ongoing.
At the same time, given the severity of the existing problems and the long-term nature of DOD's transformation efforts, you asked us to identify options the Congress could consider, including prohibiting some or all adjustments to closed accounts. We see two basic options: take no legislative action at this time, or prohibit any adjustments immediately or shortly after an appropriation account is closed. These options are discussed in the context of our closed account work at DOD. However, options that change the account closing law would also apply to all federal agencies unless the Congress specifically limited them to DOD. The first option is to take no legislative action at this time and to continue to allow DOD to adjust closed account records when appropriate to correct accounting errors. This would mean that DOD could make adjustments to closed account records when there is sufficient documentation to show that (1) the disbursement was made when the appropriation account to be charged was available to cover the disbursement, (2) the agency either did not record the disbursement when it was made or charged it to the wrong appropriation account at the time, and (3) the proposed adjustment will result in the disbursement being charged to the proper appropriation account. Given that DOD's implementation of controls to identify and prevent illegal and otherwise improper adjustments seems to be having a positive effect based on 6 months of analysis, the Congress could postpone any decision to change the law in order to allow DOD additional time to monitor how its implementation of the controls, policies, and procedures needed to eliminate illegal and otherwise improper closed account adjustments is working. However, given DOD's weak overall control environment, unless DOD's internal controls and management commitment to this problem are sustained, new ways may be developed to circumvent the controls recently put into place.
Thus, there is a risk that, over time, illegal or otherwise improper closed account adjustments could recur. If the Congress finds in the future that DOD top management does not sustain its commitment to address its overall disbursement problems, the Congress could require a combination of oversight and reporting by DOD as to the validity of any closed account adjustments. The second option is to amend the account closing law to prohibit any adjustments to an appropriation account after it is closed. Under this option, the accounting records of an appropriation account would be final when the account was closed. This option would eliminate adjustments to closed accounts as well as the substantial time and expense associated with making them. It would also provide an additional incentive for DOD to keep better records during the time the account is open since there would be no opportunity to correct the records once the account was closed. At the same time, this change would mean that known errors in accounting records could not be corrected once the account was closed, and the accounting records would therefore be permanently inaccurate. These inaccurate records could also affect DOD's ability to promptly pay for goods and services. For example, assume that because of accounting errors associated with a closed appropriation account, the unspent balance of a currently available account was reduced to less than the amount needed to make a subsequent payment. If DOD could not correct the error, it would not be able to make the current payment. As another example, assume that because of accounting errors, the balance of a closed account was less than the amount needed to pay an obligation that had been charged to the closed account when it was open. Current law allows the payment to be made from current funds if the closed account balance exceeds the amount of the payment.
Prohibiting all adjustments to closed accounts would make permanent the erroneously reduced balance, and therefore the payment could not be made with current funds. In each of these examples, DOD would be unable to pay for the goods or services without obtaining an additional appropriation or other form of legislative relief, which could cause a hardship for the contractor. The Congress could also provide a variation of this option by allowing DOD a limited period, such as 6 months or 1 year, after an account is closed to adjust the accounting records for known errors. This variation would provide for finality of records, but only after DOD has some additional opportunity to correct errors it detects immediately after the account is closed. Such legislation, while not totally eliminating closed account adjustments, would provide some of the benefits discussed above while increasing the likelihood that DOD records relating to the closed account are accurate. However, this variation also presents some of the same payment and fund availability limitations discussed above. DOD has made significant improvements to its controls to identify and prevent illegal and otherwise improper closed account adjustments, as evidenced by the 80 percent reduction in closed account adjustments during the first 6 months of fiscal year 2002. These short-term efforts serve as an example of what can be achieved when DOD takes prompt action to correct known problems through a strong top management commitment. At the same time, closed account adjustments are only a small fraction of the overall disbursement adjustments DOD makes each year as a result of its long-standing financial accounting and management problems. There are no quick fixes to the underlying problems, which must be dealt with over the long term. Nevertheless, there are some additional short-term actions that can be taken by focusing on simplifying accounting and contract payment allocation terms.
Modernizing financial management systems, and improving those systems' adherence to basic accounting requirements, will ultimately be key to DOD effectively resolving its financial management and contract payment problems. This will require a sustained commitment by DOD's top management team over a number of years. We recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to direct the Director of the Defense Finance and Accounting Service to (1) help ensure that DFAS Columbus completes its review and correction of the remaining fiscal year 2000 illegal and otherwise improper adjustments, (2) reverse the closed account adjustments made during fiscal year 2001 that are identified in this report as illegal or otherwise improper, (3) determine the entries necessary to correct the accounting for the reversed fiscal year 2001 transactions, (4) help ensure that DFAS Columbus completes the review and correction of the additional $1.1 billion of fiscal year 2001 adjustments it has scheduled for detailed review, and (5) continue DFAS's top-level management attention to and monitoring of future adjustments to closed appropriation accounts. We also recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to continue to monitor these adjustments so that any potential Antideficiency Act violations that may occur are promptly investigated and reported as required by the Antideficiency Act, 31 U.S.C. 1351, and implementing guidance. DOD agreed with our recommendations and outlined its ongoing and planned actions to identify, reverse, and correct illegal and otherwise improper fiscal year 2000 and 2001 closed appropriation account adjustments. DOD pointed out that this process may create adverse accounting conditions for a large number of contracts that will require either complete or partial reaudit to determine the correct accounting necessary to resolve the illegal or otherwise improper closed account adjustments we identified.
For example, as we noted in our report, for one contract where DOD made a total of $590 million of closed account adjustments, we found that $210 million of the $590 million of adjustments were unnecessary and should not have been made because the actual disbursements had been recorded correctly. In order to reverse and correct the $210 million of unnecessary adjustments, DOD had to reverse the total $590 million in adjustments, which created other accounting errors that must now be researched and corrected. As our report noted, DOD estimates that it will take about 2,300 hours to resolve all the errors necessary to correct the $210 million of unnecessary adjustments we identified for this contract. DOD said it planned to have all its reaudits and corrective actions completed by September 30, 2004. DOD’s comments are reprinted in appendix II. We are sending copies of the report to interested congressional committees. We are also sending copies of this report to the Secretary of Defense; the Principal Deputy Under Secretary of Defense for Acquisition, Technology, and Logistics; the Secretaries of the Army, Navy, and Air Force; the Director of the Defense Finance and Accounting Service; the Secretary of the Treasury; and the Director of the Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-9505 or [email protected], or Larry W. Logsdon, Assistant Director, at (703) 695-7510 or [email protected]. Major contributors to this report are acknowledged in appendix III. To meet our first objective of monitoring DOD’s efforts to correct the problems we identified in our prior audit, we reviewed DFAS officials' corrective actions taken on 162 adjustments that we previously reported as $615 million of illegal or otherwise improper adjustments. 
As part of this review, we gathered vouchers that documented the reversal of the adjustments and analyzed financial information from DFAS Columbus’ records and reports, including contracts, contract modifications, shipping notices, invoices, payment vouchers, and schedules of adjustments. We identified and met with the DFAS Columbus officials knowledgeable about each reversed adjustment. We also identified the responsible DFAS or military service locations that maintained the official account records and obtained documentation to show how adjustments were reversed or corrected in the accounting records. To meet our second objective of determining if DOD experienced problems with adjustments to closed appropriation accounts in 2001 similar to the problems with the 2000 adjustments, we monitored DFAS Columbus’ review of $291 million of the $1.9 billion of closed account adjustments DOD reportedly made during fiscal year 2001. DFAS Columbus had already selected the $291 million of closed account adjustments for review at the time we began our audit. We took this approach rather than selecting a large number of adjustments for our own independent review because we knew that DOD had not fully implemented the controls necessary to identify and prevent fiscal year 2001 illegal and otherwise improper closed account adjustments. We reviewed the results of DFAS Columbus’ efforts and worked with staff members responsible for conducting the reviews to resolve any disagreements between DFAS and GAO on whether the documentation showed that the adjustments were legal and proper. As part of our analysis of DFAS Columbus’ reviews, we analyzed documentation supporting DFAS's detailed summaries for each adjustment to determine the reason for the adjustment and whether it was valid. For each adjustment, we reviewed the contract files for supporting hard copy documentation including modifications, invoices, payment vouchers, and MOCAS print screens. 
We also identified and met with the DFAS Columbus staff members who completed the reviews to discuss the reasons for the adjustments and to resolve any differences between DFAS's conclusions and ours on whether the adjustments were legal and proper. To determine whether DOD had implemented in its contract reconciliation system (CRS) the effective system controls that we identified in our prior report as needed to prevent illegal adjustments, we tested CRS for two types of potentially illegal adjustments during a 6-month period. To do this, we independently analyzed the closed account adjustments included in the CRS database for the first 6 months of fiscal year 2002 to ascertain whether CRS had processed any closed account adjustments that resulted in moving a disbursement charge (1) back to an appropriation that was canceled before the actual disbursement was made or (2) forward to an appropriation that had not yet been enacted at the time the actual disbursement was made. We met with responsible DFAS Columbus officials to discuss and resolve any transactions that our analysis identified as violations of either of these two criteria. In instances where there were violations, we met with DFAS Columbus personnel to determine why CRS controls had not prevented the transactions from processing and worked with DFAS's staff to correct the system deficiencies. We did not validate the accuracy of the CRS database information pertaining to the disbursement dates or appropriations. To meet our third objective of determining why DOD makes so many adjustments to closed accounts, we reviewed the reconciliation summaries for the fiscal years 2000 and 2001 closed account adjustments that we reviewed in detail. We also met with the DFAS Columbus staff members who performed the reconciliations to obtain their opinions on the primary reasons why errors occur.
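The two-rule test described above can be expressed as a simple filter over the adjustment records. This is a sketch of the test logic only; the record layout and field names are our own assumptions, not the CRS schema.

```python
from datetime import date

def screen_adjustments(adjustments):
    """Flag closed account adjustments that move a disbursement charge
    (1) back to an appropriation canceled before the disbursement was
    made, or (2) forward to an appropriation not yet enacted at the time
    of the disbursement. Each record is a dict with 'disbursed',
    'target_canceled', and 'target_enacted' dates (hypothetical fields)."""
    flagged = []
    for adj in adjustments:
        if adj["target_canceled"] < adj["disbursed"]:
            flagged.append((adj, "charged back to a pre-canceled appropriation"))
        elif adj["target_enacted"] > adj["disbursed"]:
            flagged.append((adj, "charged forward to a not-yet-enacted appropriation"))
    return flagged

sample = [
    {"disbursed": date(1998, 5, 1), "target_canceled": date(1996, 9, 30),
     "target_enacted": date(1990, 10, 1)},   # violates rule 1
    {"disbursed": date(1998, 5, 1), "target_canceled": date(2003, 9, 30),
     "target_enacted": date(1997, 10, 1)},   # passes both rules
]
print(len(screen_adjustments(sample)))  # prints 1
```

Either condition means the target appropriation could not lawfully have funded the disbursement, so any flagged transaction warrants the kind of follow-up with DFAS personnel described above.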
However, we did not determine the specific reasons why certain contracts have numerous ACRNs or how the detailed cost information was to be used. Finally, to determine options available to DOD and actions for the Congress to consider that would eliminate or reduce adjustments to closed appropriation accounts, we developed and presented options based on our reviews of fiscal year 2000 and 2001 closed account adjustments and discussions with DOD accounting and procurement officials. We performed our work primarily at the DFAS Center in Columbus, Ohio. We also obtained documentation from the following DFAS locations that were responsible for maintaining official accounting records: Cleveland and Dayton, Ohio; Denver, Colorado; San Bernardino, California; and St. Louis, Missouri. Our review was conducted from June 2001 through April 2002 in accordance with U.S. generally accepted government auditing standards, except that we did not validate the accuracy of CRS information pertaining to the number of closed account adjustments and related dollar values. Staff members who made key contributions to this report were Bertram J. Berlin, Francine M. Delvecchio, Stephen P. Donahue, Dennis B. Fauber, Jeffrey A. Jacobson, Keith E. McDaniel, and Harold P. Santarelli. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet.
GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to daily E-mail alert for newly released products” under the GAO Reports heading.
Congress changed the law governing the use of appropriation accounts in 1990 because it found that the Department of Defense (DOD) may have spent hundreds of millions of dollars for purposes that Congress had not approved. The 1990 law provided that, 5 years after the expiration of the period of availability of a fixed-term appropriation, the appropriation account be closed and all remaining balances canceled. After closing, the appropriation account could no longer be used for obligations or expenditures for any purpose. DOD has started the process of correcting the illegal or improper closed account adjustments made during fiscal year 2000. However, this will require substantial effort and, according to DOD's estimates, will not be complete before the end of fiscal year 2002. DOD had upgraded its system control features by the end of fiscal year 2001 to preclude many of the wholesale adjustments that GAO had previously identified. Because its system enhancements were done in stages, including some near the end of fiscal year 2001, DOD continued to make large amounts of illegal and otherwise improper closed account adjustments during the year. Even with intensive staff efforts to address these issues, DOD did not expect to complete the corrected accounting for transactions found to be in error until September 2004. A lack of fundamental controls and management oversight over the closed accounts was the primary reason DOD was making so many closed account adjustments. DOD's actions to resolve its problems with closed account adjustments are beginning to produce positive short-term results. However, if DOD fails to sustain these positive results, Congress could require DOD to validate and report to the Congress all closed account adjustments.
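The closing timeline the 1990 law established can be worked through with simple fiscal-year arithmetic. The function below is an illustrative sketch of that timeline, not statutory text.

```python
def closing_fiscal_year(expiration_fiscal_year):
    """Under the 1990 account closing law, a fixed-term appropriation
    account is closed, and all remaining balances canceled, 5 years
    after its period of availability expires."""
    return expiration_fiscal_year + 5

# An annual appropriation available only in FY 1995 expires at the end
# of FY 1995 and closes at the end of FY 2000.
print(closing_fiscal_year(1995))  # prints 2000
```

After that closing date, no obligation or expenditure may be charged to the account for any purpose, which is what makes post-closing adjustments to its records legally significant.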
“Wetland” is a generic term used to describe a variety of wet habitats. In general, wetlands are characterized by the frequent or prolonged presence of water at or near the soil surface, soils that form under flooded or saturated conditions (hydric soils), and plants that are adapted to life in these types of soils (hydrophytes). The United States contains many different types of wetlands, from swamps in Florida to peatlands in northern Minnesota to tidal salt marshes in Louisiana. Figures 1 and 2, respectively, show two types of wetlands found in the United States—coastal salt marsh wetlands commonly found along the East and Gulf coasts and prairie pothole wetlands commonly found in the plains of the North Central United States. Wetlands were once regarded as unimportant areas to be filled or drained for agricultural or development activities. However, wetlands are now recognized for a variety of important functions that they perform, including providing vital habitat for wildlife and waterfowl, including about half of the threatened and endangered species; providing spawning grounds for commercially and recreationally valuable fish; providing flood control by slowing down and absorbing excess water during storms; maintaining water quality by filtering out pollutants before they enter streams, lakes, and oceans; and protecting coastal and upland areas from erosion. Over 25 federal statutes have been enacted relating to wetlands. These laws have resulted in the (1) regulation of activities undertaken in areas designated as wetlands; (2) acquisition of wetlands through purchase or protective easements that prevent certain activities, such as draining and filling; (3) restoration of damaged wetlands or the creation of new wetlands; and (4) disincentives to altering wetlands or incentives to protect them in their natural states. (App. I contains a brief discussion of the principal wetlands-related statutes.)
Despite the passage of numerous laws and the issuance of two presidential executive orders protecting wetlands, no specific or consistent goal for the nation’s wetlands-related efforts existed until 1989. On February 9, 1989, President Bush, in response to recommendations made by the National Wetlands Policy Forum, established the national goal of no net loss of wetlands. The current administration has also supported wetlands protection. In its wetlands plan, issued in August 1993, the administration included an interim goal of no overall net loss of the nation’s remaining wetlands and a long-term goal of increasing the quality and quantity of the nation’s wetlands. In its Clean Water Action Plan, issued on February 19, 1998, the administration included a strategy to achieve a net gain of up to 100,000 acres of wetlands each year, beginning in the year 2005. At least 36 federal agencies, to varying degrees, conducted wetlands-related activities during fiscal years 1990 through 1997. The activities conducted by these agencies included acquiring, regulating, restoring, enhancing, mapping, inventorying, delineating, and conducting research relating to wetlands. Six agencies—the Army Corps of Engineers, the Department of Agriculture’s Farm Service Agency and the Natural Resources Conservation Service (NRCS), the Department of Commerce’s National Oceanic and Atmospheric Administration, the Department of the Interior’s Fish and Wildlife Service (FWS), and the Environmental Protection Agency—are the primary agencies involved in and responsible for implementing wetlands-related programs. The involvement of the 30 other agencies was generally limited to (1) general monitoring or stewardship roles or (2) the avoidance and mitigation of potential impacts to wetlands from their own projects and activities.
As figures 3 and 4 show, the six primary agencies accounted for most of the funding and full-time-equivalent staff-years associated each year with the wetlands-related activities of federal agencies. Appendix II provides a brief description of some of the principal wetlands-related activities of each agency, and appendix III contains detailed information on the funding and full-time-equivalent staff-years associated with the agencies’ activities during fiscal years 1990 through 1997. The consistency and reliability of wetlands acreage data reported by federal agencies are questionable. Although both the Department of the Interior’s FWS and the Department of Agriculture’s NRCS maintain inventories that produce estimates of the nation’s remaining wetlands acreage and rates of wetlands gains and losses, the two inventories’ estimates are not completely consistent. In addition, the current reporting practices of the agencies do not allow the wetlands-related accomplishments of the agencies to be determined. These reporting practices include a lack of consistency in the use of terms, the inclusion of nonwetlands acreage in wetlands project totals, and the double counting of accomplishments. Despite the efforts of several interagency groups to address problems with wetlands data, the problems persist. In its Clean Water Action Plan, the administration recently announced new plans to improve wetlands data. As called for by the administration, the Interagency Wetlands Working Group has developed an action plan to guide its efforts to produce a single wetlands status and trends report. However, as of June 10, 1998, strategies had not yet been developed to address the other actions planned by the administration to improve wetlands data. No single set of numbers representing the nation’s remaining wetlands acreage and annual gains and losses is available.
Estimates made by two federal resource inventories, the National Wetlands Inventory and the National Resources Inventory, maintained by the FWS and NRCS, respectively, are not completely consistent. The National Wetlands Inventory, established to generate information on the characteristics, extent, and status of the nation’s wetlands and deepwater habitat, is to provide an update of the status and trends of the nation’s wetlands at 10-year intervals. The broader National Resources Inventory is an inventory of land cover and use, soil erosion, prime farmland, wetlands, and other natural resource characteristics on nonfederal lands in the United States. It provides a record of the nation’s conservation accomplishments and future program needs. The National Resources Inventory has been conducted at 5-year intervals to determine the conditions and trends in the use of soil, water, and related resources nationwide and statewide. However, it is now making the transition to an annualized inventory process. Each inventory uses the wetlands data it collects to produce estimates of the nation’s remaining wetlands acreage and the rate of wetlands gains and losses. The estimates made by each inventory are based on sampling. However, the two inventories use different sampling techniques, and their estimates cover different time periods. The inventories also have used different land-cover/land-use classification categories for the causes of wetlands losses. Although both reported that the rate of wetlands loss has declined, as shown in table 1, the estimates produced by FWS’ National Wetlands Inventory and NRCS’ National Resources Inventory are not completely consistent. As the table shows, the two inventories differ, sometimes substantially, in their estimates. Although the two inventories’ estimates of the nation’s total remaining wetlands acreage varied by only about 10 percent, their estimates in other categories varied more significantly.
For example, FWS reported that agricultural activities were responsible for the loss of over 1.4 million acres of wetlands—more than 4 times the loss attributed to agriculture by NRCS. NRCS, on the other hand, estimated that development was responsible for the loss of 886,000 acres—about 11 times the number of acres that FWS reported. Questions have been raised about the validity of the wetlands acreage estimates made by both inventories. Officials from each of the agencies responsible for the inventories have questioned the estimates made by the other, and officials from the Environmental Protection Agency (EPA) have expressed concern about the estimates of both inventories. The issues raised by officials of the two inventories and EPA include the adequacy of quality control of the data and of quality assurance procedures, the dates of the aerial photography used, and the methods used to develop the estimates.

The agencies use such terms as protection, restoration, rehabilitation, improvement, enhancement, and creation in describing and reporting their wetlands-related activities and the resulting accomplishments. However, federal agencies are not consistent in the use of these terms. Even when the same terms are used, the agencies do not define them in the same way. For example, depending upon the agency, the term “restoration” has different meanings and different results. At Interior’s FWS, restoration is considered the reestablishment of a degraded wetland to its former state and therefore generally would not result in a net gain of wetlands acres. At Agriculture’s NRCS, restoration is considered to be the reestablishment of a wetland where it previously existed and would result in a gain of wetlands acres. The agencies also often include nonwetlands acreage when reporting their accomplishments. Nonwetlands acreage, such as adjacent uplands, is often included in wetlands project totals but is not identified or listed separately.
For example, the NRCS’ Wetlands Reserve Program, which acquires wetlands easements from landowners and shares in the cost of restoration, might report that a wetlands restoration project restored a total of 25 acres. However, the 25 acres reported might include not only 10 acres of former wetlands that were restored but also 10 acres of existing degraded wetlands whose functions were enhanced and 5 acres of adjacent uplands. The FWS’ North American Wetlands Conservation Program also includes nonwetlands acreage in its project totals. Program officials estimate that about 75 percent of the acreage reported by habitat restoration projects is uplands. Adding to these reporting problems is the double counting of accomplishments. Federal and state agencies and private conservation organizations are often involved in joint projects. When each participant reports the accomplishments resulting from these joint projects, the total accomplishments are overstated. For example, the total number of wetlands acres that FWS’ North American Waterfowl Management Plan reports as restored includes the results of activities that involve FWS and other federal agencies, such as the Forest Service and the NRCS, and other state and private conservation organizations. These agencies would also report the acreage restored as the result of these joint projects.

Since 1989, at least five interagency groups, established to better coordinate federal wetlands programs, have attempted to improve wetlands data. These groups and the purposes for which they were established are described below. Inter-Agency Task Force on Wetlands. On May 23, 1989, the White House established an Inter-Agency Task Force on Wetlands under the Domestic Policy Council’s Working Group on Environment, Energy, and Natural Resources to examine ways to achieve no net loss of wetlands as a national goal.
The task force’s objectives included (1) providing clear direction to federal agencies for strengthening, implementing, and enforcing wetlands protection, maintenance, and restoration; (2) coordinating agencies’ involvement in achieving the no net loss goal; and (3) assessing implementation of the no net loss goal by federal, state, and local governments to determine what further steps might be necessary. Wetland Inventory Subgroup of the Domestic Policy Council’s Interagency Wetlands Task Force. The Domestic Policy Council’s Interagency Wetlands Task Force established a Wetland Inventory Subgroup in October 1990. Three agencies—Interior’s FWS, Agriculture’s then-Soil Conservation Service, and Commerce’s National Oceanic and Atmospheric Administration—agreed to cochair the Wetland Inventory Subgroup. The subgroup was charged with evaluating existing inventory programs and proposing potential improvements. Interagency Federal Lands Wetlands Restoration and Creation Committee. As a result of President Bush’s comprehensive plan for improving the protection of the nation’s wetlands, the Interagency Federal Lands Wetlands Restoration and Creation Committee was established in August 1991. This committee was charged with coordinating federal restoration and creation projects and with establishing criteria and recommendations for redirecting agencies’ future spending in restoring and creating wetlands. Wetlands Ad Hoc Integration Working Group of the Federal Geographic Data Committee’s Wetlands Subcommittee. The Wetlands Ad Hoc Integration Working Group was established in June 1992 at the request of the White House’s Domestic Policy Council to attempt to integrate and reconcile the National Wetlands Inventory’s Status and Trends and the National Resources Inventory’s reports on the amount of wetlands lost and remaining. Interagency Working Group on Federal Wetlands Policy (White House Wetlands Working Group).
In June 1993, this working group was formed to address concerns about federal wetlands policy. The working group established five principles for federal wetlands policy that served as the framework for the development of the administration’s package of wetlands reform initiatives. The principles included (1) supporting the interim goal of no overall net loss of the nation’s remaining wetlands and the long-term goal of increasing the quality and quantity of the nation’s wetlands, (2) reducing the federal government’s reliance on the regulatory program as the primary means to protect wetlands resources and accomplishing long-term wetlands gains by emphasizing nonregulatory programs, and (3) basing federal wetlands policy on the best scientific information available.

With the exception of the last one, these working groups have been disbanded. However, because either the members of these interagency groups could not agree on the actions needed or the adoption of their recommendations was left to the individual agencies, the problems with the consistency and reliability of wetlands acreage data persist. Recognizing that these problems still exist, the administration recently announced new plans to improve wetlands data. In October 1997, the Wetlands Subcommittee of the Federal Geographic Data Committee decided to develop consistent definitions for wetlands gains, losses, and modifications for use by all federal agencies. In addition, the Wetlands Subcommittee proposed the development of a reporting system that would standardize reporting procedures and provide for a mechanism to collect and compile data on the agencies’ accomplishments. Additional efforts to improve wetlands data were included in the administration’s Clean Water Action Plan, issued on February 19, 1998.
These actions include the following: (1) complete a plan to use existing inventory and data collection systems to support a single status and trends report by the year 2000 and convene a peer review panel to evaluate, by June 1998, a plan to track annual changes in the nation’s wetlands of less than 100,000 acres; (2) issue technical guidance, by October 1999, on the restoration, enhancement, and creation of wetlands functions; and (3) establish, by October 1999, an interagency tracking system that will more accurately account for wetlands losses, restoration, creation, and enhancement and will also establish accurate baseline data for federal programs that contribute to net wetlands gains.

The administration’s efforts to implement the actions contained in the Clean Water Action Plan are under way. In May 1998, the Interagency Wetlands Working Group issued an action plan developed to guide its efforts to produce a single wetlands status and trends report in the year 2000. However, much remains to be done before such a report can be produced. Many of the actions outlined in the plan will not occur for more than a year and are dependent upon the successful completion of other steps contained in the plan. For example, the plan calls for a three-stage quality assurance process to be developed to ensure that the National Resources Inventory’s 1997 data meet FWS’ standards and needs for the year-2000 status and trends report to the Congress. Stage 3, which involves the review and analysis of preliminary estimates, is not scheduled to be completed until April 1999 and is dependent upon the successful completion of stages 1 and 2. In addition, a long-term commitment and considerable time and effort from the agencies involved will be required to successfully implement the plan. (A copy of this plan can be found in app. VII.)
Details of the administration’s plans to accomplish the other actions, such as establishing an interagency tracking system and issuing technical guidance on the restoration, enhancement, and creation of wetlands functions, have not yet been drafted. However, according to the Chairman of the Interagency Wetlands Working Group, the actions planned by the Wetlands Subcommittee to develop consistent definitions and reporting standards for use in reporting the wetlands-related accomplishments of federal agencies will be folded into the administration’s efforts.

Over $500 million each year is associated with the efforts of federal agencies to protect and restore wetlands. However, the consistency and reliability of the estimates made of the nation’s remaining wetlands acreage and the data reported by the agencies on their accomplishments are questionable. Despite the efforts of five interagency groups established since 1989 to resolve these problems, the problems persist. As a result, neither the progress made toward achieving the goal of no net loss of the nation’s remaining wetlands and the administration’s new goal of gaining 100,000 acres of wetlands each year beginning in the year 2005 nor the contributions made by the agencies toward these goals can be measured. In its recently issued Clean Water Action Plan, the administration announced new efforts to improve the wetlands acreage data reported by federal agencies. Although a plan has been developed to accomplish one of the actions—producing a single wetlands status and trends report—much remains to be done before such a report can be issued. Many of the steps outlined in the plan are not scheduled to occur for more than a year and are dependent on the successful completion of other steps contained in the plan. Furthermore, a long-term commitment and considerable time and effort from the agencies are crucial to the successful implementation of this effort.
In addition, details of how the other actions announced by the administration will be achieved have not been developed. Unless strategies are developed and implemented for all of the wetlands-related actions contained in the administration’s Clean Water Action Plan, the latest attempts to improve wetlands data will likely be no more successful than previous ones. Without consistent and reliable wetlands acreage data, decisionmakers (the Congress and the administration) will be hampered in their ability to make sound decisions about necessary adjustments to federal wetlands policies and programs that would allow the nation’s wetlands goals to be achieved.

To ensure that the consistency and reliability of wetlands acreage data are improved, we recommend that the Secretary of Agriculture and the Secretary of the Interior, in consultation with the Chairman of the White House’s Interagency Wetlands Working Group, develop and implement a strategy for ensuring that all actions contained in the Clean Water Action Plan relating to wetlands data are adopted governmentwide. Such actions should include, in addition to the ongoing effort to develop a single set of accurate, reliable figures on the status and trends of the nation’s wetlands, the development of consistent, understandable definitions and reporting standards that are used by all federal agencies in reporting their wetlands-related activities and the changes to wetlands that result from such activities.

We provided a draft of this report to the Army Corps of Engineers; the departments of Agriculture, Commerce, and the Interior; EPA; and the Chair of the Interagency Wetlands Working Group for review and comment. The Interagency Wetlands Working Group consolidated the comments of the principal agencies involved in wetlands, except for Commerce, and included the agencies’ input to the working group in its response. The agencies commented on a variety of issues.
For example, the Working Group and EPA expressed concern that our report did not adequately clarify the level and nature of the involvement of federal agencies in wetlands-related activities. In addition, the Working Group and EPA believe that we emphasized the differences in the wetlands acreage estimates produced by FWS and NRCS without adequately noting that both inventories generally agree that the rate of wetlands losses has declined and that efforts are under way to reconcile the differences. We believe that our report accurately characterizes both the level and nature of the involvement of federal agencies in wetlands-related activities and the differences in the estimates produced by the two inventories. We point out in the report that while 36 federal agencies are involved in wetlands-related activities, 6 of these agencies account for the majority of funding and staffing associated with such activities. We have added a statement to the report to further clarify the roles of the various federal agencies in wetlands-related activities. In addition, while we report that the wetlands acreage estimates produced by FWS and NRCS are not completely consistent, we also acknowledge that both have reported that the rate of wetlands losses has declined. Furthermore, we have included a discussion of the efforts recently undertaken by the Interagency Wetlands Working Group to produce a single wetlands status and trends report. With the exception of the Department of the Interior, which agreed with our recommendation, neither the Working Group nor the other principal agencies specifically commented on our recommendation. The Chair of the Working Group enclosed a copy of the action plan recently developed by the group to guide its efforts to produce a single status and trends report. We revised our report to include a discussion of the Working Group’s plan. This plan, which appears in appendix VII, was referred to by several agencies in their individual comments. 
A more complete discussion of the comments provided by the Working Group and the agencies and our evaluation of their comments are contained in appendixes IV, V, and VI. We performed our work from July 1997 through June 1998 in accordance with generally accepted government auditing standards. A complete discussion of our objectives, scope, and methodology appears in appendix VIII. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will send copies to the Secretaries of Agriculture, Commerce, Defense, and the Interior; the Administrator of the Environmental Protection Agency; the Chair of the Interagency Wetlands Working Group; and other interested parties. We will also make copies available to others upon request. Please call me at (202) 512-3841 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix IX.

Executive Order 11990, Protection of Wetlands, May 24, 1977. Executive Order 11990, as amended, directs each federal agency to take action to minimize the destruction of wetlands and to preserve and enhance the benefits of wetlands in carrying out certain responsibilities. These responsibilities include (1) acquiring, managing, and disposing of federal lands and facilities; (2) providing federally financed or assisted construction; and (3) conducting federal activities and programs affecting land use, including water and related land resources planning, regulating, and licensing activities. Each agency shall also, to the extent permitted by law, avoid undertaking or providing assistance for new construction located in wetlands unless (1) there is no practicable alternative and (2) all practicable measures to minimize harm to wetlands are included. Executive Order 11988, Floodplain Management, May 24, 1977.
Executive Order 11988, as amended, directs each federal agency to take action to reduce the risk of flood loss, to minimize the impact of floods on human safety, health, and welfare, and to restore and preserve the natural and beneficial values served by floodplains in carrying out certain responsibilities. These responsibilities include (1) acquiring, managing, and disposing of federal lands and facilities; (2) providing federally financed or assisted construction; and (3) conducting federal activities and programs affecting land use, including water and related land resources planning, regulating, and licensing activities. Each federal agency must evaluate the potential effects of any actions it may take in a floodplain. If an agency proposes to conduct, support, or allow an action in a floodplain, it shall consider alternatives to avoid adverse effects and incompatible development in the floodplains. The Coastal Barrier Resources Act (16 U.S.C. 3501 et seq.). The Coastal Barrier Resources Act, as amended by the Coastal Barrier Improvement Act, prohibits most new federal expenditures and financial assistance for development of coastal barriers included in the Coastal Barrier Resources System, a major portion of which is wetlands. The purpose of the act is to minimize the loss of human life, wasteful expenditure of federal revenues, and damage to fish, wildlife, and other natural resources associated with the development of coastal barriers. The Coastal Wetlands Planning, Protection and Restoration Act (16 U.S.C. 3951 et seq.). This 1990 act authorizes spending for coastal wetlands conservation and restoration projects and designates 18 percent of the total amount in the Sport Fish Restoration Account for these projects.
The act created a task force, composed of the Secretary of the Army, the Administrator of the Environmental Protection Agency, the Governor of Louisiana, and the Secretaries of Agriculture, Commerce, and the Interior, to develop a comprehensive approach for protecting and restoring coastal wetlands in Louisiana. Seventy percent of the revenues go to restoring Louisiana’s coastal wetlands. For the remaining 30 percent of the revenues, the act created two coastal wetlands cost-sharing programs: 15 percent is for the National Coastal Wetlands Conservation Grant Program for coastal habitat projects in all coastal states, except Louisiana, and the remaining 15 percent is for coastal North American Waterfowl Management Plan projects that are approved under the North American Wetlands Conservation Act grant program. These two programs are administered by FWS. The Coastal Zone Act Reauthorization Amendments of 1990 (16 U.S.C. 1451 et seq.). Under the Coastal Zone Act Reauthorization Amendments of 1990 (subtitle C of the Omnibus Budget Reconciliation Act of 1990), the Secretary of Commerce sets guidelines and provides funding for states to carry out coastal zone management programs. The term “coastal zone” includes wetlands. For states without a coastal zone management program, the act provides funding to develop such a program. It also provides coastal zone enhancement grants to coastal states to improve (1) coastal wetlands protection, (2) natural hazards management, (3) public beach access, (4) marine debris management, (5) assessments of coastal growth and development, and (6) environmentally sound siting of coastal energy facilities. The act amended the Coastal Zone Management Act of 1972. The Emergency Wetlands Resources Act of 1986 (16 U.S.C. 3901 et seq.). This act promotes the conservation of wetlands in order to maintain the public benefits they provide.
The purpose is to intensify cooperation and acquisition efforts among private interests and local, state, and federal governments for the protection, management, and conservation of wetlands. The act authorized the acquisition of wetlands consistent with a National Wetlands Priority Conservation Plan. It also (1) contains options for generating revenues to acquire and protect wetlands; (2) requires that statewide comprehensive outdoor recreation plans specifically address wetlands; (3) directs the completion of the map inventory of the nation’s wetlands and the production, by September 30, 2004, of a digital wetlands database; and (4) requires a study of the impacts of federal programs on wetlands. The act raised the price of duck stamps, required entrance fees at selected units of the national wildlife refuge system, and required that an amount equal to the annual duties on imported firearms and ammunition be paid into the Migratory Bird Conservation Fund. The act requires FWS to complete its wetlands inventory mapping of the contiguous United States by 1998. The Endangered Species Act (16 U.S.C. 1531 et seq.). This act prohibits any federal agency from undertaking or funding a project that will threaten a rare or endangered species. Some wetlands development is restricted by this statute. The act can be used to prevent alterations of wetlands necessary to maintain a species’ critical habitat—i.e., the geographical area that has the physical or biological features essential to conserve the species and that may require special management consideration or protection. The Everglades National Park Protection and Expansion Act of 1989 (16 U.S.C. 410r-5 et seq.). This act provides for the acquisition of 107,600 acres to be added to Everglades National Park in southern Florida and provides for an increase in the water flow to the park to help restore and protect its water-dependent ecosystem. The additional acres would expand the size of the park to 1.5 million acres.
The Federal Agricultural Improvement and Reform Act of 1996 (1996 Farm Bill). Established by section 334 of the 1996 Farm Bill, the Environmental Quality Incentives Program (EQIP) (16 U.S.C. 3839aa et seq.) combines four Department of Agriculture conservation programs: the Agricultural Conservation Program, the Water Quality Incentives Program, the Great Plains Conservation Program, and the Colorado River Basin Salinity Program. EQIP offers farmers and ranchers 5- to 10-year contracts that may provide up to 100 percent incentive payments and up to 75 percent cost-sharing for conservation practices to combat serious threats to soil, water, and related natural resources, including wetlands. The Congress authorized $200 million annually for EQIP through fiscal year 2002 to be paid by the Commodity Credit Corporation. The Federal Aid to Wildlife Restoration Act of 1937 (16 U.S.C. 669 et seq.). The purpose of this act is to provide assistance to the states and territories in carrying out projects to restore, enhance, and manage wildlife resources and habitat. The Federal Water Pollution Control Act (“Clean Water Act”). Section 404 of the Clean Water Act (33 U.S.C. 1344) provides the principal federal authority to regulate the discharge of dredged and fill material to waters of the United States, including wetlands. Under section 404, landowners and developers must obtain permits to carry out dredging and fill activities in navigable waters, which include wetlands. This act specifically exempts certain activities—normal agriculture, silviculture (forestry), and ranching—provided they do not convert areas of U.S. waters to uses to which they were not previously subject and do not impair the flow or circulation of such waters or reduce their reach. Section 402 (33 U.S.C. 1342) authorizes a national system for regulating sources of water pollution, which can affect wetlands, administered either by the Environmental Protection Agency or through approved state programs.
Under this section, pollutant discharges without a permit are prohibited; permitted discharges are allowed subject to statutory restrictions. The Fish and Wildlife Act of 1956 (16 U.S.C. 742 et seq.). This act established the Fish and Wildlife Service and authorized the Secretary of the Interior to take such steps as required for the development, advancement, management, conservation, and protection of fish and wildlife resources. Such authority can be used to protect wetlands vital to many fish and wildlife species. The Fish and Wildlife Coordination Act (16 U.S.C. 661 et seq.). This act requires that wildlife conservation be given consideration equal to that given other purposes of water resources development projects constructed by federal agencies. This act empowers FWS and the Department of Commerce’s National Marine Fisheries Service to evaluate the impact on fish and wildlife of all new federal projects and federally permitted projects, including projects permitted under section 404 of the Clean Water Act. The Food, Agriculture, Conservation, and Trade Act of 1990 (1990 Farm Bill). Established by the 1990 Farm Bill, the Wetlands Reserve Program (WRP) (16 U.S.C. 3837 et seq.) is a voluntary program to restore and protect wetlands. Under WRP, farmers can apply to enroll prior converted wetlands, degraded wetlands, and buffer areas under permanent easement. Landowners are paid up to the agricultural value of the land for granting the government a permanent easement or 75 percent of this value for 30-year easements. They may also receive a large part of the costs to carry out conservation measures and to protect wetlands functions on lands subject to an easement. The 1990 Farm Bill established the Farmers Home Administration’s Conservation Easement Program (7 U.S.C. 1985), under which lands, such as wetlands, that either have reverted or may revert to the Department of Agriculture’s Farmers Home Administration can be preserved or restored.
The Secretary of Agriculture may grant or transfer easements on land obtained from farm foreclosures or voluntary conveyance to federal or state agencies. The Prohibition on Loans to Fill Wetlands (7 U.S.C. 2006(e)) provision of the 1990 Farm Bill prohibits the Secretary of Agriculture from approving any loan to drain, fill, level, or otherwise manipulate a wetland. The Food Security Act of 1985 (1985 Farm Bill). Section 404 of the Clean Water Act does not regulate activities such as drainage, ditching, and channelization for agricultural production, which are major causes of past losses of wetlands. To fill this gap in coverage, the Food Security Act of 1985 was enacted to reduce the amount of wetland conversion directly related to agricultural production and included two major wetlands-related provisions, the Swampbuster provision and the Conservation Reserve Program. The Swampbuster Provision (16 U.S.C. 3821 et seq.) denies federal farm program benefits to farmers who produce a commodity crop on converted wetlands. Wetlands that were converted for agricultural purposes prior to the passage of this act are exempt from this provision. Landowners who want to convert wetlands may offset losses of wetlands through mitigation efforts, including enhancing, restoring, or creating wetlands. Farmers can regain federal benefits if they restore converted wetlands. The Secretary of Agriculture has discretion to determine for which program benefits violators are ineligible and to provide good faith exemptions. The Conservation Reserve Program (16 U.S.C. 3831 et seq.) is a voluntary program offering annual rental payments to farmers to protect highly erodible and environmentally sensitive lands, including wetlands, with grass, trees, and other long-term cover. The 1996 Farm Bill extended the program until fiscal year 2002, capped overall enrollment at 36.4 million acres, and provided funding through the Commodity Credit Corporation.
Annual rental payments are based on the agricultural rental value of the land, and cost-share payments can cover up to 50 percent of a participant’s costs. Participants may also receive an additional 25 percent of their costs for the restoration of wetlands. The 1990 and 1996 Farm Bills modified these programs. The Great Lakes Fish and Wildlife Restoration Act of 1990 (16 U.S.C. 941 et seq.). Concerned about the damage done to the Great Lakes Basin and its fish and wildlife resources, including the loss of 80 percent of the wetlands in the basin, the Congress passed legislation to address this problem. The Great Lakes Fish and Wildlife Restoration Act directs the Director of the Fish and Wildlife Service to seek to achieve several goals in administering the agency’s programs. One of these goals is protecting, maintaining, and restoring fish and wildlife habitat, including the enhancement and creation of wetlands. Sections 1006(d) and 1007(a) of the Intermodal Surface Transportation Efficiency Act of 1991 (23 U.S.C. 103(i)(11) and 133(b)(11)). The Intermodal Surface Transportation Efficiency Act of 1991 provides that federal funds apportioned to a state for the National Highway System and the surface transportation program may be used for wetlands mitigation efforts related to projects funded under these programs. These mitigation efforts may include participation in wetlands mitigation banks; contributions to statewide and regional efforts to restore, conserve, enhance, and create wetlands; and the development of statewide and regional wetlands conservation and mitigation plans. Contributions toward these efforts, subject to certain conditions, may take place before a project is constructed. The Land and Water Conservation Act of 1965 (16 U.S.C. 460l et seq.). This act supports the purchase of natural areas, including wetlands, at federal and state levels.
The Emergency Wetlands Resources Act of 1986 amended the Land and Water Conservation Fund Act to (1) permit the funds to be used to acquire wetlands and (2) require the states to include the acquisition of wetlands as part of their comprehensive outdoor recreation plans.

The Magnuson-Stevens Act (16 U.S.C. 1801 et seq.). Amended on October 11, 1996, by the Sustainable Fisheries Act, the Magnuson-Stevens Act calls for direct action to stop or reverse the continued loss of fish habitat. The act requires cooperation among the National Marine Fisheries Service, eight Regional Fishery Management Councils, fishing participants, federal and state agencies, and others in achieving the essential habitat goals of habitat protection, conservation, and enhancement.

The Migratory Bird Conservation Act (16 U.S.C. 715 et seq.). This act established a Migratory Bird Conservation Commission to approve areas recommended by the Secretary of the Interior for acquisition with Migratory Bird Conservation Funds. The Commission also approves wetlands conservation projects recommended by the North American Wetlands Conservation Council under the North American Wetlands Conservation Act.

The Migratory Bird Hunting and Conservation Stamp Act (16 U.S.C. 718 et seq.). Passed in 1934, this act requires waterfowl hunters aged 16 and older to purchase “duck stamps,” the proceeds of which are deposited into the Migratory Bird Conservation Fund to be used to acquire small wetland and pothole areas and rights-of-way providing access to such areas.

The National Environmental Policy Act of 1969 (42 U.S.C. 4321 et seq.). This act requires that environmental impact statements be prepared for major federal actions. The statements must include assessments of the environmental impacts of the proposed actions, any adverse environmental effects that cannot be avoided should the proposals be implemented, and alternatives to the proposed actions.
Assessments under this act have been applied to major federal actions affecting wetlands.

The National Flood Insurance Act of 1968 (42 U.S.C. 4001 et seq.). This act requires communities to develop federally approved floodplain management programs. Administered by the Federal Emergency Management Agency, the act provides subsidized flood insurance to property owners in communities with approved programs. Communities that do not adopt an approved program to regulate future floodplain uses are ineligible for most federal financial assistance, including federal disaster assistance in case of flood. Property owners whose land is in a floodplain cannot get federally guaranteed mortgages, loans, or other forms of financial assistance unless the property is covered by flood insurance. In general, the programs apply to structures in floodplains. Although not the act’s primary focus, wetlands development is covered by the programs, since nearly all coastal and most inland wetlands occur in floodplains.

The National Wildlife Refuge System Administration Act of 1966 (16 U.S.C. 668dd et seq.). This act established a National Wildlife Refuge System by combining former “wildlife refuges, areas for the protection and conservation of fish and wildlife threatened with extinction, wildlife ranges, game ranges, wildlife management areas, and waterfowl production areas” into a single refuge system. The system currently includes 513 national wildlife refuges; the Fish and Wildlife Service (FWS) estimates that about 33 percent of this acreage is wetlands.

The North American Wetlands Conservation Act of 1989 (16 U.S.C. 4401 et seq.). The act encourages voluntary public-private partnerships to conserve North American wetlands ecosystems and wetland-dependent migratory birds in support of the North American Waterfowl Management Plan, in an effort to increase waterfowl populations. The act authorizes appropriations of up to $30 million annually for its implementation.
The act is financed, in part, by funds received from the investment of unobligated Federal Aid in Wildlife Restoration Act funds, which are derived from excise taxes on ammunition and sporting arms, handguns, and certain archery equipment, as well as by fines, penalties, and forfeitures associated with Migratory Bird Treaty Act violations. Between 50 and 70 percent of the available funds are to be spent on wetlands conservation projects in Canada and Mexico; the remaining funds are to be spent on projects in the United States. Projects are recommended to the Migratory Bird Conservation Commission for funding, and costs are shared with state and private organizations working toward the goal of wetlands preservation.

The Resource Conservation and Recovery Act of 1976 (42 U.S.C. 6901 et seq.). This act, which is administered by the Environmental Protection Agency, controls the disposal of hazardous waste and could reduce the threat of chemical contamination of wetlands.

The Rivers and Harbors Act of 1899 (33 U.S.C. 403). Section 10 of this act requires that permits be obtained from the Army Corps of Engineers for dredge, fill, and other activities that could obstruct navigable waterways, which can include wetlands.

The Water Bank Act (16 U.S.C. 1301 et seq.). Passed in 1970, this act authorized the Water Bank Program to provide funds to purchase 10-year easements on wetlands and adjacent areas.
The act’s objectives were to preserve, restore, and improve the wetlands of the nation and thereby (1) conserve surface waters, (2) preserve and improve migratory waterfowl and other wildlife resources, (3) reduce runoff and soil and wind erosion, (4) contribute to flood control, (5) contribute to improved water quality and reduced stream sedimentation, (6) contribute to improved subsurface moisture, (7) reduce the number of new acres coming into production and retire lands now in production, (8) enhance the natural beauty of the landscape, and (9) promote comprehensive and total water management planning. Under the act, private landowners or operators enter into agreements with the federal government in which they promise not to drain, fill, level, burn, or otherwise destroy wetlands and to maintain ground cover essential for the resting, breeding, or feeding of migratory birds. In exchange, the landowners or operators receive annual payments.

The Watershed Protection and Flood Prevention Act (16 U.S.C. 1003a). The Secretary of Agriculture may provide cost-share assistance to enable project sponsors, often local flood control districts, to acquire perpetual wetland or floodplain conservation easements. The easements would perpetuate, restore, and enhance the natural capacity of wetlands and floodplains to retain excess floodwater, improve water quality and quantity, and provide habitat for fish and wildlife. Project sponsors must provide up to 50 percent of the cost of acquiring such easements.

The Water Resources Development Act of 1986 (33 U.S.C. 2294). Section 1135 of this act authorized the Secretary of the Army to review water resources projects constructed by the Corps to determine the need for modifications that would improve the quality of the environment. Projects that address environmental degradation caused by Corps projects may also be undertaken.
Nonfederal parties must agree to provide 25 percent of a project’s cost and usually 100 percent of the operation, maintenance, replacement, and rehabilitation costs. Up to 80 percent of the nonfederal share may be provided as work-in-kind.

The Water Resources Development Act of 1990 (33 U.S.C. 2317). Section 307 of this act includes, as part of the Army Corps of Engineers’ water resources development program, (1) an interim goal of no overall net loss of the nation’s remaining wetlands base and (2) a long-term goal to increase the quality and quantity of the nation’s wetlands. The act also requires the Secretary of the Army to develop, in consultation with the Environmental Protection Agency, the Department of the Interior’s Fish and Wildlife Service, and other appropriate agencies, a wetlands action plan to achieve the goal of no net loss of remaining wetlands. This action plan, to be completed by November 28, 1991, was never published. The act also authorized the Secretary of the Army to establish and implement a demonstration program for the purposes of determining and evaluating the technical and scientific long-term feasibility of wetlands restoration, enhancement, and creation as a means of contributing to these goals.

The Water Resources Development Act of 1992 (33 U.S.C. 2326). Section 204 of the act authorized the Secretary of the Army to carry out projects for the protection, restoration, and creation of aquatic and ecologically related habitat, including wetlands, in connection with dredging for a navigation project. Nonfederal parties must agree to provide 25 percent of a project’s construction cost and pay 100 percent of the operation, maintenance, replacement, and rehabilitation costs of the project.

The Water Resources Development Act of 1996 (33 U.S.C. 2330).
Section 206 of this act authorized the Secretary of the Army to carry out aquatic ecosystem restoration projects that will improve the quality of the environment, are in the public interest, and are cost-effective. Individual projects are limited to $5 million in federal cost. Nonfederal parties must contribute 35 percent of the cost of construction and 100 percent of the cost of operation, maintenance, replacement, and rehabilitation.

At least 36 federal agencies are, to varying degrees, involved in wetlands-related activities. This appendix briefly describes some of the principal wetlands-related programs or activities of these agencies.

The Department of Agriculture (USDA) has a number of programs designed to promote wetlands protection. Some of these, such as the “Swampbuster” provision and the Conservation Reserve Program, were included in the 1985 Farm Bill and later modified in the 1990 and 1996 Farm Bills. The Wetlands Reserve Program, also included in the 1990 Farm Bill, represents one of Agriculture’s major programs to restore wetlands. These incentive-based conservation programs were established to restore and protect wetlands and to minimize the detrimental impacts to those wetlands already converted. Although other Agriculture agencies are also involved in wetlands-related activities, the Natural Resources Conservation Service and the Farm Service Agency are primarily responsible for administering the Department’s wetlands programs.

The mission of the Natural Resources Conservation Service (NRCS), formerly the Soil Conservation Service, is to assist in the conservation, development, and productive use of the nation’s soil, water, and related resources. NRCS provides technical and financial assistance to landowners to achieve conservation objectives; this assistance includes the restoration and enhancement of wetlands.
NRCS is responsible for delineating wetlands to implement the 1985 Farm Bill (Swampbuster); the 1990 Food, Agriculture, Conservation, and Trade Act; and the 1996 Farm Bill. In addition, it administers the Wetlands Reserve Program, the Water Bank Program, and the National Resources Inventory.

The Wetlands Reserve Program is a voluntary program to restore and protect wetlands on private property. Landowners have an opportunity to receive financial incentives to enhance wetlands in exchange for retiring marginal agricultural land. A landowner voluntarily limits future use of the land yet retains private ownership, and the landowner and NRCS develop a plan for the restoration and maintenance of the wetlands. According to agency officials, more than 600,000 acres had been enrolled in the program as of April 15, 1998, at a cost of about $500 million. Although no more than 975,000 acres can be enrolled in the program through the year 2000, the administration included a proposal in its Clean Water Action Plan to expand the Wetlands Reserve Program to allow the enrollment of up to 250,000 acres of wetlands each year.

The Water Bank Program was established by the Water Bank Act in 1970. This program provides funds to purchase 10-year easements on wetlands and adjacent areas. It provides annual rental payments to landowners for preserving wetlands in important migratory waterfowl nesting, breeding, or feeding areas. The program focuses primarily on contracts with landowners in several central and western flyway states. Over $80 million was spent on the program in fiscal years 1990 through 1997. Although the last contract was awarded in 1995 and no new funding has been put into the program since then, a small amount of funds becomes available each year because of landowners’ withdrawals. These funds have been used to enroll a few new easements in North Dakota during fiscal years 1996 and 1997. Program funding will expire in fiscal year 2005.
The National Resources Inventory (NRI) determines the conditions of land cover and use, soil erosion, prime farmland, wetlands, and other natural resource characteristics on nonfederal rural land in the United States. Inventories have been conducted at 5-year intervals by NRCS, and the program is currently making the transition to an annualized inventory process. The 1992 inventory covered some 800,000 sample sites representing the nation’s nonfederal land, about 75 percent of the nation’s total land area. The purpose of the NRI is to provide information that can be used to effectively formulate policy and develop natural resource conservation programs at the national and state levels.

The Farm Service Agency (FSA) manages the Conservation Reserve Program, a voluntary program offering rental payments to farmers to protect highly erodible and environmentally sensitive cropland. Wetlands and land to be restored to wetlands are enrolled through a competitive bid process in which offers are evaluated on the basis of their relative environmental benefits. An estimated 692,000 acres of wetlands are currently protected or have been restored by the program, which also protects a significant amount of upland acres associated with wetlands. The program provides an estimated $40 million each year for the protection of wetlands.

In addition, FSA administers the Conservation Reserve Enhancement Program under the Conservation Reserve Program. This program provides the opportunity to partner with state governments to target the most environmentally critical areas.

FSA also administers and enforces the Swampbuster provision of the 1985 Farm Bill. FSA provides wetlands information to producers and third parties, monitors compliance with regulations, responds to public complaints and producers’ appeals of FSA decisions, and deals with violations of the regulations.
In each state, FSA’s operations are carried out in conjunction with a state committee appointed by the Secretary of Agriculture. In each of the more than 3,000 agricultural counties throughout the United States, a county committee is responsible for the local administration of FSA’s operations.

The Forest Service administers over 191 million acres of land containing an estimated 9.1 million acres of wetlands, the majority of which are in Alaska and in National Forests east of the Mississippi. The Forest Service is headed by a Chief with six deputies, three of whom have areas of responsibility that include wetlands program elements. The National Forests and Grasslands have an active wetlands program covering wetlands assessment, restoration, and compliance. Several staff units share these responsibilities: Watershed and Air Management; Wildlife, Fish and Rare Plants; and Range Management are the most active in wetlands assessment and restoration. The Forest Service has recently joined in partnership with the Department of the Interior’s Bureau of Land Management in instituting an assessment method called Proper Functioning Condition in the western states to assess the condition of riparian (stream-side) areas and wetlands.

The Forest Service also provides USDA leadership for forestry on nonfederal lands, with an emphasis on the management and protection of the estimated 52 million acres of nonindustrial private forest lands, including wetlands, in the contiguous United States. The Forest Service also works with other USDA agencies in implementing the Wetlands Reserve Program.

In addition, the Forest Service has a program of research on forested wetlands that emphasizes developing and testing management practices that restore and maintain wetlands. Riparian ecosystems and their associated wetlands are studied in many parts of the country.
Region-specific research focuses on southern coastal plain wetlands, south-central bottomland hardwood swamps, northern peatlands in the Great Lakes states, and wetlands in interior Alaska. Overall, the Forest Service employed 93 full-time-equivalent staff, for a total wetlands-related program cost of $16.5 million, in fiscal year 1997.

The Agricultural Research Service (ARS) conducts research on the cost-effective practices and environmental benefits associated with maintaining and enhancing existing and constructed wetlands. ARS spent about $8.8 million (in 1997 constant dollars) and 20 scientific years (similar to full-time-equivalent staff-years) on wetlands-related research from fiscal year 1990 through 1997.

The Economic Research Service (ERS) performs research and policy analysis at the national level and identifies long-term trend information. ERS has studied wetlands-related issues since the mid-1970s, when a major conversion of wetlands to cropland occurred. Although ERS does not budget funds specifically for wetlands-related research, an ERS official estimated that about 1.25 full-time-equivalent staff-years (about $120,000) are spent on wetlands-related research each year.

The National Oceanic and Atmospheric Administration’s National Marine Fisheries Service is responsible for protecting and conserving living marine, estuarine, and anadromous fish resources and habitats. In addition to performing administrative, management, and regulatory wetlands functions, the Service analyzes and comments on construction proposals and applications for dredge and fill permits issued by the Corps of Engineers. It is also an active participant, along with the Army Corps of Engineers, the Department of Agriculture’s Natural Resources Conservation Service, the Department of the Interior’s Fish and Wildlife Service, and the Environmental Protection Agency, in projects funded under the Coastal Wetlands Planning, Protection and Restoration Act of 1990.
Through its Damage Assessment and Restoration Program, the National Oceanic and Atmospheric Administration uses monetary awards from polluters and other responsible parties to “restore, replace, or acquire the equivalent of” marine resources damaged by oil spills, hazardous releases, or other human-induced environmental disturbances. To date, this program has initiated restoration activities at over 25 sites around the country.

The National Ocean Service; the Office of Oceanic and Atmospheric Research; and the National Environmental Satellite, Data, and Information Service are also involved in wetlands-related activities. Their wetlands-related responsibilities include implementing the Coastal Zone Management Program, improving the health of the nation’s estuaries and coastal habitats, and performing coastal land and ocean sensing, mapping, and monitoring. For example, the National Ocean Service’s National Estuarine Research Reserve Program has placed about 440,000 acres of estuarine waters, wetlands, and uplands into active management and stewardship, with the cooperation of the coastal states and territories and of constituent groups.

All three military services were involved in a variety of wetlands-related activities in fiscal years 1990 through 1997. Although the services engaged in restoration, creation/construction, research, and other wetlands-related activities, their primary activities during this period were enhancement, mapping, inventorying, and delineation.

The U.S. Army Corps of Engineers is the primary federal agency responsible for regulating wetlands development under section 404 of the Clean Water Act. Section 404 authorizes the Corps to issue or deny permits for the discharge of dredged or fill materials into U.S. waters. Of the approximately 12,000 to 15,000 project-specific permit applications the Corps evaluates each year, about 8,000 result in issued permits and about 200 are denied.
The remaining applications either qualify for authorization under a general permit, are withdrawn by the applicant, or are canceled by the Corps when the applicant fails to provide the information required for a decision. The Corps also verifies the authorization of approximately 75,000 minor activities each year under the terms and conditions of regional and nationwide general permits.

Under the President’s Wetlands Plan, issued in August 1993, the Corps was to establish an administrative appeals program whereby the public could appeal permit denials and jurisdictional determinations. The adoption of this program has been held up by funding limitations; however, a partial appeals program, covering denials, is expected to be in place in fiscal year 1999. Without such an appeals system, the public must resort to litigation to challenge a regulatory decision by the Corps.

Both the Corps and the Environmental Protection Agency (EPA) have enforcement responsibilities under section 404. EPA has statutory enforcement authority to deal with discharges of dredged or fill material where no permit has been obtained; the Corps has similar authority for dealing with violations of permit conditions. In January 1989, the Corps and EPA entered into a memorandum of agreement that established the Corps as the agency primarily responsible for initial investigations of reported violations. Both agencies have authority to seek civil or administrative remedies for unauthorized discharges in wetlands and can, under appropriate circumstances, pursue criminal action in their enforcement cases.

During fiscal year 1997, 6,300 unauthorized discharges were reported to the Corps. The Corps resolved 6,350 reported violations (some of which had been reported in fiscal year 1996) by requiring restoration of the damage to wetlands or other actions. Some violations remained open at the end of fiscal year 1997.
In addition to the unauthorized discharges, the Corps conducts compliance inspections of about 7,000 permitted activities per year. Almost 500 violations are noted as a result of these inspections.

The Corps is also involved in ecosystem restorations, many of which address wetlands, riparian, and aquatic ecosystems. The Corps’ restoration projects may be linked with modifications to the operation or structure of existing projects. Dredged material can also be used to benefit aquatic ecosystems. The Kissimmee River project is one example of a major Corps effort to restore the environmental value of an area. The project will require over $247 million in federal funds and will enhance wetlands by establishing a more natural timing and flow through the Kissimmee basin.

Public Law 101-646 stresses the nation’s concern for conserving and restoring coastal wetlands. Because Louisiana faces the most alarming wetlands loss rates, the law’s primary focus is on the restoration and protection of those wetlands. The law calls for a Louisiana Coastal Wetlands Conservation and Restoration Task Force made up of representatives from five federal agencies and the Governor of Louisiana to develop a comprehensive plan for addressing coastal Louisiana’s severe wetlands loss problem. Every year, this task force approves and provides to the Congress priority lists of projects. Since 1991, this law has provided an average of over $38 million annually in federal funding for Louisiana restoration projects.

In addition to the activities conducted by the military services, the Strategic Environmental Research and Development Program funded a study in 1993 to, among other things, identify installation requirements relating to wetlands protection and management.
A number of the Department of Energy’s (DOE) programs were involved in wetlands-related activities through field and operations offices, national laboratories, and research facilities during fiscal years 1990 through 1997. Although the wetlands-related activities conducted by these program offices and facilities ranged from complying with regulations to education and public outreach, research and restoration were the primary activities. Over half of the estimated $46 million (in 1997 constant dollars) spent in fiscal years 1990 through 1997 was associated with these two activities.

The Bonneville Power Administration (BPA), one of the five federal power marketing agencies within DOE, supplies about half of the electricity used in the Pacific Northwest. Some of BPA’s power projects affect wetlands, and the agency has developed wetlands programs to protect, mitigate, and enhance fish and wildlife. BPA has spent about $10 million (in 1997 constant dollars) since 1990 on its wetlands-related activities, primarily to acquire land for mitigation purposes.

The Federal Energy Regulatory Commission is an independent regulatory agency within DOE whose responsibilities include approving the construction of natural gas pipelines and the rates for oil pipelines; licensing and inspecting private, municipal, and state hydroelectric projects; and overseeing related environmental matters. The Commission’s wetlands-related activities involve reviewing proposed projects for environmental impacts, including impacts on wetlands.

The Western Area Power Administration, a federal power marketing agency within DOE, supplies hydroelectric power to over 600 wholesale power customers. Western Area Power’s wetlands-related activities consisted primarily of creating or constructing new wetlands as mitigation for expansion projects that affected existing wetlands.

The Department of Housing and Urban Development has very little involvement in wetlands-related activities.
However, in the few cases in which its programs are used to assist housing and community development for proposed projects located in wetlands, the Department requires compliance with the National Environmental Policy Act and Executive Order 11990. The Department could not provide information on the staffing and funding associated with these activities.

The Department of the Interior has a number of programs addressing various aspects of wetlands, ranging from the protection, restoration, and enhancement efforts of the Fish and Wildlife Service to the research efforts of the U.S. Geological Survey.

FWS is one of the primary agencies responsible for wetlands. In addition to reviewing section 404 permit applications and making recommendations to the Corps of Engineers on whether to approve a permit application and on any conditions that should be incorporated into it, FWS is active in programs that protect, restore, and enhance wetlands.

The Partners for Fish and Wildlife Program began in 1987, restoring wetlands functions and values on private lands through voluntary cooperative agreements. Since then, the program has expanded to include the restoration of other important wildlife habitat, including native-grass prairie, riparian habitat, in-stream habitat, and declining-species habitats. The program has entered into over 17,000 voluntary cooperative agreements with private landowners for the purpose of restoring habitat, and it also provides technical assistance to other federal agencies with conservation programs, primarily the Department of Agriculture. In fiscal years 1990 through 1997, the program received $123.4 million in funds. To date, it has restored over 360,000 acres of wetlands, 128,000 acres of prairie grassland, 930 miles of riparian habitat, and 90 miles of in-stream aquatic habitat.
The North American Waterfowl Management Plan has the goal of restoring continental waterfowl populations to the numbers seen in the 1970s. To do this, it joins the efforts of public agencies and private conservation groups, applying the joint venture concept to develop partnerships and matching-grant funding arrangements to carry out wetlands protection and restoration. The initial plan, created in 1986, involved Canada and the United States; it was updated and expanded in 1994 to include Mexico. There are 11 habitat joint ventures in the United States and 3 in Canada. The plan calls for 11.1 million acres of wetlands and associated uplands to be protected and 14.7 million acres to be restored or enhanced; habitat goals for each of the plan’s joint ventures are identified in the plan update. Actual joint venture projects are funded individually by the joint venture partners involved, and FWS receives some appropriations for the associated administrative efforts.

The North American Wetlands Conservation Act Grant Program was authorized by the North American Wetlands Conservation Act of 1989. The program encourages voluntary public-private partnerships to conserve North American wetlands ecosystems. Principal conservation actions include the acquisition, creation, enhancement, and restoration of wetlands and wetlands-associated habitat. From fiscal year 1991 through March 1998, 576 projects in the United States, Canada, and Mexico, involving over 900 partners, were approved for funding. Approximately 3.7 million acres of wetlands and associated uplands have been acquired, restored, or enhanced in the United States and Canada, while nearly 20 million acres have been affected in large biosphere reserves through conservation education and management plan projects in Mexico.

National Coastal Wetlands Conservation Grants are authorized by the Coastal Wetlands Planning, Protection and Restoration Act of 1990.
The source of funding for the grant program is a portion of the revenues deposited in the Sport Fish Restoration Account of the Aquatic Resources Trust Fund. Program eligibility extends to all states bordering on the Atlantic, Gulf (except Louisiana), and Pacific coasts, as well as states bordering the Great Lakes and U.S. territories, trust areas, and Puerto Rico. The share of project costs funded by the federal grant cannot exceed 50 percent unless the coastal state has established a trust fund, or a fund derived from a dedicated recurring source of moneys, for the purpose of acquiring coastal wetlands, other natural areas, or open spaces; in that case, the federal share may be increased to 75 percent. Since 1992, the program has protected almost 64,000 acres of wetlands and associated uplands through acquisition and restoration.

FWS administers the 92-million-acre National Wildlife Refuge System for the benefit of fish, wildlife, and plants and their habitats. The Service estimates that about one-third of these acres are wetlands, excluding tundra in Alaska. The 513 national wildlife refuges and 37 wetlands management districts, located in all 50 states, encompass a tremendous variety of wetland types providing important habitat for migratory birds, anadromous fish, and species threatened with extinction. Refuge managers use water control structures, moist soil management, prescribed burning, and other techniques to restore, maintain, and enhance refuge wetland habitats.

The National Wetlands Inventory (NWI) program began in 1978 and has had two goals since its inception: to produce (1) detailed maps for the country and (2) comprehensive, statistically valid acreage estimates of the nation’s wetlands.
The Emergency Wetlands Resources Act of 1986, as amended, required the Secretary of the Interior, acting through the Fish and Wildlife Service, to complete maps for the conterminous United States by September 30, 1998, and to update the report on wetlands status and trends on a 10-year cycle. To date, the status and trends efforts have generated three congressional reports.

As the manager of more than 16 million acres of wetlands, the National Park Service is a key participant in the preservation, restoration, and management of wetlands habitats across the United States. Although many wetlands in National Park System units are in essentially pristine condition, others have been damaged by drainage, pollution, diking, and filling. In 1991, the National Park Service initiated a Service-wide program designed to enhance its wetlands protection, restoration, inventory, applied research, and education efforts. This program is implemented through project funding and technical assistance from the Service’s Water Resources Division.

The mission of the Bureau of Land Management is to sustain the health, diversity, and productivity of the public lands for the use and enjoyment of present and future generations. For riparian-wetlands areas, this involves inventory/classification, project development/maintenance, monitoring, protection/mitigation, and acquisition/expansion through Land and Water Conservation Fund purchases and land exchanges. Bureau of Land Management field offices develop and carry out site-specific management needs, proposals, and work plans for a variety of wetlands projects, ranging from prescribed grazing management, to protective enclosures around small springs, to larger wetlands development projects. The Bureau is also engaged in several joint venture partnerships that focus on regional wetlands protection and development relative to the North American Waterfowl Management Plan.
The Bureau of Reclamation’s mission has evolved over the past 10 years from one focusing on the development of water resources and civil works construction projects to one emphasizing water resources management, protection, and development, as well as the maintenance and enhancement of existing facilities. The Bureau’s wetlands activities include compensatory mitigation required to address unavoidable impacts caused by the construction and operation of projects. Compensatory mitigation may entail wetlands restoration, enhancement, and/or development. The Bureau also voluntarily participates with cost-sharing partners in developing, restoring, and enhancing wetlands to establish and improve wetlands functions and values associated with its projects. The U.S. Geological Survey’s wetlands-related activities are predominantly research and mapping; the agency does not directly carry out restoration, protection, or enhancement efforts. Its primary efforts are focused on obtaining an increased understanding of the structure and function of wetlands, both as individual units and as components of large hydrologic systems. In many cases, the scientific information produced feeds directly into the wetlands restoration and management activities of other agencies. Examples of research efforts include the following:

Inventorying and monitoring Louisiana’s coastal wetlands. The Survey documented wetlands loss through a time series of habitat maps and reports and provides spatial databases for planning and monitoring large-scale wetlands restoration projects of the Coastal Wetlands Planning, Protection and Restoration Act.

Science for the restoration of the south Florida, San Francisco Bay, and Chesapeake Bay ecosystems. Most of the wetlands work of the Integrated Natural Resources Science Program (formerly Ecosystem Program) was conducted in south Florida. The program’s costs were $3.7 million, $7.4 million, and $7.3 million for fiscal years 1995, 1996, and 1997, respectively.
The Bureau of Indian Affairs administers and manages approximately 52 million acres of land held in trust by the United States for Native Americans. Most Indian land is located in arid regions not known for their wetlands values; however, approximately one million acres of trust land contain wetlands that possess significant fish and wildlife resources. Approximately 400,000 acres of wetlands are located on 18 Indian reservations in Minnesota, Michigan, and Wisconsin. Tribes in these three states, in conjunction with the North American Waterfowl Management Plan, have developed a consolidated set of wetlands management and development project proposals for their reservations. According to a Bureau official, there is no other budget or program for addressing wetlands located on reservations in other states. Approximately 34,000 acres of wetlands were restored, enhanced, created, or constructed through 1997. The Bureau has no staff funded for this work. The Office of Surface Mining’s mission is to carry out the requirements of the Surface Mining Control and Reclamation Act of 1977, as amended, in cooperation with the states and tribes. The Office is responsible for ensuring that any wetlands that may be affected by mining are addressed in the permitting process, coordinated with the Corps of Engineers, and mitigated if necessary. Furthermore, the Corps’ nationwide permits require the Office of Surface Mining or the state regulatory authority to approve wetlands mitigation plans prior to submission. In addition, the abandoned mine land program has encouraged the construction and enhancement of wetlands as part of the Federal Reclamation Program and the abandoned mine land program’s state grant process. In fiscal years 1990 through 1997, the Office spent a little over $1 million on wetlands-related activities. The Minerals Management Service manages the Outer Continental Shelf oil and gas program.
The Service’s responsibilities, as set forth in the Outer Continental Shelf Lands Act, include assessing the potential impacts of oil and gas activities on the coastal environment, including wetlands, and managing oil and gas activities to minimize any impacts. Most of the major wetlands studies sponsored by the Service were funded prior to fiscal year 1990, although, in fiscal years 1990 through 1997, approximately $507,000 was spent on wetlands research in the Gulf of Mexico. Furthermore, in fiscal year 1997, the Service initiated a cooperative study of coastal wetlands impacts related to pipeline canal widening rates with the Biological Resources Division of the U.S. Geological Survey. The Minerals Management Service funding of this 4-year study was $106,000. Within the Department of Justice, lawyers in the Environment and Natural Resources Division and the 94 United States Attorney Offices handle all wetlands-related litigation, including affirmative and defensive civil cases and prosecution of criminal violations. This work includes litigation to enforce the law when individuals and/or companies fill wetlands without a permit, to defend legal challenges to section 404 permits that have been issued by the government, and to defend inverse condemnation cases filed against the government because of permit decisions. An Environment Division official estimated that in fiscal years 1990 through 1997, its attorneys addressed 1,010 cases at an expense of $19 million (in constant 1997 dollars). In fiscal years 1992 through 1997, the U.S. Attorney Offices addressed 67 section 404 cases. The Department of State supports the Ramsar Wetlands Convention through voluntary contributions to (1) the Ramsar Bureau’s core budget, (2) Conference of Parties meetings, and (3) wetlands projects. The Convention on Wetlands, adopted in Ramsar, Iran, in 1971, is the only international accord dedicated to the protection of wetlands.
The 106 nations that are parties to the Ramsar Convention have designated over 900 wetlands sites of international importance to promote their sustainable use and management. The United States contributes about 25 percent of the total Ramsar Bureau’s budget. The Department of State also provides funding for the Wetlands for the Future project, whose goals are to train wetlands managers and improve their expertise in wetlands conservation in the Western Hemisphere. The Department spent about $4.3 million (in constant 1997 dollars) on its wetlands-related activities in fiscal years 1991 through 1997. The United States Coast Guard must comply with the provisions of sections 404 and 401 of the Clean Water Act and Executive Order 11990. According to the Coast Guard, consideration is given to the impacts on wetlands before any new real property acquisition, new construction projects, or maintenance projects for its shore facilities are undertaken. However, although the Coast Guard conducted a number of wetlands-related activities in fiscal years 1990 through 1997, including restoration, enhancement, and creation as mitigation, it does not keep records of such activities or track the funding or staffing associated with them. The Federal Highway Administration administers the Federal Aid and Federal Lands Highway Program. As part of the highway development process, state departments of transportation carry out components of the wetlands management and compliance process, including identification, delineation, and mitigation of highways’ impacts on wetlands. Neither the Federal Highway Administration nor the state departments of transportation regulate wetlands. However, staff time is spent in the regulatory compliance process performing tasks primarily for the section 404 process. Most of the over $523 million (in constant 1997 dollars) spent by the Federal Highway Administration in fiscal years 1990 through 1997 was related to mitigation for highway construction.
As one of the primary agencies responsible for wetlands, the Environmental Protection Agency has regulatory and enforcement responsibilities under section 404 of the Clean Water Act. EPA also has regulatory functions that include the control of discharges of pollutants in all waters of the United States, including wetlands. In addition, EPA performs wetlands-related research. EPA also has established programs that improve wetlands protection by increasing the emphasis on watershed or ecosystem management approaches; support and improve capabilities of state, tribal, and local wetlands programs; provide technical assistance, including scientific information and tools; and support outreach and education to meet the needs of its partners. As part of these programs, EPA provides wetlands grants to assist state, tribal, and local government organizations in building their wetlands expertise, capabilities, and programs. EPA spent $241 million (in constant 1997 dollars) and expended over 1,450 full-time-equivalent staff-years on its wetlands-related activities in fiscal years 1990 through 1997. The Federal Emergency Management Agency (FEMA) has no specific wetlands programs, but it does operate programs that affect wetlands, such as the Hazard Mitigation Grant Program, the Public Assistance Program, and the National Flood Insurance Program. FEMA’s Public Assistance Program provides funding to state and local governments and nonprofit entities to repair damaged facilities and also funds other disaster response and recovery activities, such as debris removal and disposal. The Hazard Mitigation Grant Program assists states and local communities to implement long-term hazard mitigation measures that substantially reduce the risk of future damage. In addition, FEMA’s National Flood Insurance Program maps flood hazard areas and makes flood insurance available only in those communities that adopt and enforce floodplain management ordinances that meet or exceed minimum standards.
Many of these floodplains are also wetlands. The National Flood Insurance Program’s Community Rating System provides discounts on flood insurance premiums to communities that take actions beyond the program’s minimum requirements. Federal Emergency Management Agency staff also review some of the Corps’ permitting decisions relative to section 404 of the Clean Water Act and other agencies’ National Environmental Policy Act documents. The General Services Administration’s wetlands-related activities are related to fulfilling the “only practicable alternative” requirement of Presidential Executive Order 11990, Protection of Wetlands. Agency officials responsible for leasing actions or for site acquisitions must certify that the site selected is the only practicable alternative despite the impacts on wetlands. In fiscal years 1990 through 1997, the administration spent about $90,000 (in constant 1997 dollars) and no identified full-time-equivalent staff-years on wetlands-related activities. The International Boundary and Water Commission is a bi-national commission created by the governments of the United States and Mexico to apply the provisions of various boundary and water treaties and to settle differences arising from such applications. The U.S. Section’s wetlands responsibilities include maintaining the Lower Rio Grande Flood Control Project by mowing and clearing brush growing within the river floodway, where needed. The Commission’s U.S. Section is also creating a wildlife refuge primarily focused on waterfowl habitat in a moist-soil managed wetland in El Paso, Texas. About $834,000 and six full-time-equivalent staff-years were associated with these wetlands-related activities in fiscal year 1997. The National Aeronautics and Space Administration is involved in a number of wetlands-related activities. These activities include restoration, construction, research, mapping, delineation, and education.
In fiscal years 1990 through 1997, the agency spent about $1.6 million (in constant 1997 dollars) and six full-time-equivalent staff-years on these activities. The National Science Foundation was involved in a number of wetlands research and research-related activities. Although research accounts for a majority of its activities, the National Science Foundation was also involved in wetlands mapping, restoration, and education/public outreach activities. In fiscal years 1990 through 1997, the National Science Foundation spent almost $47 million (in constant 1997 dollars) and 27 full-time-equivalent staff-years on its wetlands research and research-related activities. The Smithsonian Institution’s wetlands-related activities range from acquiring easements to education/public outreach efforts. The Smithsonian Environmental Research Center provides public education and professional training on the tidal and freshwater wetlands of the Chesapeake Bay region. The Tennessee Valley Authority has been involved primarily in wetlands research and, to a limited extent, other activities such as restoration, mapping, inventorying, and delineation. In fiscal years 1990 through 1997, the Tennessee Valley Authority spent about $15 million (in constant 1997 dollars) and 115 full-time-equivalent staff-years on its wetlands-related activities.

Totals may not add because of rounding. N/A indicates that the agencies did not provide the requested information. With the reorganization of the Department of Agriculture in 1994, the management of some programs, such as the Water Bank Program, was transferred from the Farm Service Agency (formerly the Agricultural Stabilization and Conservation Service) to the Natural Resources Conservation Service (formerly the Soil Conservation Service). Funding associated with NRCS’ wetlands-related activities does not include the cost of performing wetlands delineations under the Swampbuster provision from fiscal year 1992 through 1996.
According to NRCS officials, the costs of performing wetlands delineations were not tracked during this period. The costs of wetlands delineations are reflected in the totals shown for the remaining years. Funding associated with these efforts in 1990, 1991, and 1997 was $73.6 million, $38.1 million, and $33.8 million, respectively. The funding data shown include the expenditures associated with the Corps’ regulatory program. Although most of the Corps’ regulatory funding is devoted to the section 404 program, the costs of regulating other activities are also included. However, the Corps does not separately track the costs of regulating wetlands. The staffing data shown for the Corps are primarily for the regulatory program and do not include staff involved in the Corps’ other wetlands-related activities.

Less than $0.01 million.

These numbers represent the resources associated with the efforts of the Environment and Natural Resources Division’s headquarters staff. Although United States Attorney Offices are also involved in prosecuting and defending section 404 cases, the Executive Office of United States Attorneys could not provide funding and staffing data.

The following are GAO’s comments on the Clinton Administration’s Interagency Wetlands Working Group’s letter dated May 15, 1998.

1. The report notes that the agencies are involved in wetlands-related activities to varying degrees. Not only do we point out that 6 of the 36 agencies are the primary agencies involved in and responsible for implementing wetlands-related programs, but we state that these 6 agencies account for more than 70 percent of the funding and 65 percent of the staffing associated with such activities. In addition to the five agencies cited by the Interagency Wetlands Working Group as the primary wetlands agencies, we included the Department of Agriculture’s Farm Service Agency.
During the period covered by our review, the Farm Service Agency accounted for a significant amount of the funding and staffing associated with wetlands-related activities because of its involvement in such programs as the Conservation Reserve Program and Swampbuster. In addition, appendix II of the report contains detailed information on the types of wetlands-related activities that each agency conducts. However, to further clarify the roles of the various federal agencies in wetlands-related activities, we revised the caption in the report to highlight that agencies are involved to varying degrees and included a statement in this section to indicate the nature of the wetlands-related activities of the other 30 agencies.

2. We reviewed the administration’s national wetlands plan during the course of our work and made several references to it in our report. However, we did not include a more in-depth discussion of the plan because the administration’s national wetlands plan dealt primarily with streamlining and improving the wetlands regulatory program, not improving wetlands data.

3. The report acknowledged not only the different mandates and methods used by the National Wetlands Inventory and the National Resources Inventory, but also recognized that both have reported a decline in the rate of wetlands loss. However, as we point out, the estimates of wetlands acreage made by the two inventories are not completely consistent—a fact also recognized in the administration’s Clean Water Action Plan. Furthermore, the previous efforts of interagency task forces as well as the current efforts by the Interagency Wetlands Working Group, taken in response to the Clean Water Action Plan, emphasize the need to reconcile the differences in the estimates of the two inventories. We added information to the report recognizing the effort undertaken by the Interagency Wetlands Working Group to reconcile the two inventories and produce a single wetlands status and trends report.
We also included a copy of the Interagency Wetlands Working Group’s May 1998 action plan in appendix VII.

4. The agencies provided editorial changes, technical corrections, and clarifying information that have been incorporated in our final report where appropriate.

5. We revised the report to reflect the development of the Interagency Wetlands Working Group’s May 1998 action plan. This plan addresses how one of the three actions intended to improve wetlands data contained in the administration’s 1998 Clean Water Action Plan will be accomplished. Details of how the other actions will be accomplished have not been developed.

6. We did not revise the title or first sentence of this section as suggested by EPA. However, its comment indicates that EPA generally agrees that the consistency and reliability of wetlands acreage data reported by federal agencies are questionable.

7. We revised this sentence to clarify our point that the estimates produced by the two inventories are not completely consistent. As we previously noted, the report already acknowledges the different mandates and methods used by the National Wetlands Inventory and the National Resources Inventory as well as recognizing that both have reported a decline in the rate of wetlands loss.

8. The numbers shown in table 1 were provided by FWS and NRCS, respectively. The numbers shown in this table served as the basis for FWS’ September 17, 1997, news/press release in which the Service reported that the annual rate of wetlands loss had declined to about 117,000 acres. Subsequent to receiving agency comments on our draft report, we contacted FWS and were told that the FWS numbers shown in the table were correct and had not been withdrawn. However, FWS noted that in its final report only one wetlands loss number will be shown. The losses attributed to the various causes, e.g., agriculture, development, etc., will not be reported.
In addition, as we have previously noted, we revised our report to reflect the recent efforts undertaken by the Interagency Wetlands Working Group to reconcile the differences in the two inventories’ estimates and produce a single wetlands status and trends report.

9. The information presented in this section is merely to provide the purpose(s) for which the various task forces were established. Therefore, we did not add the additional material provided by EPA.

10. We did not revise the title of our report because we believe that it accurately reflects the current situation. Although we recognize that the administration has recently undertaken efforts to resolve the wetlands data problems identified in our report, these actions will not be completed for several years.

11. The estimates presented in our report for the NRI are the 1982-1992 NRI wetlands estimates. As noted in our report, questions and concerns about the NRI’s 1992 estimates were raised by officials from both the National Wetlands Inventory and EPA.

12. The section in question provides supporting details for our finding that the consistency and reliability of the estimates made by the two federal resource inventories and the data reported by the agencies on their accomplishments are questionable. USDA may be correct in its assertion that users of inventory data incorrectly add figures to the inventory estimates and cause overstating of accomplishments. However, as this section points out, the current reporting practices of the agencies include the double counting of accomplishments as well as a lack of consistency in the use of terms and the inclusion of nonwetlands acreage.

13. We revised the report to reflect the development of an action plan by the Interagency Wetlands Working Group.
However, although the action plan addresses one of the actions contained in the administration’s Clean Water Action Plan—the development of a single wetlands status and trends report—it does not address how the administration plans to accomplish the other actions it announced. In addition, the success of the Working Group’s efforts will require a long-term commitment as well as considerable time and effort by the agencies. We have therefore revised our conclusions to reflect that although the administration has undertaken efforts, much remains to be done before the administration has resolved the wetlands data problems identified in our report.

14. As we point out in our report, not only did EPA express concern about the estimates of both inventories, but officials from each of the agencies responsible for the inventories have questioned the estimates of the other.

The following are GAO’s comments on the Department of Commerce’s letter dated May 15, 1998.

1. Appendix I is an update of the statutes presented in our previous report Wetlands Overview: Federal and State Policies, Legislation, and Programs (GAO/RCED-92-79FS, Nov. 22, 1991). Because the purpose of the appendix is to highlight major statutes dealing with wetlands issues, we did not revise our report.

Concerned about the lack of consolidated information on the federal commitment to wetlands, you asked us to (1) develop an inventory of the federal agencies involved in wetlands-related activities and the funding and staffing associated with their activities during fiscal years 1990 through 1997 and (2) determine if the wetlands data reported by these agencies are consistent and reliable. To develop an inventory of federal agencies involved in wetlands-related activities, we reviewed studies and reports on wetlands-related policies and programs. We also contacted officials from six agencies—the Army Corps of Engineers, the U.S.
Department of Agriculture’s Farm Service Agency and the Natural Resources Conservation Service, the Department of Commerce’s National Oceanic and Atmospheric Administration, the Department of the Interior’s Fish and Wildlife Service, and the Environmental Protection Agency. These agencies were identified in a prior GAO report as the federal agencies primarily responsible for administering wetlands-related programs. We asked officials from these agencies to identify other federal agencies that had either requested technical assistance or had been involved in joint wetlands projects. Thirty additional federal agencies were identified through these efforts. To identify the funding and staffing associated with federal agencies’ wetlands-related activities in fiscal years 1990 through 1997, we contacted officials from the Army Corps of Engineers, five U.S. Department of Agriculture agencies, eight Department of the Interior agencies, the Department of Justice, the Environmental Protection Agency, the Federal Emergency Management Agency, and the National Oceanic and Atmospheric Administration. We obtained and reviewed documentation on the wetlands-related activities conducted and the funding and staffing associated with these efforts. We also surveyed 18 other federal agencies identified as being involved in wetlands-related activities to determine the extent of their involvement and to obtain information on the funding and staffing associated with their efforts. We attempted to obtain information on the actual expenditures and full-time-equivalent staff-years associated with the agencies’ wetlands-related activities. However, because some of the agencies do not track their wetlands activities separately or have integrated their wetlands-related activities into other program activities, the agencies were not always able to document the resources expended. In most of these instances, the agencies provided estimates of funding and staffing associated with their efforts.
Because verifying the volume of data collected would have required a significant investment of time and resources, we did not verify the completeness, accuracy, and reliability of the data provided. We attempted to reconcile inconsistencies in the data provided. However, reconciliation was not always possible because many of the agencies do not have a focal point for wetlands or, in some cases, a clear understanding of all wetlands-related activities occurring within the agency and the associated funding and staffing. To determine the consistency and reliability of wetlands acreage data reported by the agencies, we interviewed officials and obtained and reviewed documentation on two federal resource inventories—the National Wetlands Inventory and the National Resources Inventory, maintained by Interior’s Fish and Wildlife Service and Agriculture’s Natural Resources Conservation Service, respectively. In addition, we discussed and reviewed documentation on the practices used by the federal agencies to report their wetlands accomplishments. We also discussed recently announced initiatives to improve wetlands data with officials of the Federal Geographic Data Committee’s Wetlands Subcommittee and the Deputy Assistant Secretary of the Army (Civil Works), who chairs the White House’s Interagency Wetlands Working Group. To obtain additional perspectives on the various wetlands-related activities, we visited Coastal Wetlands Planning, Protection and Restoration Act projects in Louisiana and met with responsible federal and state officials to discuss the program’s operations.

Alan R. Kasdan
Pursuant to a congressional request, GAO: (1) developed an inventory of the federal agencies involved in wetlands-related activities and the funding and staffing associated with their activities during fiscal years (FY) 1990 through 1997; and (2) determined if the data on wetlands acreage reported by these agencies are consistent and reliable. GAO noted that: (1) at least 36 agencies conducted wetlands-related activities during FY 1990 through FY 1997; (2) the total funding associated each year with the agencies' efforts ranged from about $508 million in FY 1990 to about $787 million in FY 1997; (3) staffing associated with the agencies' activities during this period ranged from about 3,271 full-time-equivalent staff-years in FY 1993 to about 4,308 full-time-equivalent staff-years in FY 1997; (4) six agencies were primarily involved in and responsible for implementing wetlands-related programs; (5) these six agencies accounted for more than 70 percent of the funding and 65 percent of the staffing associated each year with such activities; (6) the consistency and reliability of wetlands acreage data reported by the federal agencies are questionable; (7) the Fish and Wildlife Service and the Natural Resources Conservation Service maintain resource inventories that provide estimates of the nation's remaining wetlands acreage, annual rates of wetlands gains and losses, and the primary cause(s) for losses; (8) although both inventories have reported that the rate of wetlands loss has declined, the inventories' estimates are not completely consistent; (9) a single set of wetlands acreage numbers that could be used to evaluate the progress made in achieving the goal of no net loss of the nation's remaining wetlands is not available; (10) officials from each of the agencies have questioned the estimates made by the other, and the Environmental Protection Agency has expressed concern about both inventories; (11) the agencies' current reporting practices do not permit the 
actual accomplishments of the agencies to be determined; (12) since 1989, several interagency groups have attempted to improve wetlands data; (13) because their efforts have not resolved these problems, the administration recently announced new efforts to improve wetlands data; (14) in May 1998, the administration issued a plan to accomplish a key action--the development of a single wetlands status and trends report; and (15) as of June 10, 1998, details have not yet been developed on how the other actions announced by the administration will be accomplished.
Military test and training ranges are used primarily to test weapon systems and to train military forces. Test ranges are used to evaluate warfighting systems and functions in a natural environment and under simulated operational conditions. Training ranges include air ranges for air-to-air, air-to-ground, drop zone, and electronic combat training; live-fire ranges for artillery, armor, small arms, and munitions training; ground maneuver ranges to conduct realistic force-on-force and live-fire training; and sea ranges to conduct ship or submarine maneuvers. In February 2014, DOD reported to Congress that it had 533 test and training ranges throughout the United States and overseas. These included 456 Army ranges, of which 384 were in the United States; 23 Navy ranges, of which 18 were in the United States; 40 Air Force ranges, of which 35 were in the United States; and 14 Marine Corps ranges, of which 13 were in the United States. Figure 1 shows the location of major DOD test and training ranges throughout the United States as of June 2014. Before DOD can determine whether a project or transaction poses a potential security threat to a range by providing a foreign entity a permanent platform for observing operations, it must first become aware of the proposed project or transaction. Multiple federal entities may be involved in identifying and approving potential business activities near DOD ranges. DOD, working with these federal entities, uses multiple methods to determine what activities are occurring in proximity to its ranges. None of these methods, with the exception of the Committee on Foreign Investment in the United States (CFIUS), discussed below, was designed to consider security concerns. The following entities and processes are available to DOD to become aware of and gather information on projects located near ranges. 
CFIUS, an interagency committee chaired by the Department of the Treasury and including DOD as a member, reviews certain covered transactions to assess the impact on national security of foreign control of U.S. companies, such as by considering the control of domestic industries and commercial activity by foreign citizens as it affects the capability and capacity of the United States to meet the requirements of national security. DOD has the opportunity to comment on these transactions, including raising any security concerns. For more information on CFIUS, see appendix II. The Bureau of Land Management within the Department of the Interior administers over 245 million acres of federal land for a variety of uses, including energy development, recreation, and timber harvesting. The Bureau issues a wide variety of permits, licenses, or leases for use of public land, including permits and leases for energy development, and administers mining claims. According to Bureau of Land Management and DOD officials, local Bureau of Land Management personnel may work with DOD installations within their jurisdictions to notify them of projects in proximity to the installation and test and training ranges. In some cases, the office may notify the installation when leases are issued or projects are proposed in proximity to test and training ranges. The Bureau of Ocean Energy Management within the Department of the Interior promotes energy independence and economic development and manages the natural resources of the Outer Continental Shelf, including oil and gas, marine minerals, and renewable energy. Under a 1983 Memorandum of Agreement, the Bureau and DOD consult to resolve conflicts between Outer Continental Shelf exploration and development and the requirements for DOD to use the Outer Continental Shelf for national defense and security. 
Following these consultations, DOD and the Department of the Interior agree on areas that may require deferral from leasing or that can be leased subject to lessee advisories or lease stipulations allowing for joint use. The Bureau of Safety and Environmental Enforcement within the Department of the Interior works to promote safety, protect the environment, and conserve resources offshore through regulatory oversight and enforcement. Key functions of the Bureau include oil and gas permitting, facility inspections, regulations and standards development, safety research, data collection, technology assessments, field operations, incident investigation, environmental compliance and enforcement, and oil spill prevention and readiness. The Federal Aviation Administration within the Department of Transportation works to provide a safe and efficient aerospace system and reviews proposed structures for obstruction concerns. In addition, parties proposing any project over 200 feet in height or within certain distances of an airport or runway are required by law and regulation to provide notice and certain project information to the Federal Aviation Administration. As part of its evaluation process, the Federal Aviation Administration’s obstruction evaluation system automatically notifies interested agencies, including DOD and the individual military services, based on the agencies’ preferences. DOD’s Siting Clearinghouse, which was set up to work with renewable energy project developers to mitigate encroachment concerns at DOD installations, is automatically notified about all renewable energy projects filed with the Federal Aviation Administration. The National Environmental Policy Act of 1969 process requires environmental reviews of certain actions on federally controlled land. As part of this process, the public must be notified of impending action on federal land and is invited to comment. 
DOD may be included as a cooperating agency when a project is located near a DOD installation or when there is an identified DOD interest involved. Community Planning and Liaison Officers at Navy and Marine Corps installations establish relationships with nearby communities and local governments, and provide a mechanism by which the installations can become aware of and address any concerns stemming from proposed projects near ranges. These entities and processes may apply to a wide variety of activities that can occur in proximity to DOD test and training ranges, including renewable and conventional energy projects, mineral extraction (mining), and oil and natural gas exploration. For example, the Bureau of Land Management grants mining rights on federal land near DOD ranges. Moreover, as discussed above, the Bureau also administers minerals mining claims, including those in proximity to DOD's test and training ranges. Figure 2 shows the mining claims on federal land outside of the Fallon Range Training Complex in Nevada. Similar to the large number of mining claims near the Fallon Range Training Complex, there is also extensive oil and gas exploration in the Gulf of Mexico near many onshore Navy and Air Force installations, including Eglin Air Force Base, where DOD set up the Integrated Training Center for the F-35 Joint Strike Fighter. In 2006, Congress passed the Gulf of Mexico Energy Security Act of 2006, which opened several areas of the Gulf of Mexico to new oil and gas leasing but also placed a moratorium on oil and gas leases in portions of the gulf, in part to avoid interfering with DOD's training mission. Figure 3 shows oil and gas activity as of October 2013 in the Gulf of Mexico and the moratorium area. Our prior work has shown that utilizing a risk management approach allows an agency to more effectively prioritize its resources and enhance its ability to respond to a threat.
Under DOD Instruction 3020.45, DOD utilizes a risk management approach to manage its critical infrastructure program. According to DOD, risk management is the process of identifying, assessing, and controlling risks arising from operational factors and making decisions that balance risk cost with mission benefits. A key step in this approach is to conduct a risk assessment to provide a way to continuously evaluate and prioritize risks and recommend strategies for mitigation. DOD’s risk assessment process has three core elements: criticality, vulnerability, and threats. Criticality identifies the consequence of the loss of a particular asset based on national security concerns or the impact to DOD’s missions. A criticality assessment identifies key assets and infrastructure that support DOD missions, units, or activities and are deemed mission critical by military commanders or civilian agency managers. Vulnerability is a weakness or susceptibility of an installation, system, asset, application, or its dependencies that could cause it to suffer a degradation or loss as a result of having been subjected to a certain level of threat or hazard. A vulnerability assessment is a systematic examination of the characteristics of an installation, system, asset, or its dependencies to identify vulnerabilities. Threats refer to an adversary having the intent, capability, and opportunity to cause loss or damage. DOD has not conducted a risk assessment that includes prioritizing ranges based on mission criticality, determining their vulnerabilities to foreign encroachment, and assessing the degree to which foreign encroachment could pose a threat to the mission of the ranges. As a result, the department does not know the extent to which foreign encroachment poses a threat to its test and training ranges. Neither DOD nor the services have determined which of their ranges are the most critical to protect or assessed any vulnerabilities and threats posed by foreign encroachment. 
As discussed above, utilizing a risk management approach, which includes conducting a risk assessment, allows an agency to more effectively prioritize its resources and enhance its ability to respond to a threat. A DOD instruction governing its critical infrastructure program states that determining the criticality of key assets is a core element of conducting a risk assessment. While this instruction provides a framework that could be used to manage critical infrastructure across the department, it does not specifically mention risk assessment in relation to foreign encroachment. Rather, it establishes policy to manage the identification, prioritization, and assessment of defense critical infrastructure as a comprehensive program. Therefore, this instruction could be used by DOD as a model for how to deal with the issue of foreign encroachment. Navy and Air Force officials said that the lack of an established methodology or criteria, as well as the unique mission capabilities of each range, makes it difficult to determine the relative criticality of each range as it relates to foreign encroachment, including which ranges would be the most valuable collection points for foreign adversaries trying to gather intelligence and which ranges house the most sensitive test and training activities. In addition, the services do not have guidance on how to conduct such an assessment because the issue of foreign encroachment is new. However, DOD has resolved similar challenges in the past. For instance, in an October 2009 review of DOD's management of electrical disruptions, we found that DOD had not developed guidelines for addressing the unique challenges related to conducting some vulnerability assessments of electrical power assets. We recommended that DOD develop explicit guidelines, based on existing Defense Critical Infrastructure Program guidance, for assessing critical assets' vulnerabilities to long-term electrical power disruptions.
DOD concurred with this recommendation and developed a tool for assessing critical assets' vulnerabilities to power disruptions. Similarly, specific guidance on foreign encroachment could assist DOD and the services in managing this issue. (See GAO, Defense Critical Infrastructure: Actions Needed to Improve the Identification and Management of Electrical Power Risks and Vulnerabilities to DOD Critical Assets, GAO-10-147 (Washington, D.C.: Oct. 23, 2009).) Navy officials told us that they are developing guidance designed to assess the criticality of Navy ranges in terms of foreign encroachment and expect this guidance to be issued sometime during 2015; however, as of December 2014, little progress has been made in developing this guidance. These officials further stated that, once guidance is finalized, they intend to begin the assessment process. Officials told us that they expect that, once this assessment process is complete, it will be a critical component of any effort to prioritize ranges by their importance. This, in turn, could support any Navy efforts to address foreign encroachment by targeting counterintelligence activities on the most critical ranges. According to DOD, another core element of a risk assessment is to determine vulnerabilities, or the weakness of an asset that could cause it to suffer a loss. DOD and the services have raised concerns about the level of vulnerability facing some of their test and training ranges with regard to foreign encroachment. Specifically, Navy and Air Force headquarters officials as well as officials from all four of the ranges in our review told us that they had concerns about the number of investment-related projects by foreign entities occurring near their respective ranges—projects that they stated could pose potential security threats.
Those officials told us that they were particularly concerned that foreign entities may have an increased ability to observe sensitive military testing or training activities if they are able to establish a persistent presence outside the services’ test or training ranges. Further, officials at all four of the ranges in our review expressed such concerns to varying degrees. For example, officials from the Fallon Range Training Complex and the Nevada Test and Training Range, both of which are used to provide realistic air-to-ground combat training, told us that they have observed a number of energy development and mining projects near both ranges that may be owned or controlled by foreign entities. Officials at Eglin Air Force Base, where the Air Force conducts land, air, and water test and training, and at White Sands Missile Range, where the services evaluate new weapon systems, also expressed concerns about the potential for foreign entities to observe testing and training activities at their respective ranges. DOD officials noted, however, that the services have not conducted formal assessments to determine the extent to which these vulnerabilities exist at their ranges. According to DOD’s instruction, along with establishing the criticality and vulnerability of assets, the third core element of a risk assessment is to assess the threats and hazards. Counterintelligence officials from the services’ criminal investigation agencies said that they have conducted some threat or risk assessments on specific locations or installations, as well as investigated some individual instances of commercial activity. However, they have not conducted threat assessments focused on foreign encroachment across DOD’s test and training ranges. 
Although these counterintelligence officials have investigated some individual instances of commercial activity, they have not conducted a systematic assessment of this potential threat because in most of the cases that they have investigated, they have not seen evidence that foreign encroachment posed a threat to the range. Therefore, they were reluctant to assign additional resources to this issue. DOD officials stated that given the uncertainty surrounding this issue, a risk assessment would be beneficial. However, DOD has not taken steps to initiate such a risk assessment or established a time frame for doing so. Without guidance from DOD for the services to follow in conducting a risk assessment that establishes a time frame for completion, identifies critical ranges, and then assesses vulnerabilities and threats to these ranges, DOD may not be able to determine what, if any, negative impact foreign encroachment may be having on its test or training ranges. DOD does not have information that officials say they need, such as the ownership of companies conducting business on federally managed land near DOD’s ranges, to determine if specific transactions on federally owned or managed land pose a threat to ranges. Leading practices state that to support decision-making, it is important for organizations to have complete, accurate, and consistent information. Range officials at all four installations in this review stated that they need more specific information to determine whether an individual transaction poses a threat to their range. Further, DOD officials have identified some possible sources or methods for obtaining this information but have not formally collaborated with other federal agencies on how to gather this information. Collaboration can be broadly defined as any joint activity that is intended to produce more public value than could be produced when organizations act alone. 
Leading practices state that agencies can enhance and sustain collaborative efforts by engaging in several practices that are necessary for a collaborative working relationship. These practices include identifying and addressing needs by leveraging resources; agreeing on roles and responsibilities; and establishing compatible policies, procedures, and other means to operate across agency boundaries. In order for DOD to determine if an entity engaging in investment activities near one of its test or training ranges poses a potential risk for foreign encroachment, DOD officials said that they would need additional identifying information from governing federal agencies responsible for issuing public land-use permits or leases. Such information could include, for example, identification of any parent companies or whether a U.S.-based entity is owned or controlled by a foreign entity. Service headquarters and range officials at all four ranges in our review said that they generally have good informal working relationships with governing federal agencies that allow them to find out some information about transactions on federal land near the ranges and that federal agency officials, including those from the Bureau of Land Management or the Bureau of Ocean Energy Management, frequently contact them to informally let them know of a proposed transaction near the range. Despite these relationships, DOD and range officials expressed concerns that these governing agencies are not able to provide DOD with the necessary information to identify potential encroachment.
Officials from the Bureau of Land Management, the Bureau of Ocean Energy Management, and the Bureau of Safety and Environmental Enforcement within the Department of the Interior, as well as the Federal Aviation Administration within the Department of Transportation, told us that they face legal, regulatory, or resource challenges that may prevent them from collecting information that is unrelated to their respective missions. These constraints lead to knowledge gaps that may be acceptable for approving leases or permits on federal lands but could adversely affect DOD's ability to identify potential security threats near the ranges. For example: The Paperwork Reduction Act of 1980 requires, among other things, that agencies undertake a number of procedural steps before collecting information from the public, including justifying the need for such information collection to the Office of Management and Budget. As part of this process, individual agencies are required to certify to the Office of Management and Budget, among other things, that proposed collections of information are necessary for the proper performance of the functions of the agency. As a result, Department of the Interior officials said that they generally limit the information they collect to what is directly tied to the agencies' respective missions of effective land or resource management. Officials from the Bureau of Land Management said that the information that they are permitted to collect on potential lease-holders or permit applicants is prescribed by regulation based on a longstanding interpretation of their authorizing statute.
Further, Federal Aviation Administration officials told us that while they have a process for collecting information about proposed structures that are more than 200 feet in height or are within certain distances of airports or runways, this process is designed to support the agency's mission of maintaining a safe and efficient aerospace system, not to collect the information that DOD would need to help identify instances of foreign encroachment on its ranges. Therefore, officials from these agencies said that they believe they would need some type of change in either their authorizing statutes or regulations to be able to collect this information. Agency officials also raised resource challenges as an issue in collecting additional information. Department of the Interior officials expressed concerns that any changes to either their statutory authorities or implementing regulations in order to collect additional information may create additional costs to the Department of the Interior as its bureaus conduct their respective missions. The officials told us they recognize the challenges DOD faces in identifying potential cases of foreign encroachment, but they also said their agencies' respective missions have little to do with national security issues, and agency officials questioned whether, under current law, their appropriations could properly be used to finance data collection unrelated to their missions and for DOD's exclusive use. These officials expressed concerns that changes to their authorities or additional requirements imposed upon them may be burdensome, given their limited available resources. DOD has had some success in obtaining information that could be used to identify activities that could provide opportunities for foreign encroachment, but has not discussed options for obtaining additional information with other federal agencies.
However, as discussed above, DOD officials said that when they do find out that an entity proposing a project near a range is foreign-owned, they generally obtain information on an informal basis through developed interagency relationships and not through any systematic process. For example, at some DOD installations, officials work with Bureau of Land Management district and field office managers to receive notifications on or discuss projects that may have an impact on DOD activities or interests. At one location—Naval Air Station Fallon—the Navy and the Bureau of Land Management have established a military liaison position to provide further coordination on both Navy and Bureau of Land Management interests due to the large number of energy development and mining projects occurring near the Fallon Range Training Complex. This military liaison position is funded by the Navy, and the duties of the liaison include coordinating with the Navy on use of public lands and providing advice on highly technical and complex programs. In addition, Bureau of Land Management and DOD installation officials said that other DOD installations have good working relationships with their local Bureau of Land Management offices to discuss issues of importance to both agencies. Through these relationships, individual ranges are often notified of potential transactions near the ranges, but for the reasons stated above, range officials at the four installations in our review stated that they still need additional information on the transactions to be able to assess whether a transaction poses a threat to the range. The Navy, through the Center for Naval Analysis, recently conducted a study on this issue and identified additional sources of information that DOD could possibly leverage, including the Bureau of Economic Analysis within the Department of Commerce.
Because this is an emerging issue for DOD, DOD has not taken steps to fully identify all potential sources of information or to reach out to other federal agencies that may have this information to discuss options for obtaining it. Without engaging potential sources of information on commercial activities near its ranges, DOD is hindered in its efforts to determine if a project could present a threat to test or training range activities. DOD's concerns about various forms of encroachment have been long-standing. As potential opportunities for foreign encroachment have presented themselves, some in DOD have become increasingly concerned about the potential vulnerability and risk to its domestic air, land, and sea test and training ranges from such encroachment. However, DOD has not determined the likelihood of foreign encroachment through persistent presence on federally owned or managed lands in proximity to the test and training ranges, versus other means that may give foreign adversaries the opportunity to observe new weapon systems and operational tactics. Although the Navy has taken steps to develop guidance on assessing the risk of foreign encroachment to its ranges, as of December 2014, this guidance has not been issued. Further, the other departments have not taken any steps toward developing this type of guidance. Without guidance from DOD for the military departments to follow in conducting a risk assessment—including a time frame for completion—that identifies critical ranges, then assesses vulnerabilities and threats to these ranges, DOD may not be able to determine what, if any, negative impact foreign encroachment may be having on its test or training ranges. In addition, without a means to collect more information on the entities conducting business in proximity to its ranges, DOD cannot adequately assess individual transactions as to their potential threat to a range.
Because of the degree to which DOD and other agencies must manage legal, regulatory, and resource constraints in taking action to identify and address any significant encroachment concerns, it is critical that DOD have a complete picture of where it is at greatest risk, what information is needed to fully assess any risks, and what options are available to mitigate or manage risks in a manner that is consistent with DOD and other agencies' missions and resources. To improve the ability of the Department of Defense and the military departments to manage the potential for foreign encroachment near their test and training ranges, we recommend that the Secretary of Defense, in consultation with the military departments, develop and implement guidance for assessing risks to test and training ranges from foreign encroachment in particular, to include: determining the criticality and vulnerability of DOD's ranges and the level of the threat; and a time frame for completion of risk assessments. To identify potential foreign encroachment concerns on federally owned land near test and training ranges, we recommend that the Secretary of Defense collaborate with the secretaries of relevant federal agencies, including at a minimum the Secretaries of the Interior and Transportation, to obtain additional information needed from federal agencies managing land and transactions adjacent to DOD's test and training ranges. If appropriate, legislative relief should be sought to facilitate this collaborative effort. In a written response on a draft of this report, DOD concurred with both recommendations. In addition, the Department of the Interior and the Department of the Treasury provided technical comments, which we incorporated in our report as appropriate. The Department of Justice and the Department of Transportation did not provide any comments. DOD's comments are reproduced in their entirety in appendix III.
We are sending copies of this report to the appropriate congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Secretaries of Interior, Transportation, and Treasury; the Attorney General of the United States; and the Director, Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To determine the extent to which DOD has conducted a risk assessment to identify the existence and extent of any threats posed by foreign encroachment to its domestic air, land, and sea test and training ranges, we reviewed statutes, regulations, and guidance pertaining to federal agencies’ oversight of transactions by private entities on air, land, and sea. We compared DOD’s efforts to key elements of conducting a risk assessment that we previously developed as well as criteria for identifying and protecting critical infrastructure that DOD uses in managing its Defense Critical Infrastructure Program. We also reviewed DOD counterintelligence guidance and intelligence reporting on surveillance threats to DOD activities and facilities. To understand DOD’s concerns related to the potential presence of foreign entities near its test and training ranges, we interviewed appropriate officials from the Office of the Secretary of Defense as well as the Departments of the Navy, Army, and Air Force. 
We interviewed officials from DOD and the services' intelligence agencies, as well as the Defense Intelligence Agency and the Federal Bureau of Investigation, to understand the extent to which any foreign encroachment concerns raised are based on information provided by the intelligence community. We also interviewed appropriate officials from the entities that govern activities on federally managed land in order to understand how and the extent to which DOD works with civilian governing agencies to identify areas of potential foreign encroachment: the Bureau of Land Management and the Bureau of Ocean Energy Management (both within the Department of the Interior), which have responsibility for approving and administering permits and leases for projects on public lands, and the Federal Aviation Administration (within the Department of Transportation), which is responsible for reviewing potential obstructions to aviation safety. Finally, we interviewed officials from the Department of the Treasury, which chairs the Committee on Foreign Investment in the United States (CFIUS). To determine the extent to which DOD has obtained information on specific transactions near test and training ranges that it needs to determine if these transactions pose a threat to the range, we interviewed officials from OSD and the military service headquarters, as well as military department intelligence agencies, the Defense Intelligence Agency, and the Federal Bureau of Investigation. We also interviewed officials from selected federal agencies including the Bureau of Land Management and the Bureau of Ocean Energy Management within the Department of the Interior, the Federal Aviation Administration within the Department of Transportation, and the Department of the Treasury, who all have a role overseeing transactions on federal land surrounding DOD's ranges. We compared DOD's efforts in obtaining information to leading practices on decision making and collaboration from our prior work.
For both objectives, we spoke with officials from selected DOD test and training ranges: the Nevada Test and Training Range, Nevada (Air Force); the Fallon Range Training Complex, Nevada (Navy); Eglin Air Force Base, Florida (Air Force); and White Sands Missile Range, New Mexico (Army). After discussions with DOD officials, we selected locations (1) that included at least one range from each military department, (2) where security encroachment from foreign countries on federally owned land near test or training ranges has been raised as a concern, and (3) where ranges were surrounded by federally controlled land or ocean areas, thus requiring coordination with other federal agencies. At the Nevada Test and Training Range and the Fallon Range Training Complex, we also interviewed officials from the Bureau of Land Management and the Federal Bureau of Investigation, as these agencies have responsibilities, respectively, for approving public-use leases and permits and for domestic counterintelligence efforts near these locations. Because federally owned land is disproportionately located in the western United States, the majority of our visits and discussions were with ranges in that area. The information from these four ranges is not generalizable to all of DOD's domestic ranges. We limited the scope of this engagement to projects in which the federal government plays a role in approving, evaluating, or permitting the project. In addressing our objectives, we contacted officials representing a wide range of organizations (see table 1). We conducted this performance audit from July 2013 to December 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The only formal option in regard to transactions involving foreign companies or entities that accounts for national security concerns is the Committee on Foreign Investment in the United States (CFIUS) process. However, the CFIUS process is limited in two main ways. First, CFIUS only reviews transactions that meet certain criteria. Specifically, the CFIUS process reviews covered transactions, which include any merger, acquisition, or takeover that results in foreign control of any person engaged in interstate commerce in the United States. However, there are also many types of non-covered transactions that could result in a foreign entity having access to or a persistent presence near DOD ranges. These non-covered transactions include start-ups, as well as acquisitions of assets other than an interest in a U.S. company, such as equipment or intellectual property. In addition, foreign purchases or leases of private real property—for business or non-business uses—near installations would not be covered by CFIUS. Second, CFIUS primarily relies on voluntary reporting of transactions by the involved parties to bring covered transactions to its attention, although the President or any member of CFIUS can also initiate a review of a covered transaction should they discover its occurrence. In the absence of voluntary reporting by the parties involved or independent discovery of the transaction by CFIUS, however, covered transactions will not be reviewed. For covered transactions it does review, CFIUS determines the effects of the transaction on national security, which includes consideration of a number of factors, including the potential national security-related effects on United States critical infrastructure.
After a review of a covered transaction is initiated, the Committee evaluates the transaction and then either approves it, approves it with mitigation, or recommends that the President block it. In the case of transactions that the Committee approves with mitigation, the Committee and participating companies typically execute national security agreements that impose some type of limitation on or monitoring of projects, such as limitations on the citizenship of a company’s employees or reporting of visits by foreign citizens. However, according to DOD and installation officials, these agreements are often difficult to enforce. Finally, if the President finds that (1) there is credible evidence that the foreign interest exercising control might take action that will impair the national security and that (2) other laws, in the judgment of the President, do not provide adequate and appropriate authority for the President to protect national security, then the President can direct that the transaction be suspended or prohibited. Such presidential action has been rare. For example, in 1990 the President ordered a foreign-owned company to divest its acquisition of a manufacturing firm producing metal parts and assemblies for aircraft, and in 2012 the President blocked a foreign acquisition of a U.S. energy firm that was constructing a wind-turbine plant near a specialized Navy training facility.

In addition to the contact named above, GAO staff who made key contributions to this report include Maria Storts, Assistant Director; Mark Wielgoszynski, Assistant Director; Leslie Bharadwaja; Simon Hirschfeld; Terry Richardson; Amie Lesser; Erik Wilkins-McKee; Michael Willems; and Richard Winsor.
For many years, DOD has reported that it faces challenges in carrying out realistic training because of the cumulative result of outside influences—such as urban growth and endangered species habitat—that DOD refers to as encroachment. In January 2014, DOD reported concerns with security encroachment by foreign entities conducting business near its test and training ranges. GAO was mandated by the House Armed Services Committee report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2014 to review encroachment on DOD's test and training ranges. This report examines the extent to which DOD has (1) conducted a risk assessment to identify the existence and extent of any threats of foreign encroachment and (2) obtained information needed on specific transactions to determine if they pose a threat. GAO reviewed statutes, regulations, and guidance on federal agency oversight of transactions on federal land. GAO interviewed DOD and service officials, as well as officials from other federal agencies identified by DOD as having a role in such transactions. The Department of Defense (DOD) has not conducted a risk assessment that includes prioritizing test and training ranges based on mission criticality, determining their vulnerabilities to foreign encroachment (i.e., foreign entities acquiring assets, such as mines or energy projects, or otherwise conducting business transactions near test and training ranges), and assessing the degree to which foreign encroachment could pose a threat to the mission of the range. Some DOD officials stated that they are concerned about foreign encroachment, which may provide an opportunity for persistent surveillance of DOD test and training activities. However, DOD has not prioritized its ranges or assessed such threats because, among other things, there is no clear guidance on how to conduct assessments of the risks and threats posed by foreign encroachment. 
Some DOD officials told GAO they have considered conducting such assessments, but DOD has not issued guidance directing the services to conduct these assessments. Officials from the Navy and the Air Force stated that, given the unique nature of each range, it would be difficult to assess the ranges’ criticality. However, Navy officials stated that they expected to issue guidance for conducting risk assessments sometime in 2015. Without clear guidance from DOD for the services to follow in conducting a risk assessment, DOD may not be able to determine what, if any, negative impact foreign encroachment may be having on its test or training ranges. DOD has not obtained sufficient information on commercial activity being conducted near test and training ranges in the level of detail officials say they need—such as whether a U.S.-based entity is owned or controlled by a foreign entity—to determine if specific transactions on federally owned or managed land in proximity to ranges pose a threat to the range. Such information is generally not collected by other agencies with responsibilities for these transactions because, in some cases, legal, regulatory, or resource challenges may prevent them from collecting information that is unrelated to their agencies’ missions. For example, the Federal Aviation Administration collects information about proposed structures that are more than 200 feet in height to support the agency’s mission of maintaining a safe and efficient aerospace system, but does not collect information on the ownership of the companies building the structures because it is beyond the scope of its mission. DOD has identified some potential sources of information, but it has not formally collaborated with other federal agencies on how to gather this information. Leading practices state that agencies can enhance and sustain collaboration by engaging in several practices, including addressing needs by leveraging resources and agreeing on roles and responsibilities.
Without engaging potential sources of information on commercial activities near its ranges, DOD is hindered in its efforts to determine if a project could present a threat to test or training range activities. GAO recommends that DOD (1) develop and implement guidance for conducting a risk assessment on foreign encroachment and (2) collaborate with other federal agencies to obtain additional information on transactions near ranges. In written comments on a draft of this report, DOD concurred with both recommendations.
In 1994–95, Mexico faced a severe financial crisis when a shift in market sentiment led to sudden large capital outflows. Investors also temporarily removed their funds from other emerging market countries, an effect known as contagion. In response to the crisis, Mexico quickly adopted a strong and ultimately successful program of adjustment and reform. To support the program, the Fund approved a loan of $17.8 billion to Mexico—one of the largest loan commitments it had ever made to a country. One of the major reasons cited for the crisis was the lack of timely, reliable, and publicly available economic, financial, and sociodemographic data for Mexico. Beginning in 1996, to correct this weakness, the Fund created data standards to guide countries in disseminating better data to the public. However, as we reported in 1997, the Fund needed to address a number of other financial, economic, and political challenges, in addition to data limitations, to better anticipate, prevent, and resolve financial crises. Before the Fund could fully address these challenges, the Asian financial crisis of 1997–98 occurred. After the Asian financial crisis, the Fund assessed the effects of its responses to the crisis and reassessed its role in safeguarding the stability of the international financial system, including rethinking its core mission, operations, and lending activities. The Fund also recognized that it needed to improve its ability to anticipate financial crises; monitor countries’ activities; and increase public awareness, particularly that of the investment community. Recognizing its inability to anticipate past crises, the Fund instituted a quarterly vulnerability assessment framework in 2001 to identify countries that may be susceptible to crisis. To improve its ability to prevent future crises, the Fund and the World Bank in 1999 began performing joint assessments of member countries’ financial sectors to help identify and monitor existing and potential weaknesses.
In addition, the Fund and the World Bank began to work with countries to promote adherence to voluntary standards to reassure the international community that the countries’ policies and practices conform to standards and codes of good practice. These include standards to improve transparency in government economic data; fiscal, monetary, and financial policies; and guidelines on strengthening the financial and corporate sectors. The Fund acknowledges that it would be almost impossible to anticipate or prevent all crises. According to the Fund, past efforts to resolve financial crises during the 1990s were lengthy and very costly to debtor countries. The Fund is encouraging the adoption of agreements that would allow a quicker, more orderly, and predictable restructuring of countries’ debts. The Fund’s ultimate goal is to maintain investor confidence and stability in the international financial system. Figure 1 shows the Fund’s key initiatives for better anticipating, preventing, and resolving financial crises. Treasury, through the U.S. Executive Director to the Fund, has the lead responsibility for monitoring the IMF’s progress in addressing these issues. In May 2001, the IMF implemented a new vulnerability assessment framework for emerging market countries to strengthen the Fund’s ability to anticipate financial crises. This framework brings together country-specific knowledge and financial expertise within the Fund to better identify weaknesses in emerging market economies that could lead to a crisis. Although the new vulnerability assessment framework is more comprehensive than the Fund’s previous efforts, it is new and still evolving. It is too early to tell whether this new framework will successfully anticipate future crises. The new framework uses the Fund’s major forecasting tools, the WEO and the EWS, which have not performed well in anticipating prior crises.
The WEO has not successfully anticipated past financial crises, and the Fund’s EWS models have had a high false alarm rate, having predicted many crises that did not occur. The forecasting of crises has been historically difficult for all forecasters due to complex underlying factors, including concerns about the reliability of important macroeconomic data on emerging market countries. The Fund has attempted for many years to identify countries vulnerable to financial crisis; however, its existing tools were insufficient to anticipate the financial crises of the 1990s and led the Fund in 2001 to develop the new vulnerability assessment framework. This comprehensive framework brings together detailed, country-specific knowledge and financial expertise of various IMF departments, including those with regional, macroeconomic, or forecasting expertise. The new framework monitors the vulnerability of key emerging market countries that borrow significantly from international capital markets. This information is provided in a quarterly report on crisis vulnerability. Fund staff report monthly on countries identified as vulnerable and provide more frequent ad hoc analyses during volatile periods. To conduct the vulnerability assessment, the Fund integrates six independent inputs that represent the analyses and perspectives of different departments of the Fund (see figure 2).

World Economic Outlook: The WEO is a twice-yearly publication that provides analyses of global economic developments. Through the WEO, the IMF provides current- and following-year forecasts for countries and regions of key economic variables such as economic growth, inflation, and the current account. According to Fund staff, WEO forecasts use the best available information and represent the most realistic estimate of key economic variables, including those that could help anticipate a financial crisis.
The IMF uses these forecasts as an input in the vulnerability assessment to gauge the impact of unanticipated adverse changes in the global environment. For example, the WEO forecasts for selected countries may be recalculated to examine the impact of sudden increases in oil prices or an unanticipated recession in the advanced economies.

Early Warning System models: The Fund uses internal and private sector EWS models that compute the probability of a country having a crisis over the following 12 to 24 months. The models examine a series of vulnerability indicators, including whether a country’s real exchange rate is overvalued or whether the country has significantly depleted its foreign exchange reserves. The output of the EWS models helps the Fund focus on specific areas of vulnerability. For example, if one variable, such as the exchange rate, signals a crisis, the Fund would more closely examine related components of the vulnerability assessment, such as a country’s external financing requirements.

Country external financing requirements: On a quarterly basis, the Fund produces an internal assessment of a country’s ability to meet its total external debt obligations and estimates whether that country has sufficient foreign exchange to avoid a crisis. This assessment includes estimating a country’s ratio of foreign exchange reserves to short-term external debt, estimating the magnitude of its current account deficit, and considering whether and how it manages its exchange rate.

Market information: On an ongoing basis, the Fund analyzes most countries’ cost of borrowing on the international market and whether a country is paying a higher interest rate than similar countries. The Fund uses this information to provide an internal analysis of the private sector’s expectations of a country’s likelihood of default on its foreign debt and to identify possible evidence that financial problems are spreading across countries.
Financial sector vulnerability: The Fund assesses the strengths and weaknesses of a country’s financial sector, including the banking system. IMF staff evaluate the financial sector’s vulnerability to changes in market conditions, such as fluctuations in interest and exchange rates. Although the detailed results of these assessments are used internally by the Fund, summaries of key findings are frequently published.

Country expert perspectives: IMF country experts examine the data produced by the above analyses, supplementing those results with country-specific details such as the political risks of implementing certain government policies or the relevance of certain market information.

Until 1999, the Fund used the WEO as the primary forecasting tool to help identify country risks and vulnerability to crises. The new vulnerability framework, which has been in operation for about 2 years, is a more comprehensive process. According to the Fund, the quarterly integration of detailed information from country experts who continuously monitor developments in their countries is a great strength of the new vulnerability framework. Effective analysis by Fund staff of the framework’s six elements could better allow the Fund to give timely advice to authorities in vulnerable countries. It is too early to tell whether this framework will be successful in anticipating future crises. Below, we assess the performance of the WEO and the EWS models, the Fund’s primary tools for anticipating crises prior to the implementation of the new framework in May 2001. The new vulnerability assessment framework uses the Fund’s two major forecasting tools—the WEO and the EWS models—which have not performed well in anticipating prior crises. The WEO has not successfully anticipated the severe financial crises of the past decade. The Fund’s EWS models have had a high false alarm rate, having predicted many crises that did not occur.
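The false alarm rate discussed above can be made concrete with a minimal signals-style check: an indicator that crosses a threshold issues a crisis signal, and a signal counts as a false alarm if no crisis follows within the warning window. This is an illustrative sketch only; the indicator values, threshold, window, and crisis dates below are hypothetical, and the code stands in for, rather than reproduces, the Fund’s EWS models.

```python
# Minimal signals-style early-warning sketch. A signal is issued when the
# indicator exceeds THRESHOLD; it is a false alarm if no crisis occurs
# within the next WINDOW periods. All data are hypothetical.

THRESHOLD = 1.5   # signal when the indicator exceeds this value
WINDOW = 2        # periods ahead in which a crisis "validates" a signal

# Hypothetical monthly indicator values (e.g., exchange rate overvaluation).
indicator = [0.8, 1.7, 1.2, 1.9, 2.1, 0.9, 1.6, 1.8, 0.7, 2.0]

# Periods (0-indexed) in which a crisis actually occurred.
crisis_periods = {5}

signals = [t for t, x in enumerate(indicator) if x > THRESHOLD]

def is_false_alarm(t):
    """A signal at period t is a false alarm if no crisis falls in (t, t+WINDOW]."""
    return not any(c in crisis_periods for c in range(t + 1, t + WINDOW + 1))

false_alarms = [t for t in signals if is_false_alarm(t)]
rate = len(false_alarms) / len(signals)

print(f"signals at periods: {signals}")
print(f"false alarms: {false_alarms}")
print(f"false alarm rate: {rate:.0%}")
```

In this toy series, most signals are not followed by a crisis, which is the pattern the Fund reported for its models: high false alarm rates alongside a smaller share of missed crises.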
Severe financial crises are characterized by a number of negative economic outcomes, including large declines in gross domestic product (GDP), also known as recessions. We found that the WEO had a poor record of forecasting such declines in GDP, tending instead to follow existing positive economic growth trends. In addition, the IMF indicates that the current account is a key variable in explaining financial crises. We found that, for the current account, the accuracy of 75 percent of WEO country forecasts was worse than simply assuming that next year’s value is the same as this year’s. The WEO’s difficulty forecasting both GDP and the current account demonstrates poor performance in anticipating the severe financial crises of the past decade.

WEO Has Performed Poorly at Forecasting Recessions

In most cases, countries experiencing a financial crisis also experience a severe recession in which their GDP declines significantly. Although most recessions do not involve a major financial crisis, successful anticipation of recessions, especially the most severe ones, would greatly increase the likelihood of anticipating an impending financial crisis. However, we found that the WEO has a poor track record of forecasting recessions, including those directly associated with a financial crisis. The WEO did not forecast most of the recessions that occurred in emerging market countries in the last 10 years. During the 1991–2001 forecast period, 134 recessions occurred in all 87 emerging market countries. We found that the WEO correctly forecast only 15, or 11 percent, of those recessions, while predicting an increase in GDP in the other 119 actual recessions. The WEO is considerably more likely to forecast a recession when a recession has occurred in the prior year. However, a prior-year recession did not occur in two-thirds of the recessions that the WEO failed to forecast.
Thus, WEO forecasts generally follow the existing growth trend within a country, making it unlikely that the WEO would correctly forecast an unanticipated recession. Furthermore, this tendency to follow the current growth trend makes it especially difficult for the WEO to anticipate a financial crisis because nearly all of the crisis-related recessions of the last decade occurred in years following positive economic growth. Further illustrating this point, the WEO was unable to anticipate the large declines in GDP that corresponded to 14 major financial crises of the past decade, including the Mexican and Asian financial crises (see table 1). The WEO’s failure to identify these recessions demonstrates that it did not anticipate the corresponding financial crises. In these 14 cases, we found that the WEO forecast strong economic growth, averaging a 4.4 percent increase in GDP, despite large declines in actual GDP in 13 of the 14 cases. In fact, actual GDP declined by an average of 5.7 percent during the first full year of these 14 financial crises. Indonesia presents the most startling disparity: the WEO forecast growth of 6.2 percent in its GDP, when in fact Indonesia’s GDP declined by almost 14 percent in the first full year of its financial crisis.

WEO Does a Poor Job in Forecasting the Current Account

According to the Fund, a country’s current account (primarily exports minus imports of goods and services) is a key variable in anticipating crises. Crises are associated with problems of external financing that result from a country having difficulty obtaining foreign exchange. Since exports are an important source of foreign exchange for developing countries, projections of a country’s current account balance provide information about the country’s ability to earn foreign exchange and to service its external debt. According to the Fund, an unsustainably large current account deficit can contribute to or precipitate a crisis.
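The no-change benchmark used in these comparisons—assuming next year’s value will equal this year’s—can be sketched in a few lines. This is an illustrative sketch only: the data and the use of mean absolute error as the accuracy measure are assumptions for the example, not details drawn from GAO’s methodology.

```python
# Illustrative comparison of a forecast series against the naive
# "no-change" (persistence) benchmark. All figures are hypothetical.

def mean_abs_error(predictions, actuals):
    """Average absolute gap between predictions and outcomes."""
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

# Hypothetical current account balances (percent of GDP), years t1..t5.
actual = [-2.0, -3.5, -1.0, -4.2, -2.8]
# Hypothetical published forecasts for the same years.
forecast = [-1.0, -0.5, -2.5, -0.8, -0.5]

# Naive benchmark: predict that each year repeats the prior year's value,
# so both series are scored over years t2..t5.
naive = actual[:-1]

forecast_mae = mean_abs_error(forecast[1:], actual[1:])
naive_mae = mean_abs_error(naive, actual[1:])

print(f"forecast MAE: {forecast_mae:.2f}")  # lower is better
print(f"naive MAE:    {naive_mae:.2f}")
if naive_mae < forecast_mae:
    print("the no-change rule beats the forecast on this series")
```

A forecast that cannot beat this benchmark adds no information beyond the most recent observation, which is why the benchmark is a common floor in forecast evaluation.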
We found that WEO forecasts for the current account were inaccurate most of the time. Our analysis for the 87 emerging market countries shows that, for more than 75 percent of the countries, the WEO current account forecasts were less accurate than if the Fund had simply assumed that next year’s current account would be the same as this year’s. The results are even more dramatic for G-7 countries; a forecast of no change was a better predictor than the WEO forecast for six of the seven countries. This demonstrates that, even in stable economies with excellent data, the WEO has done a poor job of forecasting this key crisis anticipation variable. (See appendix III for a more detailed explanation of our methodology and findings.) Since 1999, the Fund has analyzed the results from internal and private sector EWS models in its crisis anticipation efforts. The Fund’s internal efforts focused on two EWS models to systematically identify countries vulnerable to crises: the Kaminsky, Lizondo, and Reinhart (KLR) model, which monitors a set of 15 monthly variables that signal a crisis whenever any cross a certain threshold; and the Developing Country Studies Division (DCSD) model, which uses five variables to compute the probability of a crisis occurring in the next 24 months. The Fund’s models use a variable-by-variable approach that allows economists to determine which variables are signaling the crisis. Internal assessments of the Fund’s EWS models show that they are weak predictors of actual crises. While the models worked reasonably well in anticipating Turkey’s recent financial crisis, they did not successfully anticipate Argentina’s financial crisis in 2002. According to the Fund, the models’ most significant limitation is their high false alarm rate; that is, they predicted many crises that did not occur. In about 80 percent of the cases where a crisis was predicted over the next 24 months, no crisis occurred.
Furthermore, in about 9 percent of the cases where no crisis was predicted, there was a crisis. Financial crises have been historically difficult to anticipate because of a number of complex underlying factors. Economic outcomes are often influenced by unanticipated events such as conflicts and natural disasters. Many factors, in addition to weaknesses in a country’s financial structure, can lead to a crisis. These include economic disturbances, such as an unanticipated drop in export prices, political events, and changes in investor sentiment leading to sudden withdrawals of foreign capital. Furthermore, data may be inadequate, particularly in developing countries, where data are often not timely and are of poor quality. Forecasters consistently fail to foresee crises and recessions. Forecasts produced by private sector economic forecasters, governments, and multinational agencies, including the IMF and the Organization for Economic Cooperation and Development, routinely fail to foresee the coming of crises and recessions, and often fail to outperform the naive model, which simply assumes that next year’s outcome will be the same as this year’s. This is true even for evaluations of recent U.S. forecasts of GDP and inflation. Our review of a number of forecast evaluation studies confirms that the inability to predict recessions is a common feature of growth forecasts for both industrialized and developing countries. The studies also showed that forecast accuracy improves as the time horizon gets shorter and that there is little difference in forecast accuracy between private sector and WEO forecasts. In the late 1990s, the Fund and the World Bank began implementing two crisis prevention initiatives designed to assist four parties: IMF staff, World Bank staff, member country governments, and private sector participants.
The first initiative, the financial sector assessments, provides reports on aspects of member countries’ financial sectors such as banking systems and crisis management capacity. The second, the standards initiative, assesses member countries’ adherence to 12 standards in areas such as banking supervision and economic data dissemination. Parties’ use of the information provided by these two initiatives has been mixed, and several significant challenges remain. Fund staff frequently incorporate information from the assessments into their policy advice when assessments are available; however, assessments have not been completed in some important emerging market countries primarily because participation is voluntary. Bank staff’s use of the assessments to inform country development assistance programs is also affected by gaps in the completion of some assessments and by borrower countries’ competing development demands. Member country governments sometimes use the assessments in prioritizing their reform agendas but often find the reforms too difficult to implement. Some private sector participants find the published reports untimely, outdated, and too dense to be useful in making investment decisions. The IMF and the Bank acknowledge that the initiatives cannot prevent all crises because recommended reforms require many years to be fully implemented, and crises can be caused by factors outside the scope of the reforms. In the wake of the Mexican and Asian financial crises of the 1990s, the Fund and the World Bank became increasingly aware of the importance of transparent financial data and policies, stronger financial systems, and better-functioning markets as a complement to member country governments’ sound macroeconomic policies. 
Fund evaluations acknowledge that the institution failed to collect information that could have enabled it to detect financial and corporate sector vulnerabilities and to provide appropriate policy advice to the affected countries before the crises occurred. In response, in the late 1990s, the Fund and the Bank jointly launched two initiatives to reduce the long-term likelihood of financial crises. The first initiative, the Financial Sector Assessment Program (FSAP), consists of in-depth assessments of key elements of member countries’ financial sectors. These elements include the structure of financial markets, financial systems’ response to changes in key variables such as exchange rates, legal arrangements for crisis management, and the quality of financial sector supervision. The second initiative, the Reports on the Observance of Standards and Codes (ROSC), consists of assessments of member countries’ adherence to 12 standards related to transparency in government policy making and operations, financial sector regulation, and corporate sector practices (see appendix V). Building on earlier efforts to assess transparency, in 1999 the IMF and the Bank began conducting joint assessments of observance of standards related to financial sector regulation, covering areas such as banking supervision and securities regulation. The Bank began performing assessments of standards related to corporate sector practices, including private sector accounting rules and corporate governance principles, in 2000. Some transparency, financial sector, and corporate sector standards may be assessed under the FSAP. Country participation in both initiatives is voluntary. The Fund and the Bank initially considered making participation in the ROSC assessments mandatory for member countries after determining that the Fund’s Articles of Agreement could allow such a requirement.
IMF or World Bank staff lead FSAP and ROSC assessment teams, with participation by experts from national central banks and supervisory agencies and standard-setting bodies such as the Basel Committee and the International Organization of Securities Commissions (IOSCO). Before undertaking an FSAP or ROSC, the Fund and the Bank work with country governments to choose areas on which to focus. During the assessment process, FSAP and ROSC teams conduct at least one in-country visit, allowing team members to work with government officials from the finance ministry, the central bank, and regulatory bodies to collect information for the assessment. For example, an FSAP team in Russia analyzed financial information for the largest banks and single largest corporation to determine how changes in economic variables such as oil prices might affect the banking system. The South Korean ROSC team interviewed government officials at financial sector regulatory entities and private sector representatives to determine how closely regulatory practices conform to standards and to identify weaknesses that could put the financial sector at risk. The two initiatives provide information to assist four parties that play a role in crisis prevention: the Fund, the World Bank, member country governments, and private sector investors. In-depth information on member countries’ financial sectors and adherence to standards of good practice is intended to help the IMF and the Bank fulfill their missions. Fund staff identify countries’ vulnerabilities and develop appropriate advice to redress them; Bank staff identify long-term financial sector development needs and formulate relevant lending and nonlending responses. Member countries can use these assessments to help prioritize reform agendas and win domestic support for difficult policy decisions that may make their financial sectors and institutions more resistant to crisis. 
The Fund and the Bank often provide technical assistance to help governments build capacity to implement reforms. The financial crises of the 1990s also raised awareness of the private sector’s role in crisis prevention. Thus, the Fund and the Bank expect the assessments to help private sector participants make sounder investment decisions, thereby reducing volatility in capital markets. The use of FSAP and ROSC assessments in crisis prevention efforts is mixed, and significant limitations remain. Fund staff use the assessments, when available, as inputs for the policy advice they provide to member country governments. However, the Fund lacks crucial information about vulnerabilities to financial crisis because some major emerging market countries have not participated in the assessments. World Bank staff’s use of the assessments to inform development assistance priorities is also affected by these gaps in participation and by borrower countries’ competing development needs. Many member country governments face limitations in using assessments to make policy decisions because the reforms recommended in the assessments are often difficult to implement. Finally, some private sector participants find assessments of limited use because they are untimely, outdated, and dense. The IMF uses FSAP and ROSC assessments, when available, as inputs for the policy advice it provides member country governments. According to Fund officials, these assessments highlight issues such as weak banking supervision or high levels of debt held in foreign currency that could make countries vulnerable to crisis. The assessments also provide recommendations to address these issues. The findings and recommendations inform the discussions of policy issues that Fund officials have with member country authorities during Article IV consultations. 
For example, when an FSAP was performed in Mexico, Mexican authorities had begun replacing a system where the government fully insured all bank deposits with one that covers deposits up to a certain limit. The FSAP team was concerned because this reform was undertaken before Mexico had developed a well-defined framework for closing unprofitable banks. Without a clear framework for bank closures, the introduction of limited deposit insurance could damage depositor confidence in the banking system and precipitate a banking crisis. According to Fund officials, the FSAP team and Mexican authorities discussed the need to create such a framework, and a subsequent Article IV mission reviewed the government’s progress in this area. In Poland, an FSAP team discovered that Polish households and small businesses had high levels of debt held in foreign currency. The team was concerned that a depreciation of Polish currency could raise the cost of these loans and cause widespread repayment difficulties, which could in turn lead to a banking crisis. FSAP team members raised this issue with Polish central bank authorities and followed up again during the next Article IV consultation. In both countries, government officials followed the Fund’s advice and implemented reforms. The Mexican government began developing a framework for closing banks, and Poland’s central bank established a team to monitor household and small business debt. Since 1999, FSAP assessments have been conducted in more than 40 member countries and ROSC assessments in about 90 member countries. However, we found that assessments have not been completed for some major emerging market countries, limiting the Fund’s awareness of crisis vulnerabilities in certain countries. Appendix VIII contains a record of country participation in and publication of FSAP and ROSC assessments. 
Fund and Bank staff encourage participation in FSAP and ROSC assessments by countries whose economies have worldwide or regional implications or have known vulnerabilities to a financial crisis, but officials acknowledge that some governments have persistently resisted their efforts. According to our analysis, between 1999 and 2003, 45 percent of 33 major emerging market countries participated in an FSAP. However, the Fund has not performed FSAPs in important countries such as China and Thailand because their authorities have not agreed to participate. These gaps in participation limit the Fund’s ability to develop policy advice based on in-depth knowledge of their financial sectors. According to the Fund, the Mexican and Asian financial crises were caused, in part, by vulnerabilities in areas covered by the ROSCs. Our analysis found gaps in participation in assessments of several key standards that the Fund identifies as contributing factors to past crises (see figure 3). For example, only one-third of major emerging market countries have participated in assessments of their adherence to standards for dissemination of economic and financial data. About half have participated in the fiscal, monetary, and financial policy formulation assessments and the banking supervision assessment. In addition, Fund documents point to limited progress in assessing adherence to the four World Bank-led corporate sector standards (accounting, auditing, corporate governance, and insolvency regimes), which play a key role in the effective operation of domestic and international financial systems. Fewer than one-third of the 33 major emerging market countries have participated in one or more assessments related to accounting and auditing. The Fund asserts that its delayed response in preventing or mitigating the Mexican and Asian crises was partially caused by insufficient information on these vulnerabilities. 
For example, according to the IMF, the Mexican government’s publicly available data was outdated and incomplete in 1993–94, which contributed to significant delays in responding to the country’s excessive indebtedness. The Fund also was unaware of some Asian countries’ unsound corporate accounting practices, which contributed to the Asian financial crisis. Continued participation gaps in these assessments suggest that the Fund still lacks crucial information about some countries’ potential vulnerability to crisis. The World Bank acknowledges the importance of FSAP and ROSC assessments in formulating its financial sector development programs, but limited participation in corporate sector assessments (described earlier) affects the Bank’s ability to respond to weaknesses in borrower countries’ financial sectors. According to the Bank, country participation in corporate sector assessments has been lower than in areas related to transparency and financial sector regulation because the Bank has experienced delays in finalizing standards and methodologies for evaluating the corporate sector. For example, the methodologies for performing assessments of the accounting and auditing standards were not finalized until October 2000. Bank officials acknowledge that even when assessments are available, Bank staff do not always incorporate the issues raised as a key priority in formulating the Bank’s country development assistance programs. In justifying their limited prioritization of FSAPs and ROSCs, Bank officials cited competing development demands and timing issues. First, Bank officials stated that they must balance borrower countries’ financial sector reform needs with other demands for development assistance. Most borrower country governments have multiple concerns, and Bank staff may determine that the Bank’s resources will have more impact in areas other than financial sector development. 
Second, Bank officials cited the scheduling of the FSAP and ROSC assessments as a reason for their limited use since the timing of many assessments does not coincide with the Bank’s preparation of Country Assistance Strategies. Although member country authorities sometimes use FSAP and ROSC assessments to inform policy decisions, reforms recommended in the assessments are often difficult to implement. Some member country governments have faced obstacles to implementing reforms, including political opposition, legal constraints, and lack of technical capacity. For example, IMF officials stated that political opposition has limited the South Korean government’s progress in eliminating extrabudgetary funds, a key recommendation of the fiscal transparency ROSC. Extrabudgetary funds diminish transparency because they are exempt from rules that require scrutiny and prioritization of expenditures for most of South Korea’s budget. Fund officials cited Peru as a case where legal constraints delayed reform efforts. The FSAP and banking supervision ROSC found that protecting bank supervisors from the political pressures of the powerful bankers’ lobby would strengthen Peru’s banking supervision. According to Fund officials, existing legislation precluded awarding supervisors greater independence, and passage of new legislation was delayed. In Russia, limited technical capacity interfered with the government’s ability to implement reform recommendations. For example, the FSAP team reviewed the government’s proposal to stimulate competition in the banking sector by introducing a deposit insurance system for household deposits. However, Fund staff noted that Russia’s bank supervisory agency lacks the capacity to implement a deposit insurance system for a large number of banks. The Fund claims that private sector participants increasingly use the results of FSAPs and ROSCs to inform investment decisions and risk management. 
However, representatives of major international investment firms and ratings agencies we interviewed stated that the reports were untimely, outdated, and too dense to be useful. For example, several respondents indicated that delays in publishing ROSC assessments reduced their usefulness. Some private sector participants stated that ROSC reports and FSAP summaries, known as Financial System Stability Assessments (FSSAs), should be published within 6 months of performing the assessments. However, our analysis of the 58 ROSC reports published for major emerging market countries found that in one-third of the cases, 9 months or more elapsed between assessment and publication. Several private sector participants we interviewed stated that outdated ROSC reports are unreliable for decision making. The Fund acknowledges that assessments must be current for private sector participants to use them. According to Fund data, 13 countries have published an update of at least one ROSC module. However, IMF officials estimate that, of the more than 40 FSAPs performed to date, only 4 have been fully updated. Some private sector participants also stated that FSSA and ROSC reports are not clearly written. Representatives of one multinational investment bank stated that the assessments are written in a way that is difficult to understand, which limits the reports’ usefulness for investment decisions. While these interviews were limited in number and may not be representative of all private sector participants, they do provide an indication of the problems these individuals may currently have in using FSAPs and ROSCs. Fund and Bank outreach sessions and a 2002 Fund survey corroborated our findings on private sector participants’ difficulties in using ROSC assessments. The Fund reports that private sector participants place high priority on timely publication and frequent updates of ROSC assessments. 
For example, several participants observed that ROSC reports for Argentina had not been updated since their publication in 1999. Moreover, respondents to the Fund’s survey commented that ROSC assessments should state more clearly the deficiencies in a country’s adherence to a standard. In a March 2003 review of the standards initiative, the Fund and the Bank concluded that ROSC reports would be more useful if they stated the main findings and their significance clearly and prioritized recommendations more explicitly. The Fund and the World Bank acknowledge that FSAP and ROSC assessments cannot prevent all crises because recommended reforms may require many years to be fully implemented and because crises can be caused by factors outside the reforms’ scope. For example, Argentina participated in four ROSC assessments in 1999 to improve economic data dissemination; banking supervision; and transparency in the formulation of fiscal, monetary, and financial policies. According to senior IMF officials, the Argentine government followed many of the recommendations generated by these assessments, but their actions did not address vulnerabilities related to weak fiscal policy and a fixed exchange rate regime that contributed to Argentina’s 2001 crisis. Fund officials cite Turkey as another example of a country that made considerable progress in improving transparency and data provision based on reforms recommended by the fiscal transparency and economic data dissemination ROSC assessments of 2000–2001. However, according to the Fund, these reforms could not have prevented Turkey’s 2001 crisis, which originated with declines in its exchange rate. Fund officials assert that the current process for renegotiating the terms of member countries’ loans with external private sector creditors is lengthy and costly. 
In 2001, the Fund began considering the SDRM, an international legal framework that would allow a majority of a country’s external creditors to approve a restructuring of most private sector loans. The Fund is also encouraging members to include CACs in bonds, which would allow a majority of bondholders to renegotiate the terms of that bond. Although some elements of both approaches are acceptable to the private sector and governments, a number of political, legal, and technical challenges stand in the way of implementing the SDRM; it seems unlikely that these issues will be resolved in the immediate future. While private sector officials expect that many restructurings need only involve the private sector and the debtor country, under some circumstances, voluntary debt restructurings will not adequately resolve all financial crises. These officials stated that, in those cases, the Fund should provide short-term loans to eligible countries to help fill their external financing gaps. However, concerns have been raised by some financial experts and government representatives that such Fund loans have the potential to increase the probability of future crises. In response to these concerns, the Fund has clarified and strengthened its policy of lending into crisis situations. According to the Fund, countries facing severe liquidity problems often go to extraordinary lengths to avoid renegotiating or restructuring the terms of their loans. They do so because, in the past, restructuring damaged the economy and the banking system of participating countries. In some cases, even when a voluntary restructuring process is initiated, individual creditors may hold out for the best possible terms or sue in an attempt for better terms. Additionally, countries believe that creditors also may be unwilling to make future loans if they default on their existing debt. 
The SDRM approach is an attempt to create a more orderly, predictable, and comprehensive restructuring process and to lower the costs of restructuring for both the debtor and creditors. The approach sought to reduce the duration of the restructuring process from years to months and to provide incentives to restructure debt before default to better protect debtor and creditor interests. In the case of debtors, the Fund maintains that an orderly restructuring process could reduce the likelihood of a reduction in future capital flows. For creditors, it could provide more favorable repayment terms from the restructured debt. The SDRM is a proposed international legal framework that would allow a member country to declare its debt unsustainable and invoke a process to restructure most of its external private sector loans. A specified majority of the country’s external creditors would vote to approve the terms for restructuring, which would bind all eligible private sector creditors. The framework is designed to increase the incentives for the Fund’s member countries and their creditors to reach a rapid and collaborative agreement on restructuring unsustainable debt. A number of political, legal, and technical challenges stand in the way of implementing the SDRM, and it seems unlikely that these issues will be resolved in the immediate future. According to the Fund, successful implementation of SDRM will require overcoming certain political constraints. The SDRM could be put into practice either by countries adopting a new international treaty or by amending the Fund’s Articles of Agreement. Both options would be difficult to implement since a number of countries have indicated opposition to the SDRM. 
The draft framework recommended that the SDRM be created through an amendment to the Fund’s Articles of Agreement because the SDRM is closely related to the role already assigned to the Fund under the Articles in the resolution of its members’ external financial obligations. However, the Fund acknowledged that, given the opposition of some countries, changing the Articles could be difficult to achieve since it requires acceptance by three-fifths of the members having 85 percent of the total voting representation. The United States, for example, could unilaterally veto any proposed amendment to the Fund’s Articles given its 17 percent voting representation. A key legal challenge to the implementation of SDRM is the need for most countries to change their domestic laws to conform to the requirements of any new Fund articles. Before a member country can vote to accept an amendment, it must take all the steps needed under its own domestic law to ensure that the amendment will be given full force and effect under that law. However, some Fund members have raised concerns over whether the domestic legal systems of some member countries could accommodate a new legal framework that applied to preexisting claims. The proposed SDRM approach also faces technical challenges. For example, the proposed framework does not specify how the claims of official bilateral creditors and some guaranteed domestic debts would be treated. The Fund is consulting with the Paris Club on how the Club’s practices may be modified to better facilitate coordination between official bilateral and private creditors in a debt restructuring process. CACs are terms in individual bonds that permit a specified majority of sovereign bondholders to agree on a debt restructuring that would bind all holders of that bond. 
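The majority-action mechanics just described can be sketched in a few lines. The 75 percent approval threshold and the creditor holdings below are illustrative assumptions; the report itself does not specify a particular threshold.

```python
# Sketch of a CAC majority-restructuring vote: if holders of a qualified
# majority of a bond's principal approve new terms, all holders are bound.
# The 75 percent threshold and the holdings below are illustrative.

APPROVAL_THRESHOLD = 0.75  # assumed qualified majority of principal

def restructuring_binds_all(holdings, approvals):
    """holdings: principal held per creditor; approvals: creditors voting yes."""
    total = sum(holdings.values())
    approving = sum(holdings[c] for c in approvals)
    return approving / total >= APPROVAL_THRESHOLD

holdings = {"fund_a": 400, "fund_b": 350, "holdout": 250}  # $ millions, hypothetical

# A qualified majority binds the holdout despite its objection...
print(restructuring_binds_all(holdings, {"fund_a", "fund_b"}))  # True
# ...whereas a minority of principal cannot impose new terms.
print(restructuring_binds_all(holdings, {"fund_a"}))            # False
```

Without such clauses, each creditor must consent individually, which is why, as noted above, individual creditors can hold out or sue in an attempt to obtain better terms.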
In June 2002, the Fund’s Executive Board endorsed the use of certain CAC provisions in new bonds and agreed to encourage member countries to incorporate CACs into their sovereign bonds in future restructurings. Inclusion of these clauses into new bonds would be voluntary. The Fund views CACs and SDRM as complementary instruments in resolving future financial crises. According to the Fund, the existence of CACs in certain bond agreements inspired the development of the SDRM framework. Although the Fund has not created its own CAC framework, it has endorsed the use of two key features from a G-10 Working Group Report and an Industry Associations draft proposal. These features include the following:

- Majority restructuring provisions enable a qualified majority of bondholders to bind all holders of a particular bond to the financial terms of a restructuring, both before and after a default. Although majority restructuring provisions have generally been included in bonds governed by the laws of the United Kingdom or Japan, they have not been included in bonds governed by the laws of the United States or Germany.

- Majority enforcement provisions prevent a minority of creditors from pursuing disruptive legal action after a default and before reaching a restructuring agreement. Many international sovereign bonds governed by both U.S. and English law contain these provisions. Specifically, the Fund supports the requirement that (1) an affirmative vote of a minimum percentage of bondholders is necessary to approve claims following a default and (2) a specified majority of bondholders can reverse an approval of a claim that has already occurred.

In early 2003, Mexico became the first emerging market country to issue a public, SEC-registered global bond with CACs under New York law. Previous issues under New York law by Lebanon, Qatar, and Egypt had been placed privately to institutional investors and included only a limited range of CACs. 
For example, the bonds issued by Egypt and Qatar included a very limited form of majority enforcement provisions, while Lebanese bonds did not contain them at all. Since Mexico’s successful issue, Brazil, South Africa, and Korea have issued bonds with CACs. Uruguay included CACs in the bonds resulting from its debt exchange. The details of the Brazil, South Africa, Korea, and Uruguay bond provisions were not available at the time we conducted our review. Some countries criticize CACs because they would only apply to new bond offerings and not existing bonds. Accordingly, during a restructuring of a country’s bond obligations, not all creditors would be bound by the CAC provisions. Borrowing countries also contend that inclusion of CACs in bond offerings could suggest to creditors that countries anticipate having difficulty repaying their loans. In response, creditors may charge a higher interest rate. However, a May 2000 academic study compared interest rates on bonds issued in the United States (where CACs are not used) with those issued in the United Kingdom (where CACs are used) and found that CACs do not contribute to higher rates in United Kingdom bonds. To date, officials from the private sector, including lenders, have expressed preference for continuing the current voluntary process, which only involves the private sector and borrowing countries, in the efforts to restructure sovereign debts. Many private sector officials we interviewed oppose the proposed SDRM approach and the Fund’s attempts to integrate CACs into new bond issues, partly because they would interfere with the normal bargaining process. They maintain that the voluntary approach to the restructurings that took place from 1998 to 2001 in Ecuador, Pakistan, Russia, and Ukraine was successful. Private sector officials assert that these and other experiences have worked well enough, and that a substantive change in current market practice is unnecessary. 
In contrast to the Fund’s assertion that new approaches are needed to make restructurings shorter and less expensive, private sector officials note that most recent voluntary restructurings successfully concluded in 1 year or less and that creditor holdouts or litigation did not significantly delay the restructurings. While private sector officials expect that many restructurings need only involve the private sector and the debtor country, under some circumstances, voluntary debt restructurings will not adequately resolve all financial crises. In those cases, they said the Fund should provide loans to eligible countries to help fill their external financing gaps. Such loans would assist the restructuring process and facilitate efforts at implementing necessary reforms. However, large Fund loans, such as those given during the Asian financial crises, have received substantial criticism from financial experts and government representatives, including U.S. government officials. One concern is that the possibility of receiving substantial financial assistance provides an incentive for debtor countries to adopt unsustainable economic policies to forestall needed reform. Another concern is that these large loans may encourage private sector creditors to continue providing large capital flows to countries with unsustainable economic policies because these otherwise risky investments have the potential of being “bailed out” by future Fund loans. This condition is referred to as “moral hazard.” According to these critics, efforts to help resolve existing financial crises through large Fund loans may increase the probability of future crises due to these two concerns. The Fund has advocated the SDRM framework and CACs to replace the current voluntary approach, partially in response to concerns over the potential adverse effects of its lending. 
To reduce the risk that Fund loans would increase the probability of future financial crises, the Fund clarified and strengthened its policy of lending in crisis situations. The Fund has clarified elements of its Lending into Arrears Policy and strengthened its criteria for requesting large short-term loans under the Supplemental Reserve Facility (SRF). Since 1997, nine countries have received loans under the two mechanisms. The Fund’s Lending into Arrears Policy permits the IMF to provide resources to countries that are unable to repay their external creditors and are thus considered in default. Conceived in the late 1980s and amended in the late 1990s, the policy is designed to protect the value of creditor assets while providing creditors with incentives to enter rapidly into restructuring negotiations with countries. The Lending into Arrears Policy increases the likelihood that a country’s private sector lenders would agree to reduce the value of their loans because Fund resources reduce short-term fiscal pressures experienced by the country while in default. A country is eligible for Fund resources while in default if the Fund determines that the debt burden is unsustainable, and the country is making satisfactory progress in implementing reforms. Additionally, the country must have demonstrated a good faith effort to reach a restructuring agreement with creditors to restore its ability to repay its debt. In 2002, the Fund clarified the criteria to be used to determine whether the debtor country is making a good faith effort. For example, the Fund would consider how quickly the debtor engaged in negotiations with its creditors after it defaulted. To date, the Fund has lent into arrears on international bonds on four occasions: Ukraine, Ecuador, Moldova, and Argentina. Introduced in 1997, the SRF provides large short-term loans to members experiencing exceptional balance of payments difficulties prior to a default. 
The interest rates on these loans are much higher than those on standard Fund loans. The increased cost of these loans is expected to reduce the probability that countries consider Fund resources a viable means for underwriting unsustainable economic policies. The higher cost also increases incentives for early repayment and compensates the Fund for additional repayment risk. Countries are expected to repay SRF loans within 2 to 2½ years, but they may request extensions of up to 6 months. All SRF loans carry a substantial surcharge of 3–5 percentage points. In 2003, the Fund strengthened its criteria for providing large short-term loans under the SRF. For example, countries requesting SRF loans must provide a more extensive justification for their repayment difficulties. Additionally, the member has to demonstrate good prospects of regaining access to private capital markets within the time period that Fund resources are outstanding to minimize long-term reliance on Fund resources. To date, the Fund has provided SRF loans on nine occasions to six countries: Korea, Russia, Brazil, Turkey, Argentina, and Uruguay (see appendix IX). In accordance with its goal of strengthening the international financial system, the Fund has undertaken a number of reforms to better anticipate, prevent, and resolve sovereign financial crises. The Fund’s new vulnerability assessment process is more comprehensive than its previous crisis anticipation efforts, but it is too soon to judge its effectiveness. The Fund’s proposed approaches to better resolve financial crises have met considerable resistance, and it is unclear whether they will ultimately be adopted. The Fund and the Bank have made progress in their crisis prevention efforts by performing assessments of member countries’ financial sectors and adherence to standards. 
However, the effectiveness of these crisis prevention efforts is hindered by (1) private sector participants’ limited use of published assessments, which they find untimely, outdated, and too dense to be useful and (2) gaps in crucial information about crisis vulnerabilities in some important emerging market countries due to voluntary participation in the assessments. These limitations prevent multilateral institutions, national policy makers, and private sector participants from making sound decisions, thus reducing the likelihood that these reforms will help prevent crises. To help strengthen the Fund’s crisis prevention initiatives, we recommend that the Secretary of the Treasury instruct the U.S. Executive Director of the Fund to work with other Executive Board members to encourage the Fund to

- improve the timeliness of publication of Financial System Stability Assessments and Reports on the Observance of Standards and Codes;

- expand the coverage, frequency, and publication of reports on member countries’ progress on implementing assessment recommendations;

- improve the assessment reports’ readability, for example, by creating a standardized format in which to present executive summaries and key findings; and

- pursue strategies for increasing participation in the Financial Sector Assessment Program and all modules of the Reports on the Observance of Standards and Codes, including the possibility of making participation mandatory for all members of the IMF.

We received written comments on this report from the Department of the Treasury, the International Monetary Fund, and the World Bank. These comments and GAO’s evaluation of them are reprinted in appendixes X, XI, and XII. The organizations also separately provided technical comments that GAO discussed with relevant officials and included in the text of the report where appropriate. Treasury agreed with the report’s recommendations. 
Treasury recognized that some important countries have not volunteered to participate in the FSAP and ROSC and that there should be a shorter turnaround between the completion of an assessment and its public release. Treasury also pointed out that the acceptance of collective action clauses in some recent bond offerings serves as an important signal to investors that official financing is limited and that they cannot expect to be protected from risks. The IMF broadly agreed with the report’s recommendations. However, the IMF stated that we mischaracterized the role of the WEO forecasts and EWS models in IMF crisis anticipation efforts by ascribing to them greater importance than is warranted. We disagree with this depiction. Our assessment examined all six components of the IMF’s vulnerability assessment framework, including the WEO and the EWS. Because the WEO and EWS are the only mature and quantifiable elements of the framework, our analysis focused more heavily on their track records. The IMF also stated that its responsibility to maintain financial stability could make its predictions less accurate, because it may temper forecasts so as not to contribute to a crisis. The IMF’s comment not only validates our finding on the WEO’s weakness but also raises questions regarding the purpose and credibility of the WEO forecasts. The World Bank generally agreed with the report’s recommendations. However, the Bank expressed concern with the report’s suggestion that consideration be given to making participation in the FSAP and ROSC assessments mandatory. While we are not suggesting that the assessments should be made mandatory, the voluntary nature of the FSAP and ROSC has posed an obstacle to full participation by important emerging market countries. We are sending copies of this report to the Secretary of the Treasury, the International Monetary Fund, the World Bank, and interested congressional committees. We also will make copies available to other interested parties upon request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please call me at (202) 512-8979. Other GAO contacts and staff are acknowledged in appendix XIII. The Chairman of the House Committee on Financial Services and the Vice Chairman of the Joint Economic Committee requested that we assess (1) the International Monetary Fund’s (IMF’s or the Fund’s) framework for anticipating financial crises, (2) the status of key IMF initiatives to prevent financial crises, and (3) new IMF proposals to resolve future financial crises. They requested that our review cover the period after the Mexican financial crisis of 1994–95. To assess IMF’s framework for anticipating financial crises, we examined prior and new IMF mechanisms for anticipating crises. Our analysis focused on the World Economic Outlook (WEO) forecasts and the IMF’s Early Warning System (EWS) models that were the IMF’s primary forecasting tools prior to the implementation of the new vulnerability assessment framework in May 2001. Data on the performance of the other four components of the framework were not made available to us because these elements are considered part of the staff-level deliberative process and are not provided to the Executive Board. We obtained near-term data from the WEO forecasts, including real gross domestic product (GDP) growth rate and current account balance for 87 emerging market countries for the period 1990–2001 (see appendix III). We focused on the 81 middle-income countries and an additional 6 low-income countries listed by J.P. Morgan as emerging markets. To evaluate the WEO and program forecasts, we used standard econometric techniques based on methods commonly found in the forecasting literature. 
The formal methodology of our forecast evaluations was based on several expert publications, the replication of our summary statistics with another author’s results, and discussions with a forecasting expert. To describe the performance of the IMF’s EWS models in anticipating crises, we reviewed and summarized the results of IMF evaluations. We interviewed IMF staff, including country desk economists and staff from several departments, to discuss the IMF’s new framework for vulnerability assessment, the EWS models, and the WEO methodology. We also interviewed 23 major private sector emerging market participants to discuss whether and how they use IMF forecasts in their investment decisions. To assess the status of key IMF initiatives to prevent financial crises, we reviewed Fund and World Bank documents published between 1999 and 2003 on the creation of the Financial Sector Assessment Program (FSAP) and Reports on the Observance of Standards and Codes (ROSC) and evaluations of progress in implementing these reforms. We interviewed senior Fund officials, including staff from the Monetary and Exchange Affairs Department, the Policy Development and Review Department, and the Fiscal Affairs Department. We also met with senior advisers at the World Bank (the Bank) who oversee the Bank’s participation in the FSAP and ROSC initiatives. To gain a better understanding of how the Fund uses FSAP and ROSC assessments to inform the policy advice it provides to member country governments and challenges it faces in using these assessments, we interviewed Fund officials in nine area departments (Argentina, Brazil, Korea, Mexico, Peru, Poland, Russia, Turkey, and Uruguay). We also spoke with Fund officials in these area departments about member country governments’ use of the assessments in shaping their reform agendas and the obstacles that member country authorities encounter in implementing the reforms recommended in the assessments. 
To assess the extent to which emerging market countries have participated in and authorized publication of FSAP and ROSC assessments, we examined Fund and World Bank data on country participation in the FSAP and 12 ROSC modules and publication of the resulting reports between May 1999 and March 2003. To obtain views on the private sector’s use of Fund and World Bank assessments, we conducted structured interviews with 13 representatives of private sector firms, including ratings agencies, investment banks, and pension funds. We focused on 33 countries (a subset of the 87 we analyzed in the previous section) identified as the major emerging market countries by J.P. Morgan. To describe new proposals to resolve future financial crises and their potential challenges, we obtained the most current Fund documentation for the two key proposals, the Sovereign Debt Restructuring Mechanism (SDRM) and Collective Action Clauses (CACs). We examined the purpose, goals, requirements, and status of implementation of each. To obtain views on the private sector’s understanding of the components of the new proposals and potential implementation challenges, we conducted structured interviews with 22 representatives of private sector firms, including ratings agencies, investment banks, and pension funds. We also met with government officials, private sector emerging market participants, and nongovernmental organization representatives at several conferences. We also interviewed Department of the Treasury officials and experts in international finance and law. The IMF did not meet with us on these proposals because they were still under negotiation at the time of our review. We conducted fieldwork in Washington, D.C., and in New York. We performed our work from May 2002 to May 2003 in accordance with generally accepted government auditing standards. Congress expressed concerns regarding the accuracy of the International Monetary Fund’s (IMF’s) growth rate projections and asked us to examine them. 
In response, we analyzed the quality of the forecasts produced by the World Economic Outlook (WEO), the Fund’s primary forecasting tool. Using econometric techniques common to forecast evaluation studies, we examined the basic measures of forecast accuracy, bias, and efficiency. This assessment supplements our finding on WEO’s efforts to anticipate crises reported earlier. We found that WEO forecasts of gross domestic product (GDP) growth and inflation perform somewhat better than an assumption that next year’s rate will be the same as this year’s (called a “naive” forecast). However, there is evidence of an optimistic bias in the forecasts of GDP growth and inflation. In addition, we found that the naive forecast of the current account generally performed better than the WEO forecast. Moreover, WEO forecasts for the major industrialized countries were superior to emerging market forecasts, and forecasts for emerging market countries that had been on an IMF program were better than those for countries that were not. The shortcomings we observed in WEO forecasting are similar to those encountered by other private sector and official forecasters. We evaluated IMF forecasts for 87 emerging market countries. Our analysis focused on WEO forecasts of GDP, inflation, and the current account. Our measures of forecast quality relied on generally accepted econometric measures of accuracy, bias, and efficiency. To evaluate the quality of IMF forecasts, we analyzed the near-term and year-ahead WEO forecasts for the 87 emerging market countries. Appendix IV lists the 87 emerging market countries used in the analysis. Our analysis focused on three WEO forecast variables: (1) the growth rate of real GDP, (2) consumer price index (CPI) inflation (average over period), and (3) current account balance in billions of U.S. dollars. 
Our evaluation methodology is based on standard econometric techniques commonly found in the forecast literature, including the work of forecasting experts such as Stekler (1991), Artis (1996), and Loungani (2001). We also compared the quality of WEO forecasts of emerging market countries with WEO forecasts of the G-7 countries, and we compared forecasts for borrowers of IMF resources with those for countries that did not borrow. Our comparison with the G-7 countries allowed us to informally assess whether income level or data quality mattered in forecast quality. Our analysis of forecasts for program countries permitted us to assess whether WEO forecasts differ from forecasts contained within program documents, which are produced under conditions of greater staff scrutiny. We also reviewed a number of forecast evaluation studies to see how our results compared to previous reviews and to contrast IMF forecast quality with other forecasting efforts. Our analysis focused on the WEO’s near-term and year-ahead forecasts. Near-term forecasts originate from the May WEO for each year, and they project for the remainder of the existing year (approximately 6 months ahead). The year-ahead forecasts come from the October WEO of the preceding year. Thus, a near-term forecast for 2000 would come from the May 2000 WEO, and a year-ahead forecast for 2000 would come from the October 1999 WEO. We compared these WEO forecasts to the “first settled estimate,” which comes from the October WEO of the year following the one for which the forecast is made. Thus, we compared both the near-term and year-ahead forecasts for 2000 with the “first settled estimate” from the October 2001 WEO. Most of the econometric tools we used to assess the quality of WEO forecasts analyze the errors deriving from the forecasts. Our econometric tools examined these errors for certain qualities and patterns. We defined the forecast error, e_t, as the difference between the forecasted value, F_t, and the realized value, A_t, of an indicator. 
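The timing convention described above can be summarized in a short sketch (illustrative only; the helper name and the month-year tuple format are ours, not the report's):

```python
# Sketch of the WEO vintage convention described above (illustrative only).
# For a target year Y:
#   near-term forecast      -> May WEO of year Y (about 6 months ahead)
#   year-ahead forecast     -> October WEO of year Y - 1
#   "first settled estimate" (treated as the realized value)
#                           -> October WEO of year Y + 1

def weo_vintages(target_year):
    """Return the WEO issues used to evaluate forecasts for `target_year`."""
    return {
        "near_term": ("May", target_year),
        "year_ahead": ("October", target_year - 1),
        "first_settled_estimate": ("October", target_year + 1),
    }

# Example from the text: the near-term forecast for 2000 comes from the
# May 2000 WEO, the year-ahead forecast from the October 1999 WEO, and
# both are compared to the estimate in the October 2001 WEO.
vintages_2000 = weo_vintages(2000)
```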
Hence, we have e_t = F_t − A_t, where F_t denotes the forecasted value and A_t the realized value. Our examination of the errors in the Fund’s forecasts focused on three measures of “goodness”: accuracy, bias, and efficiency. We performed these tests separately for the 87 countries over the 11-year forecasting period. The credibility of a forecast is established by its accuracy. The ultimate test of any forecast is whether it can predict future events accurately. Accuracy assesses whether forecasts tend, by some standard, to be close to actual outcomes. Although there is no objective standard of accuracy, comparisons to alternative forecasting methods, such as a naive model that uses historical trend data, are one way to judge relative accuracy. The accuracy measure we used is Theil’s U-statistic (U), based on the naive model that assumes this year’s growth rate will be the same as last year’s. The Theil U-statistic is based on an examination of the forecast’s root mean square error (RMSE). To compute RMSE, the forecast errors are squared and averaged over the sample to get the mean square error (MSE). RMSE is the square root of MSE. Theil’s U is the ratio of the forecast’s RMSE to the naive model’s RMSE, so a value below 1 indicates that the forecast outperforms the naive model. Bias determines whether forecast errors in one direction tend to be larger and/or more numerous than errors in the opposite direction. Forecast errors can be divided into two parts. One part is the “random error,” which varies unsystematically, or randomly, from one forecast to the next. The other part is the “bias error,” which remains constant for any particular forecasting procedure. Bias happens when factors other than random events influence the forecast results, resulting in an upward or downward tendency. An unbiased forecast means that forecast errors are approximately zero on average over time. However, an unbiased forecast does not guarantee that a forecast will be accurate enough to be useful if the errors are large. Efficiency examines whether a forecast has taken into account all available information. 
Establishing that a forecast is efficient means that no other model or readily available information could improve the forecast, and there is no way to predict the direction or size of its errors. A test of efficiency makes use of the simple linear model in which we regress the actual outcome on the forecast: A_t = α + βF_t + ε_t. If the forecast is efficient in predicting the actual outcome, then the intercept, α, should equal 0 and the slope, β, should equal 1. Using this regression model, we perform a joint hypothesis test to check whether both conditions hold simultaneously. The test statistic compares the residual sum of squares from the restricted model (with α = 0 and β = 1 imposed) to that from the unrestricted regression. The reference distribution for this statistic is an F-distribution with 2 and n − 2 degrees of freedom. If the p-value for this statistic is less than .05, then we reject, at the 95 percent level of significance, the hypothesis that the intercept is zero and the slope is one. This means that there is only a 5 percent chance that we are making a false rejection, that is, saying the forecast is not efficient when it is. Our analysis of the WEO forecasts for 87 emerging market countries shows that WEO forecasts perform somewhat better than a naive model for GDP growth and inflation, but not for the current account (see table 2). We found that the year-ahead WEO forecast does a better job than the naive forecast for GDP in more than 60 percent of the countries, for inflation in more than half the countries, and for the current account in about one-quarter of the countries. However, even for GDP, nearly 40 percent of the country forecasts were no better than an assumption that next year’s value is the same as this year’s. For all three variables, the shorter the forecast period, the more accurate the forecast. 
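The three tests described above can be made concrete with a short sketch (an illustrative implementation under our own assumptions, not the Fund's or GAO's actual code; the function names are ours):

```python
# Illustrative sketch of the three forecast-quality tests described above,
# applied to one country's forecasts F and realized values A over the sample.
import numpy as np

def theil_u(F, A):
    """Theil's U: the forecast's RMSE relative to the RMSE of a naive
    forecast that assumes this year's value equals last year's.
    U < 1 means the forecast beats the naive model."""
    F, A = np.asarray(F, float), np.asarray(A, float)
    rmse_forecast = np.sqrt(np.mean((F[1:] - A[1:]) ** 2))
    rmse_naive = np.sqrt(np.mean((A[:-1] - A[1:]) ** 2))  # naive: A[t-1] predicts A[t]
    return rmse_forecast / rmse_naive

def bias_t_stat(F, A):
    """t-statistic for the mean forecast error e = F - A; a significantly
    nonzero mean indicates bias (errors lean in one direction)."""
    e = np.asarray(F, float) - np.asarray(A, float)
    return e.mean() / (e.std(ddof=1) / np.sqrt(len(e)))

def efficiency_f_stat(F, A):
    """Joint F-test of intercept = 0 and slope = 1 in the regression
    A_t = alpha + beta * F_t + error. Reference distribution: F(2, n - 2)."""
    F, A = np.asarray(F, float), np.asarray(A, float)
    n = len(F)
    X = np.column_stack([np.ones(n), F])
    coef, *_ = np.linalg.lstsq(X, A, rcond=None)
    ssr_u = np.sum((A - X @ coef) ** 2)  # unrestricted residual sum of squares
    ssr_r = np.sum((A - F) ** 2)         # restricted model: alpha = 0, beta = 1
    return ((ssr_r - ssr_u) / 2) / (ssr_u / (n - 2))
```

For example, an accurate, nearly unbiased series of forecasts should yield a Theil U well below 1, a small bias t-statistic, and a small efficiency F-statistic.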
When the forecast time horizon shortens from 1 year to 6 months, the percentage of cases in which the WEO does a better job forecasting than the naive model increases for all three variables, exceeding 50 percent for the current account. WEO forecasts for GDP and inflation demonstrated bias in about 20 percent of the country cases. The direction of the bias was upward for GDP and downward for inflation, indicating an optimistic tendency within the WEO forecasting process. Although the bias was upward for the current account, also consistent with optimism, it occurred in only 8 percent of country forecasts. For all three variables, we could not reject the hypothesis that WEO forecasts were efficient for at least two-thirds of the countries’ forecasts. However, in about one-fourth of the country cases, the WEO forecast could have been improved through the use of a different model or the addition of new information. WEO forecasts of the most developed countries are superior to its forecasts of emerging market countries when compared to the naive model forecasts. The improved forecast quality is likely due to better available data and greater stability of the wealthiest economies. Similarly, WEO forecasts for countries that borrow from the IMF are superior to those for countries that do not. The increased scrutiny of borrowing countries by IMF staff likely contributes to the improved forecasts. WEO forecasts of GDP and inflation for the G-7 countries are considerably better than its forecasts for the emerging market countries when compared to the naive model forecasts (see table 3). This improvement is evident across the full range of analyses. For example, in six out of seven countries, the year-ahead WEO forecasts of GDP and inflation for the G-7 countries were found to be accurate, and the near-term forecasts for GDP and inflation were accurate for all of the G-7 countries. These results are considerably better than WEO forecasts for emerging markets. 
In the year-ahead forecasts, bias and efficiency were a concern in two GDP forecasts and one current account forecast, but they were not a concern in the inflation forecasts. Although the year-ahead forecast for the current account was inaccurate for six of seven countries, the near-term forecast was accurate for five of the G-7 country cases. The improved quality of WEO forecasts for the G-7 countries is likely due to better available data and greater stability of these economies, compared to emerging market countries. We found that WEO forecasts for 57 countries that were on an IMF program (or that borrowed Fund resources under the Compensatory and Contingency Financing Facility (CCFF)) for any part of the forecast period tend to be more accurate than WEO forecasts for the 30 countries that were never on an IMF program during this period (see table 4). Countries that borrow from the IMF are likely to be under greater scrutiny from Fund staff than those that do not borrow, which could contribute to an improved forecast. For this analysis, we compared the Theil statistics for GDP growth, inflation, and current account for the two pooled forecasts. We found that the WEO forecasts of GDP and inflation for program countries are more accurate than those for nonprogram countries. That is, when compared to the naive model, the program countries have a lower Theil statistic than nonprogram countries. For both groups, the forecast of the current account is inferior to the naive forecast (a Theil statistic greater than 1). The WEO forecasts of GDP and inflation for program countries are biased, whereas those for the nonprogram countries are not. This indicates that by assuming implementation of the policies contained within the program, the Fund expects that GDP and inflation will perform better than they actually do. In addition to the publicly available WEO forecasts for all countries, the IMF also produces a set of program forecasts for countries in the years they borrow from the Fund. 
According to the Fund, these two forecasts should be very similar since they are prepared by the same staff in the same manner. Our comparison of program and WEO forecasts for the initial year that each country was on program confirmed that the accuracy of the two forecasts for GDP and inflation were nearly identical (see table 5). However, program forecasts of current account are substantially better than those reported in the WEO. In addition, in all cases the program forecasts were substantially more accurate than the naive model. This is further evidence that the greater scrutiny experienced by these countries while under a program probably contributes to an improved forecast. Our review of other forecast evaluations found that the shortcomings we observed in WEO forecasting are similar to difficulties encountered by other forecasters. These studies examined forecasts produced by the private sector (for example, consensus forecasts), governments, and multinational agencies including the IMF and Organization for Economic Cooperation and Development. These studies, similar to our observation in this report, found a general inability to predict recessions. In addition, consistent with our results, these studies found that (1) the shorter the time horizon, the more accurate the forecasts; (2) current account forecasts are markedly weaker than the forecasts for GDP and inflation; (3) when bias is found, forecasts tend to overestimate GDP and underestimate inflation; and (4) GDP and inflation forecasts for the industrial countries tend to be more accurate and less biased than forecasts for developing countries. While several studies found that WEO forecasts for developing countries were inferior to those generated by the naive model, one study found that WEO forecasts for developing countries did notably better than a naive forecast. A number of studies compared the quality of WEO forecasts with those produced by the private sector. 
Although some studies found the relative quality of the forecasts to be generally the same, a few studies found WEO forecasts to be less accurate than those of the private sector. 

Standard or code and rationale for adoption

Transparency standards: The standards on transparency in government operations and policy making are considered within the Fund’s direct operational focus. IMF Special Data Dissemination Standard (SDDS) and General Data Dissemination Standard (GDDS). The purpose of the IMF’s SDDS is to guide member country governments that have, or might seek, access to international capital markets in publishing comprehensive, timely, accessible, and reliable economic and financial statistics. The purpose of the GDDS is to help any member country government provide more reliable economic data. The SDDS and GDDS were created in 1996 and 1997, respectively. IMF Code of Good Practices on Fiscal Transparency. This Code is intended to help member country governments improve the disclosure of information about the design and results of fiscal policy, making governments more accountable for policy implementation and strengthening credibility and public understanding of macroeconomic policies and choices. IMF Code of Good Practices on Transparency in Monetary and Financial Policies. This Code is designed to increase the effectiveness of monetary and financial policies by raising public awareness of the government’s policy goals and instruments and making governments (especially independent central banks and financial agencies) more accountable. Financial sector standards: The financial sector standards are considered within the direct operational focus of both the Fund and the World Bank and are generally assessed under the joint Fund-Bank FSAP. Basel Committee’s Core Principles for Effective Banking Supervision (BCP). The BCP is intended to guide the development of an effective system for supervising banks, a large sector of most economies. 
The IMF and World Bank began assessments of countries’ compliance with the BCP standard in conjunction with the Financial Sector Assessment Program (FSAP) launched in 1999. International Organization of Securities Commissions’ (IOSCO) Objectives and Principles for Securities Regulation. The IOSCO Objectives and Principles are designed to help governments establish effective systems to regulate securities, which contribute strongly to investor confidence. The IMF and World Bank began using them to assess securities regulation in conjunction with the FSAP, launched in 1999. International Association of Insurance Supervisors’ (IAIS) Insurance Core Principles. The IAIS Core Principles are designed to contribute to effective insurance supervision that supports financial stability. The IMF and World Bank began assessing member countries’ regulatory practices in this area in conjunction with the FSAP, launched in 1999. Committee on Payments and Settlements Systems’ (CPSS) Core Principles for Systemically Important Payment Systems. The CPSS Core Principles are intended to strengthen payments systems, which provide the channels through which funds are transferred among banks and other institutions. The IMF and the World Bank began assessing member countries’ observance of this standard in conjunction with the FSAP, launched in 1999. Financial Action Task Force (FATF) 40 Recommendations on Anti- Money Laundering and 8 Special Recommendations on Terrorism Financing. The FATF’s 40 Recommendations and 8 Special Recommendations are intended to promote policies that combat money laundering and terrorist financing, which threaten financial system integrity and may undermine the sound functioning of financial systems. In 2002, the IMF and World Bank agreed to perform anti-money laundering and terrorist financing assessments as a 12-month pilot program, generally in conjunction with the FSAP. 
Corporate sector standards: The corporate sector standards are considered important for the effective operation of domestic and international financial systems and are assessed by the World Bank. Organization for Economic Cooperation and Development’s (OECD) Principles of Corporate Governance. The OECD developed its corporate governance principles to help governments evaluate and improve their legal, institutional, and regulatory frameworks for corporate governance. The World Bank developed a template for assessing adherence to corporate governance principles based on the OECD’s Principles established in 1999. International Accounting Standards Board’s (IASB) International Accounting Standards. The ROSC’s accounting module is intended to compare member countries’ corporate accounting practices with international accounting standards and to analyze actual accounting practice to determine the extent of compliance with applicable standards. There is special focus on the strengths and weaknesses of the institutional framework for supporting high quality accounting and financial reporting. In 2000, the World Bank developed a template for assessing adherence to accounting standards based on the IASB Standards. International Federation of Accountants’ (IFAC) International Standards on Auditing. The ROSC’s auditing module compares member countries’ auditing standards and auditors’ professional code of ethics with the standards and codes issued by IFAC. Also, the quality of actual auditing practices is evaluated. There is special focus on the strengths and weaknesses of the institutional framework for supporting high quality audit. In 2000, the World Bank developed a template for assessing adherence to auditing standards based on the IFAC’s Standards. World Bank Principles and Guidelines for Effective Insolvency and Creditor Rights Systems. 
In 2001, the World Bank developed draft Principles and Guidelines intended to help countries develop effective insolvency and creditor rights systems, two important components of financial system stability. The World Bank has conducted several assessments based on its draft Principles and Guidelines. The United Nations Commission on International Trade Law (UNCITRAL) is completing a draft Legislative Guide on Insolvency Law. UNCITRAL, Bank, and IMF staff are working toward a single standard. In response to allegations of misreporting and misuse of International Monetary Fund (IMF or the Fund) disbursements in the late 1990s, the Fund increased its efforts to protect its resources by introducing safeguards assessments, a process for evaluating the controls employed by the central banks of borrowing member countries and for recommending measures to address inadequacies. Safeguards assessments have detected numerous inadequacies that could lead to misuse of Fund resources and have recommended measures to remedy them. In 2000, the Fund introduced safeguards assessments, a process for identifying inadequacies in central banks’ ability to ensure the integrity of their operations, especially the use of Fund resources. Safeguards assessments evaluate central banks’ internal and external audit mechanisms, legal structure and independence, financial reporting procedures, and systems of internal controls. In April 2002, the Fund’s Executive Board made safeguards assessments a permanent policy. Safeguards assessments apply to all member countries with current or anticipated borrowing arrangements with the Fund. Countries with borrowing arrangements approved after June 30, 2000, are subject to a full safeguards assessment covering the five areas listed above. Countries with arrangements in effect before June 30, 2000, that had not yet repaid all Fund resources were subject to a partial assessment covering only the external audit mechanism. 
Countries that do not have borrowing arrangements or have already repaid all Fund resources are not subject to safeguards assessments. According to Fund officials, since 2000 the IMF has not provided financial resources to countries that did not meet its safeguards requirements. As of December 2002, the Fund had performed 37 full safeguards assessments and 27 partial assessments, with 23 assessments under way. The completed assessments detected a number of serious vulnerabilities that could lead to misuse of central bank resources, including those borrowed from the Fund. Of the full safeguards assessments, the Fund found the following:

- Inadequate accounting standards in 82 percent of the central banks, which interfere with the accurate recording of central bank operations. For example, some central banks did not adhere to a financial reporting framework such as the International Accounting Standards (IAS), which would help prevent misreporting of transactions.

- Deficient internal audit in 79 percent of central banks, which reduces their ability to address risks of misuse and misreporting of Fund resources. For example, some internal audit departments did not audit high-risk areas such as foreign reserves management.

- Poor controls over foreign reserves and data reporting to the Fund in 49 percent of the central banks, increasing the possibility of misreporting and misuse of Fund resources. For example, safeguards assessments identified improper techniques for valuing foreign reserves and failure to reconcile data reported to the Fund for program monitoring purposes with underlying accounting records.

According to Fund officials, when IMF staff detect significant weaknesses in the controls of assessed central banks, they recommend that the government take corrective actions. 
For actions that IMF staff consider essential, they may incorporate the recommendations into the list of preconditions that the Fund requires borrower countries to meet before receiving IMF resources, or they may suggest that the government include the recommended actions in its official economic program. The Fund reports that of the 275 recommendations that were expected to be implemented on or before December 31, 2002, 23 percent were incorporated as conditions for IMF resources or included in official economic program statements. Fund staff monitor central banks’ implementation of recommendations by performing in-depth reviews of their audited annual financial statements and other documents every 12 to 18 months until the borrower country government has repaid all Fund resources. The Fund continuously monitors central banks’ implementation of all other safeguards measures and developments within the central banks that may lead to new vulnerabilities. Recently, the Fund reported that central banks have implemented 90 percent of recommendations that IMF staff included as a precondition for receiving IMF resources. According to Fund officials, the IMF stopped disbursing resources in the few cases where governments failed to implement these essential recommendations. Similarly, the Fund reported that central banks have implemented 84 percent of measures included in governments’ official economic program statements. On the other hand, the Fund reported that some recommendations made by the safeguards assessments have not been implemented as intended, although Fund officials state that these delays typically occurred in nonpriority areas. When central bank authorities fail to implement the recommendations, Fund staff increase pressure to comply, often proposing the measures’ inclusion as a precondition for the next disbursement. However, the Fund reports that staff can adopt this approach only in countries where the Fund is actively disbursing funds. 
For countries that are not currently receiving Fund disbursements, implementation of recommendations from the safeguards assessments tends to be more problematic because the Fund cannot exert pressure through a borrowing arrangement. Figure 4 lists all countries that have participated in Financial Sector Assessment Program (FSAP) or Reports on the Observance of Standards and Codes (ROSC) assessments and whether or not these assessments were published. The figure describes participation and publication by the 33 major emerging market countries. Countries highlighted in bold have not participated in any assessments. Figure 5 describes participation and publication by other countries (industrial, developing, and smaller emerging markets). In recent financial crises, the International Monetary Fund (IMF or the Fund) provided large short-term loans under the Supplemental Reserve Facility (SRF) with high interest rates to member countries experiencing exceptional balance of payments problems. These problems resulted from a sharp decline of investor confidence and significant outflows of capital. These loans generally were provided when the countries had exceeded their financing limit under other loan mechanisms, including the Stand-By Arrangement (SBA). In some circumstances, such as Argentina and Uruguay, the Fund provided a mix of SRF and SBA loans. Table 6 lists Fund members receiving SRF loans from 1997 to 2002. The following are GAO’s comments on the letter from the International Monetary Fund, dated June 2, 2003. capacity to implement FSAP and ROSC recommendations. However, the report points out several factors that limit the usefulness of FSAP and ROSC assessments. Our recommendation, with which the IMF agrees, is designed to improve the timeliness and coverage of these assessments. 6. We based our description of IMF safeguards assessments on the IMF’s reviews of this program. 
We consider this topic to be within the scope of this evaluation because the framework for conducting safeguards assessments is derived from the IMF’s Code of Good Practices on Transparency in Monetary and Financial Policies. Safeguards assessments are thus related to the standards initiative, which constitutes a central element of this report. The following are GAO’s comments on the letter from the World Bank, dated June 2, 2003. 1. The report states unambiguously that crises can stem from a number of factors, some of which are outside the scope of the FSAP and ROSC assessments. However, there is broad agreement that the roots of the Mexican and Asian financial crises lay in weaknesses in financial systems and other institutions. The IMF and the World Bank based their decision to launch the FSAP and ROSC initiatives on the premise that timely identification of financial sector and institutional vulnerabilities can contribute to crisis prevention. The IMF and the World Bank have also acknowledged that FSAP and ROSC assessments can contribute to crisis prevention efforts by helping private sector participants make better informed investment decisions. 2. Our recommendation to pursue strategies to increase participation in the FSAP and ROSC assessments, including the possibility of making these assessments mandatory, stems from the IMF’s and the World Bank’s recognition of the need to prioritize participation by important emerging market countries. Although many of these countries have volunteered to participate in these assessments, others have not. While we are not suggesting that the assessments should be made mandatory, it is evident that the voluntary nature of the FSAP and ROSC has posed an obstacle to full participation by important emerging market countries. In addition to those individuals named above, Eric Clemons, Suzanne Dove, Bruce Kutnick, Jonathan Rose, R.G. 
Steinman, Ian Ferguson, Mary Moutsos, Lynn Cothern, Carl Barden, David Dornisch, and Martin De Alteriis made key contributions to this report. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.
Building on reform initiatives instituted after the Mexican financial crisis, the IMF implemented new initiatives in the mid-1990s to better anticipate, prevent, and resolve sovereign financial crises. GAO was asked to assess (1) the IMF's framework for anticipating financial crises, (2) the status of key IMF reform initiatives to prevent financial crises, and (3) new IMF proposals to resolve future financial crises. While the Fund's new vulnerability framework is more comprehensive than its previous efforts, it is too early to assess whether it will improve Fund efforts to anticipate crises. The new framework uses the Fund's major forecasting tools, the World Economic Outlook (WEO) and the Early Warning System (EWS), which have not performed well in anticipating prior crises. Forecasting crises has historically been difficult for all forecasters. The Fund, with the World Bank, has made progress in implementing initiatives to prevent crises, but several challenges remain. To obtain better information about weaknesses in member countries' financial sectors, the Fund and Bank introduced the Financial Sector Assessment Program (FSAP) to report on member countries' financial sectors and the Reports on the Observance of Standards and Codes (ROSC) to assess member countries' adherence to 12 standards. Assessments have not been completed in some major emerging market countries, primarily because participation is voluntary, and use of this information has been mixed. For example, some private sector market participants have found the reports untimely, outdated, and dense. The Fund is considering two approaches to restructuring unsustainable sovereign debt; however, there are significant challenges to implementing them. One approach involves creating an international legal framework that would allow a specified majority of a country's external creditors to restructure most private sector loans.
Under the second approach, the Fund is encouraging members to include renegotiation clauses in individual bonds. Many private sector representatives wish to maintain the existing process in which the Fund assists resolution by providing loans to some eligible members. In response to concerns that its resources may have unintended negative impacts during a crisis, the Fund has clarified and strengthened its criteria for lending to members experiencing crises.
The National Airlift Policy, issued in June 1987, reinforced the need for and use of the Civil Reserve Air Fleet (CRAF) program, established in 1951. The policy states that military and commercial airlift resources are equally important; that DOD should determine which resources must be moved by the military and which can be moved by commercial air carriers; and that commercial carriers will be relied upon to provide airlift capability beyond that of the military fleet. It also states that during peacetime, DOD requirements for passenger and/or cargo airlift augmentation shall be satisfied by procuring airlift from commercial air carriers participating in the CRAF program. Military airlift requirements are fulfilled by a mix of both military and civilian aircraft. Currently, the military airlift fleet comprises 82 C-17, 110 C-5, 468 C-130, and 69 C-141 aircraft. The older C-141 aircraft are being phased out and replaced by additional C-17 aircraft. There are also 54 KC-10 aircraft, which perform both airlift and refueling missions. The CRAF program includes 927 cargo and passenger aircraft from U.S. commercial air carriers. CRAF participants are required to respond within 24 hours of activation in the event of stage I (a regional crisis in which the Air Mobility Command’s (AMC) aircraft fleet cannot meet both deployment and other traffic requirements simultaneously) or stage II (a major war that does not warrant full national mobilization). Stage III—multiple theater wars or a national mobilization—requires that total CRAF airlift capability be made available to DOD within 48 hours of activation. Aircraft used in stages I and II are also available in subsequent stages. In the event of activation, AMC assumes mission control, but the carriers continue to operate and support the aircraft (support includes fuel, spare parts, and maintenance). Stage I was activated for the first and only time on August 17, 1990, during Operation Desert Shield.
Stage II was activated on January 17, 1991, for Operation Desert Storm. The total number of aircraft committed to CRAF (see table 1) accounts for about 15 percent of all U.S.-owned commercial aircraft forecasted for 2003. Appendix I lists the carriers participating in the CRAF program as of October 2002 and the total number of aircraft each has committed through stage III. More aircraft are committed to the CRAF program than are needed to fulfill the wartime requirements established by the Mobility Requirements Study 2005 (MRS-05). There was a shortage of aeromedical evacuation aircraft, but it has recently been eliminated. Program participants stated that they would be capable of providing the needed levels of aircraft and crews within the necessary time frames, even with recent furloughs and with crewmembers who have National Guard or Reserve commitments. A new mobility requirements study could increase the need for CRAF, given the shift from the two-major-theater-war scenario to the new strategy of planning for a range of military operations described in DOD’s Quadrennial Defense Review Report, issued in September 2001. Under MRS-05’s two-major-war scenario, the study assumed that both military and CRAF aircraft would be needed and that CRAF would be required to move 20.5 million ton miles a day, or 41 percent of all military bulk cargo deliveries. CRAF would also carry 93 percent of all passengers and provide almost all aeromedical evacuation needs. In fiscal year 2002, only 31 of the 40 required B-767s were available for conversion to aeromedical evacuation. However, commercial carriers increased their commitment to 46 of these aircraft for fiscal year 2003. Table 2 compares the requirements for a stage III CRAF activation with commitments by program participants.
Officials from CRAF air carrier participants that we visited confirmed that they would be able to provide the agreed levels of airlift capacity within the necessary time frames and that the turmoil in the airline industry after the attacks of September 11, 2001, would not affect their ability to do so. The officials said they would also be able to provide at least four flight crews per aircraft (crewmembers must also be U.S. citizens), as AMC Regulation 55-8 requires. This holds even though some carriers have had to furlough pilots during the recent economic downturn and even though employees with National Guard or Reserve commitments cannot be included in available crew lists. The same regulation requires that commercial carrier personnel with military Reserve or National Guard commitments not be counted in the cockpit crew-to-aircraft ratio. They can, however, be used in CRAF carrier work until their military units have alerted them of a recall to active duty. Officials from the carriers we visited said they monitor their crewmembers’ reserve commitments carefully and usually maintain a higher crew-to-plane ratio than DOD requires. For example, one carrier we visited operates with a crew-to-plane ratio of 10 to 1, instead of the 4 to 1 DOD requires for CRAF carriers. DOD also inspects carriers annually, and the inspectors have been satisfied that the carriers could meet the crew-to-plane ratio. The MRS-05 did not consider CRAF’s full capacity, and it set a ceiling of 20.5 million ton miles on daily CRAF airlift requirements. According to DOD officials, the study restricted CRAF cargo capacity to 20.5 million ton miles per day because DOD’s airfields can accommodate only a certain number of aircraft at the same time. Also, they stated that using additional CRAF aircraft would reduce efficiency because of the type of cargo CRAF is modeled to carry.
They said that commercial aircraft can take longer to unload than military aircraft and require special material handling equipment to be available at an off-loading base. Military aircraft, on the other hand, do not need specialized loading equipment because they are high-winged and lower to the ground. Furthermore, the MRS-05 did not consider the ability of the commercial industry to carry different cargo sizes. The MRS-05 modeled CRAF aircraft carrying only bulk cargo. According to Air Force officials, the U.S. commercial cargo fleet has limited ability to carry oversized cargo and no ability to carry outsized cargo. They stated that it is difficult, from a planning perspective, to model CRAF aircraft carrying oversized cargo because the models would need to distinguish between the types of oversized cargo and the types of aircraft. They also stated that using more CRAF capacity than the 20.5 million ton mile limit would flow more bulk cargo into a theater instead of oversized and outsized unit equipment brought in by the larger military aircraft. In reality, however, commercial aircraft do carry some oversized cargo. DOD is examining how much oversized equipment can be moved by CRAF so that this capability can be included in future mobility studies. DOD’s Defense Planning Guidance, issued in August 2001, requires that mobility requirements be reevaluated by 2004, and DOD officials believe that future requirements will be higher because of the increased number of possible scenarios included in the guidance. We believe that a study that also takes into consideration excess CRAF capacity and the types of cargo that CRAF can accommodate could provide a more realistic picture of needs and capabilities. It could also mitigate some of the concerns about airfield capacity and flow of cargo into a theater if CRAF aircraft could move some of the oversized cargo. 
This could get the larger cargo to a unit as it was needed, instead of bulk cargo, which may not be as time-critical. One of the key stated incentives of the CRAF program—the ability to bid on peacetime government business—may be losing its effectiveness because DOD uses almost exclusively one type of aircraft, the B-747, for its peacetime cargo missions. Over 94 percent, or 892, of 946 wide-body missions flown by CRAF participants in the first 10 months of fiscal year 2002 were carried out by B-747s, which accounted for only 38 percent of wide-body cargo aircraft committed to the CRAF program. Some major CRAF participants who do not have B-747s have suggested that they might reduce or end their participation in the program if they do not receive any business in return for their commitment. This could have a serious effect on the program’s ability to meet future requirements, especially if those requirements increase due to the change in focus from two major theater wars to a range of military operations outlined in the recent Quadrennial Defense Review. Only carriers that participate in the CRAF program can bid on peacetime mobility business. Carriers can bid on a percentage of peacetime business in direct proportion to their commitment to the program. Participants earn mobilization value points, which are based on the number and type of committed aircraft. In assigning mobilization value points, DOD measures each volunteered passenger or cargo aircraft against the capacity and airspeed of a B-747-100. Participants in the aeromedical evacuation segment of CRAF receive double the mobilization value points because of the significant reconfiguration their aircraft (B-767s) must undergo. The points are used to determine how much commercial business each participant can bid on out of the total, which in fiscal year 2002 more than doubled to $1.28 billion from $572 million the previous year (see app. II for annual amounts since fiscal year 1998). 
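The point-and-share mechanism described above can be illustrated with a short sketch. The report states only that each committed aircraft is measured against the capacity and airspeed of a B-747-100 and that aeromedical (B-767) commitments earn double points; the reference figures and the exact formula below are illustrative assumptions, not AMC's actual method.

```python
# Hypothetical sketch of the mobilization-value-point mechanism.
# Reference figures below are assumptions for illustration only.

B747_100_CAPACITY_TONS = 90.0   # assumed reference payload
B747_100_SPEED_MPH = 555.0      # assumed reference cruise speed


def mobilization_points(capacity_tons, speed_mph, count, aeromedical=False):
    """Score `count` identical aircraft against the B-747-100 reference."""
    per_aircraft = (capacity_tons / B747_100_CAPACITY_TONS) * \
                   (speed_mph / B747_100_SPEED_MPH)
    if aeromedical:
        per_aircraft *= 2  # aeromedical commitments earn double points
    return per_aircraft * count


def bid_shares(points_by_carrier):
    """Share of peacetime business each carrier may bid on, pro rata
    to its mobilization value points."""
    total = sum(points_by_carrier.values())
    return {carrier: pts / total for carrier, pts in points_by_carrier.items()}
```

On these assumptions, a carrier committing one aircraft identical to the reference earns exactly one point, and the carriers' bid shares always sum to the whole of the bid-eligible business.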
Participants with 62 percent of the wide-body cargo aircraft committed to CRAF are not able to bid on most peacetime cargo business because they do not have B-747s. An AMC official said that most requests for cargo aircraft require a 90-ton capacity, the same as that of a 747-type aircraft but slightly more than those of other wide-body aircraft such as the MD-11 (86 tons) or the DC-10 (75 tons). One carrier with over 100 wide-body cargo planes smaller than B-747s committed to the program (and accounting for 41 percent of all total mobilization value points awarded to cargo carriers) received only about 4 percent of peacetime cargo business in fiscal year 2002. By contrast, a carrier committing 10 B-747 type aircraft (7 percent of total cargo points) flew 37 percent of all peacetime cargo business. AMC officials claim that they must use 90-ton capacity aircraft because they need the flexibility and capacity to clear ports as quickly as possible. The B-747 can carry more and larger cargo than other wide-body aircraft because it has more capacity and larger doors. Officials also noted that the B-747 can carry standard-sized bulk cargo pallets that are the same size as those used by commercial industry, the Defense Logistics Agency, and other DOD activities and contractors. Standard pallets also fit aboard all military cargo aircraft. In order to fit aboard other wide-body aircraft such as the DC-10 or the MD-11, cargo handlers at military bases must disassemble and rebuild the standard pallets to fit the aircrafts’ lower profile (see fig. 2). Some cargo carrier officials said they could not bid on the amount of peacetime business they believe they are entitled to based on their CRAF participation. Consequently, they indicated that unless this problem improves, they might reduce or end their participation at some point in the future. 
AMC officials acknowledged that the requirements from Operation Enduring Freedom, DOD’s operation in Afghanistan, amounted to the equivalent of a stage I activation. Activation was avoided because CRAF participants volunteered the airlift capability needed in fiscal year 2002. Although commitments to the CRAF program currently exceed requirements, this situation could change if some cargo carriers continue to be left out of the peacetime business and eventually decide to reduce or terminate their participation in the program. In our opinion, DOD cannot afford to lose CRAF participants, particularly in view of a new mobility requirements study and a potential increase in requirements. Furthermore, some cargo carriers stated that the CRAF B-747s are not flying with full loads and claimed that it would be less expensive to use smaller wide-body aircraft with lower per-mile costs. We obtained mission data and found that almost half of the 892 CRAF missions flown on B-747s in the first 10 months of fiscal year 2002 did not use all available space or weight capacities. These loads might have fit on smaller wide-body aircraft, which would have cost less to fly. B-747 aircraft are more expensive than other wide-body aircraft, such as the MD-11, which have lower per-mile operation costs. See table 3 for a cost comparison by plane type for a round-trip flight from Dover Air Force Base to Ramstein Air Force Base, Germany. Over 40 percent of these recent missions flown by B-747s did not utilize all the available pallet positions and carried less than 55.7 tons. In fiscal year 2002, AMC officials used the 55.7-ton mark as a breakeven point—the point at which the per-pound cost that the customer pays to have the cargo shipped equals the B-747’s per-mile cost that AMC pays the carrier to fly the mission. 
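The 55.7-ton breakeven figure reflects a simple relationship: it is the load at which the per-pound charges the customer pays equal the per-mile rate AMC pays the carrier for the mission. A minimal sketch of that arithmetic, using hypothetical rates (the report supplies only the fiscal year 2002 B-747 result):

```python
def breakeven_tons(mission_miles, cost_per_mile, revenue_per_lb):
    """Load (in short tons) at which the customer's per-pound shipping
    charges exactly cover the per-mile cost AMC pays the carrier.
    All rate inputs are hypothetical; the report does not publish them."""
    breakeven_lb = (mission_miles * cost_per_mile) / revenue_per_lb
    return breakeven_lb / 2000.0  # convert pounds to short tons
```

A mission flown below this load costs AMC more than it recovers, which is why under-filled B-747 missions raise the question of substituting smaller wide-body aircraft with lower per-mile rates.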
We were unable to determine whether a smaller, more economical aircraft could have been used for these missions because, at the time we requested the data, DOD was not obtaining data on cargo volume. However, it has since begun to accumulate this information, which will help determine whether aircraft are flying at full capacity. Military port handlers assured us that DOD’s use of B-747 aircraft during peacetime would not decrease their capability to build and load different types of pallets on other types of aircraft, which AMC data show account for 62 percent of the CRAF wide-body cargo fleet, during wartime. They stated that they “frequently” build pallets and can use available templates for nonstandard shapes. When we questioned how effectively they could do this in the very first and most urgent phases of a conflict, they stated that during wartime, supplies such as ammunition and food are delivered in pallets that can be loaded directly aboard smaller wide-body planes. According to port officials, loading aircraft is easily accomplished once the pallets are built. Another incentive for passenger air carriers to participate in the CRAF program is annual government air passenger business under the General Services Administration’s City Pairs program. General Services Administration officials said that passenger air carriers have expressed dissatisfaction because they believe the program is too restrictive and does not allow them to manage aircraft capacity to generate the highest profit. However, the 2003 contract includes some changes that program officials believe will resolve many of the carriers’ concerns. The upcoming reevaluation of mobility requirements may increase the need for CRAF in the future. However, the last study did not consider some factors—such as the ability of commercial aircraft to carry different sized cargo—that, if included, could provide more accurate and realistic requirements. 
The last study also set a ceiling on the amount of cargo carried by CRAF that provided the needed flow of cargo into a theater and that DOD’s infrastructure could process efficiently. This figure needs to be revalidated so that the next mobility requirements study can provide decision makers accurate and helpful information on true needs and capabilities. There are strong indications that some major program participants are dissatisfied with their share of a key CRAF incentive, the opportunity to bid on peacetime mobility business, because DOD uses almost exclusively one type of aircraft for peacetime cargo missions. If they are unable to see some benefit from the incentive program, some participants might reduce or end their participation in the program. This could cause difficulties in meeting requirements at a time when participation in peacetime business or CRAF activation is crucial. DOD needs to study ways to expand the use of smaller wide-body aircraft to ensure an equitable distribution of the peacetime business and determine whether smaller wide-body aircraft could carry out a higher proportion of its peacetime missions as efficiently as, and possibly more economically than, the B-747 does. We recommend that the Secretary of Defense direct that the reevaluation of mobility requirements mandated by the Defense Planning Guidance include a more thorough study of CRAF capabilities, to include the types of cargo CRAF can carry and how many CRAF aircraft can land and be unloaded and serviced at military bases, and that the Air Mobility Command determine whether smaller wide-body aircraft could be used as efficiently and effectively as the larger B-747-type planes to handle the peacetime cargo business that DOD uses as an incentive for CRAF participants. In written comments on a draft of this report, DOD concurred with our recommendations.
However, DOD believed it would be more appropriate to ensure that ongoing study efforts be given greater emphasis and to require that any resulting reports specifically address our issues. We agree that these studies could address our first recommendation concerning a more thorough study of CRAF capabilities. In a subsequent discussion, a DOD official stated that DOD intends to perform an additional study that would address the second recommendation. DOD’s comments are presented in their entirety in appendix III. We used the MRS-05, DOD regulations, and discussions with officials at the U.S. Transportation and Air Mobility Commands, located at Scott Air Force Base, Illinois, to establish the aircraft and time frame requirements for the CRAF program. We obtained and reviewed data from and interviewed officials at the U.S. Transportation Command, the Air Mobility Command, and the Office of the Secretary of Defense, as well as representatives of six CRAF participants, which account for about 38 percent of the total CRAF aircraft commitment, to determine whether the participants could respond to an activation with the required number of aircraft and crews and in the required time frame. We also interviewed representatives of six CRAF participants, representing both passenger and cargo air carriers, to determine whether the incentives used to attract and retain program participants are effective. For clarification on the incentives and how they are used, we referred to DOD regulations and interviewed officials at the U.S. Transportation Command, the Air Mobility Command, and the General Services Administration. We analyzed AMC mission data to determine the capacity at which aircraft were flying. We met with officials at the 436th Aerial Port Squadron at Dover Air Force Base to discuss cargo and aircraft loading. We conducted our review between January and October 2002 in accordance with generally accepted government auditing standards.
We are sending copies of this report to the Secretary of Defense, the appropriate congressional committees, and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (757) 552-8100. See appendix III for major contributors. The Department of Defense (DOD) uses commercial carriers for two different kinds of peacetime airlift moves: The first (called fixed buy) is a set contract for “channel flights” made on a regular weekly schedule from U.S. bases to fixed points across Atlantic and Pacific routes. The second (called expansion buys) includes airlift bought after the start of the fixed buy contract to support exercises, contingencies, special airlift assignment missions, and growth in channel requirements. From fiscal years 1992 through 1997, DOD contracts for commercial passenger and cargo business averaged over $611 million a year. From fiscal years 1998 through 2001, contracts increased to an average of almost $640 million a year. In fiscal year 2002, contracts increased significantly to almost $1.3 billion, which Air Mobility Command officials attributed to missions flown in support of Operation Enduring Freedom, the operation in Afghanistan. (See table 4.) In addition to those named above, Lawrence E. Dixon, Patricia Lentini, Stefano Petrucci, and Kenneth Patton made key contributions to this report.
In the event of a national emergency, the Department of Defense (DOD) can use commercial aircraft drawn from the Civil Reserve Air Fleet to augment its own airlift capabilities. The Civil Reserve Air Fleet is a fleet of aircraft owned by U.S. commercial air carriers but committed voluntarily to DOD for use during emergencies. After the terrorist attacks of September 11, 2001, many air carriers experienced financial difficulties. This sparked concern about the fleet's ability to respond, if activated, and prompted the Subcommittee to ask GAO to determine whether the fleet could respond to an activation with the required number of aircraft and crews and in the required time frame. The Subcommittee also wanted to know whether the incentives used to attract and retain participants are effective. Civil Reserve Air Fleet participants can respond to an emergency or a war with the required number of aircraft and crews and within the required time frame. Currently, there are more aircraft committed to the fleet than are needed to fill the wartime requirements identified in the DOD Mobility Requirements Study 2005, which determined the requirements to fight and win two major theater wars. However, Civil Reserve Air Fleet requirements may increase the next time mobility requirements are studied. The last mobility requirements study was limited in that it did not consider the use of excess Civil Reserve Air Fleet capacity and the ability of some commercial aircraft to carry larger cargo than standard-sized pallets. The incentives currently in place to encourage participation in the program, especially the incentive to participate in DOD's peacetime business, might be losing effectiveness and could become disincentives in the future. Some participants are not able to bid on peacetime cargo business because their fleets do not include B-747s, the predominant aircraft DOD uses for peacetime cargo missions.
GAO found that B-747s carried out 94 percent of 946 missions flown by commercial aircraft in the first 10 months of fiscal year 2002. Furthermore, over 40 percent of recent missions did not use all available space or weight limits aboard B-747s. These missions might have been carried out less expensively with smaller wide-body aircraft. Using smaller aircraft would provide more peacetime business to a greater share of program participants, thus enhancing current incentives. However, the Air Force does not have sufficient management information to identify options for selecting the best available aircraft type for the mission.
A major goal of Customs is to prevent the smuggling of drugs into the country by attempting to create an effective drug interdiction, intelligence, and investigation capability that disrupts and dismantles smuggling organizations. Although Customs inspectors have the option to conduct examinations of all persons, cargo, and conveyances entering the country, the inspectors may selectively identify for a thorough inspection those that they consider high risk for drug smuggling. This identification is generally done through the use of databases available to Customs, such as TECS. TECS is designed to be a comprehensive enforcement and communications system that enables Customs and other agencies to create or access lookout data when (1) processing persons and vehicles entering the United States; (2) communicating with other computer systems, such as the Federal Bureau of Investigation’s National Crime Information Center; and (3) storing case data and other enforcement reports. In addition to Customs, TECS has users from over 20 different federal agencies, including the Immigration and Naturalization Service; the Bureau of Alcohol, Tobacco and Firearms; the Internal Revenue Service; and the Drug Enforcement Administration. The TECS network consists of thousands of computer terminals that are located at land border crossings along the Canadian and Mexican borders; sea and air ports of entry; and the field offices of Customs’ Office of Investigations and the Bureau of Alcohol, Tobacco and Firearms. These terminals provide access to records and reports in the TECS database containing information from Customs and other Department of the Treasury and Department of Justice enforcement and investigative files. According to the TECS user manual, all TECS users (e.g., Customs inspectors and special agents) can create and query subject records, which consist of data on persons, vehicles, aircraft, vessels, businesses or organizations, firearms, and objects. 
According to TECS Data Standards, records should be created when the subject is deemed to be of law enforcement interest. This interest may be based on previous violations, such as drug smuggling; on suspicion of violations; or on subjects that are currently or potentially of investigative interest. One of the reasons for creating a TECS lookout record is to place a person or vehicle in the system for possible matching at Customs’ screening locations, such as land border ports of entry. For example, if a vehicle’s license plate that was placed on lookout for possible drug smuggling were later matched during a primary inspection at a land border port of entry, that vehicle could be referred for additional scrutiny at a secondary inspection. Inappropriate deletions of TECS lookout records could negatively affect Customs’ ability to detect drug smuggling. Although inspectors have the option to conduct a thorough examination of all persons, cargo, and conveyances entering the country, they selectively identify for a thorough inspection only those that they consider high risk for drug smuggling. This identification is generally done through the use of databases available to Customs, such as TECS. Inspectors also rely on their training and experience to detect behavior that alerts them to potential drug violators. If lookout records have been inappropriately deleted, inspectors will have less information or less accurate information on which to base their decisions. The TECS administrative control structure consists of a series of System Control Officers (SCO) at various locations, including Customs headquarters, CMCs, and ports around the country. These SCOs are responsible for authorizing and controlling TECS usage by all of the users within the network.
A national SCO has designated other SCOs at Customs headquarters for each major organization (e.g., Office of Investigations, Field Operations, and Internal Affairs) who, in turn, have designated regional SCOs who have named SCOs at each CMC and Office of Investigations field office. In some instances, SCOs have been appointed at the port of entry and Office of Investigations suboffice level. Consequently, the SCO chain is a hierarchical structure, with each user assigned to a local SCO who, in turn, is assigned to a regional SCO, and so on up to the national level. One of an SCO’s primary duties is to establish User Profile Records on each user. User Profile Records identify the user by name, social security number, position, duty station, and telephone number. They also identify the social security number of the user’s supervisor, the SCO’s social security number, and the TECS applications that the user is authorized to access. SCOs at the various levels have certain system authorities they can pass on to other users. For example, the record update level is a required field in the User Profile Record that indicates the user’s authority to modify or delete records. SCOs can assign to a user only the level that they themselves hold, or a lower level. According to the TECS user manual, record update levels include the following:
1. Users can modify or delete only records they own (i.e., the user created the records or received them as a transfer from the previous owner).
2. Users can modify or delete any record within their specific Customs sublocation, such as a port of entry, thereby ignoring the ownership chain; the user does not have to be the owner of the record.
3. Users can modify or delete any record owned by anyone in their ownership chain.
4. Users can modify or delete any record in the Customs Service, thereby ignoring the ownership chain.
5. Users have a combination of levels two and three.
9. Users can modify or delete any user’s record in the database.
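The record update levels amount to an access-control predicate. The following sketch models the deletion rules as the manual excerpt describes them; the dictionary fields and the boolean flags standing in for the sublocation and ownership-chain checks are illustrative assumptions, not TECS's actual implementation.

```python
def can_delete(user, record, same_sublocation, in_ownership_chain):
    """True if `user` may delete `record` under the update levels described
    in the TECS user manual excerpt. Field names are assumptions."""
    if record["owner"] == user["id"]:
        return True                    # owners may always delete their own records
    level = user["record_update_level"]
    if level == 9:
        return True                    # any user's record in the database
    if level == 4:
        return True                    # any record in the Customs Service
    if level == 5:
        # combination of levels two and three
        return same_sublocation or in_ownership_chain
    if level == 3:
        return in_ownership_chain      # records owned by anyone in the chain
    if level == 2:
        return same_sublocation        # any record in the same sublocation
    return False                       # level 1: own records only
```

As the predicate makes plain, any holder of level 4 or 9 can delete another user's record unilaterally, with no concurrence step, which is the separation-of-duties concern the internal control standards raise.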
According to Customs TECS officials, when a TECS user creates a record and enters it into the system, the user’s supervisor is automatically notified of the entry. All records must be viewed by the supervisor. The supervisor must approve the record, and the record must be linked to supporting documentation, such as a Memorandum of Information Received. According to the TECS user manual, TECS users can modify and delete records that they own, and on the basis of the record update level in their User Profile Record, may modify and delete the records of other users as follows: If the users are supervisors or SCOs with the proper record update level (three or five), they may modify and delete the records owned by users in their supervisory or SCO chain. If the users’ record update level (two, four, or five) allows, they may modify and delete the records created or owned by other users in a specific Customs sublocation, such as a port of entry. No other controls or restrictions are written in the TECS user manual or any other document that we reviewed. The Federal Managers’ Financial Integrity Act of 1982 required, among other items, that we establish internal control standards that agencies are required to follow (see 31 U.S.C. 3512). The resulting Comptroller General’s standards for internal controls in the federal government contain the criteria we used to assess Customs’ controls over the deletion of lookout records from TECS. During our review, we identified three areas of control weakness: separation of duties, documentation of transactions, and supervision. The Comptroller General’s internal control standards require that key duties and responsibilities in authorizing, processing, recording, and reviewing transactions should be separated among individuals. To reduce the risk of error, waste, or wrongful acts or to reduce the risk of their going undetected, no one individual should control all key aspects of a transaction or event. 
Rather, duties and responsibilities should be assigned systematically to a number of different individuals to ensure that effective checks and balances exist. Key duties include authorizing, approving, and recording transactions and reviewing or auditing transactions. Customs’ current policy authorizes a wide variety of people within and outside of an individual’s supervisory and SCO chain to individually delete the records that another individual owns without any checks and balances (e.g., concurrence by another person). This situation increases risk because, as one SCO that we interviewed told us, the more individuals—supervisors, SCOs, or others—with the required record update levels there are, the more vulnerable TECS is to having records inappropriately altered or deleted. According to the TECS user manual, supervisors, SCOs, and other users with the proper record update level may delete TECS records that they do not own. Moreover, we noticed a range in the number of individuals who were authorized to individually delete others’ records at the three CMCs and three ports we visited. For example, the Southern California CMC had 1 official—the SCO—with the authority to delete others’ records, while the Arizona CMC had 41 individuals—supervisors, SCOs, and others—with that authority. In addition, 1 of the ports we visited (Nogales) had 22 individuals with the authority to delete any record within their port without the record owner’s or anyone else’s permission. In these instances, many individuals, by virtue of their status as a supervisor or SCO or because they possessed the required record update level, were able to delete records with no checks and balances in evidence. The Comptroller General’s standards require that internal control systems and all transactions and other significant events are to be clearly documented, and that the documentation is to be readily available for examination. 
Documentation of transactions or other significant events should be complete and accurate and should facilitate tracing the transaction or event and related information from before it occurs, while it is in process, to after it is completed. Neither Customs policies nor the TECS user manual contained standards or guidance to require that Customs officials document reasons for the deletion of TECS lookout records. Although TECS can produce detailed information on what happened to records in the system and when it happened, there is no requirement that the person deleting the record is to describe the circumstances that called for the deletion. Thus, examiners cannot determine from the documentation whether the deletion was appropriate. The Comptroller General’s standards require that qualified and continuous supervision is to be provided to ensure that internal control objectives are achieved. This standard requires supervisors to continuously review and approve the assigned work of their staffs, including approving work at critical points to ensure that work flows as intended. A supervisor’s assignment, review, and approval of a staff’s work should result in the proper processing of transactions and events, including (1) following approved procedures and requirements; (2) detecting and eliminating errors, misunderstandings, and improper practices; and (3) discouraging wrongful acts from occurring or recurring. Customs had no requirement for supervisory review and approval of record deletions, although supervisory review and approval were required for creating TECS records. TECS officials told us that users could delete records that they own without supervisory approval. In addition, anyone with a higher record update level than the record owner, inside or outside of the owner’s supervisory and SCO chain, could also delete any owner’s record without obtaining approval. 
TECS lookout records can provide Customs inspectors at screening areas on the Southwest border with assistance in identifying persons and vehicles suspected of involvement in drug smuggling. Internal control weaknesses over deletions of the records may compromise the value of these tools in Customs’ anti-drug smuggling mission. Most of the CMCs and ports we reviewed had many individuals who were able to delete TECS records without any checks and balances, regardless of whether they owned the records or whether they were in an authorized supervisory or SCO chain of authority. In addition, Customs’ current policy gives a wide variety of people within and outside of an individual’s chain of authority the ability to delete records that other individuals created. The more people inside or outside of the supervisory or SCO chain of authority who can delete records without proper checks and balances, the more vulnerable the records are to inappropriate deletions. Although our review was limited to Customs headquarters, three CMCs, and three ports of entry, the lack of systemwide (1) internal control standards concerning deletion authority and (2) specific guidance concerning the deletion of TECS records that complies with the Comptroller General’s standards for internal controls means that TECS lookout records may not be adequately safeguarded in other CMCs and other ports of entry as well. To better ensure that TECS lookout records are adequately safeguarded from inappropriate deletion, we recommend that the Commissioner of Customs develop and implement guidance and procedures for authorizing, recording, reviewing, and approving deletions of TECS records that conform to the Comptroller General’s standards. These procedures should include requiring supervisory review and approval of record deletions and documenting the reason for record deletions.
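The recommended procedures can be sketched as a simple two-person workflow: a deletion takes effect only after the requester documents a reason and a different person approves it. This is a hypothetical illustration of the checks and balances, not an actual Customs design; all class and method names are invented:

```python
# Illustrative sketch of the recommended deletion controls: a documented
# reason plus second-person approval. Hypothetical design, not Customs'.
from dataclasses import dataclass
from typing import Optional


@dataclass
class DeletionRequest:
    record_id: str
    requester: str
    reason: str                     # documented justification for the deletion
    approved_by: Optional[str] = None


class DeletionControl:
    def __init__(self):
        self.pending = {}           # record_id -> DeletionRequest awaiting approval
        self.audit_log = []         # approved deletions, retained for examination

    def request_deletion(self, record_id: str, requester: str, reason: str):
        # Documentation standard: every deletion must carry a stated reason.
        if not reason.strip():
            raise ValueError("a documented reason for the deletion is required")
        self.pending[record_id] = DeletionRequest(record_id, requester, reason)

    def approve_deletion(self, record_id: str, supervisor: str) -> DeletionRequest:
        req = self.pending.pop(record_id)
        # Separation of duties: the requester may not approve the deletion.
        if supervisor == req.requester:
            raise ValueError("requester cannot approve their own deletion")
        req.approved_by = supervisor
        self.audit_log.append(req)  # record kept for later review
        return req
```

The audit log addresses the documentation weakness (examiners can later determine whether a deletion was appropriate), while the requester/approver check addresses separation of duties and supervision.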
The Treasury Under Secretary for Enforcement provided written comments on a draft of this report, and the comments are reprinted in appendix I. Overall, Treasury and Customs management generally agreed with our conclusions. Treasury officials also provided technical comments, which have been incorporated in the report as appropriate. Customs has begun action on our recommendation. Customs recognized that there is a systemic weakness in not requiring supervisory approval for the deletion of TECS records and not requiring an explicit reason for the deletion of these records. Customs agreed to implement the necessary checks and balances to ensure the integrity of lookout data in TECS. We are providing copies of this report to the Chairmen and Ranking Minority Members of House and Senate committees with jurisdiction over the activities of the Customs Service, the Secretary of the Treasury, the Commissioner of Customs, and other interested parties. Copies also will be made available to others upon request. The major contributors to this report are listed in appendix II. If you or your staff have any questions about the information in this report, please contact me on (202) 512-8777 or Darryl Dutton, Assistant Director, on (213) 830-1000.
Brian Lipman, Site Senior
The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC
Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537.
Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the internal control techniques the Customs Service has in place to safeguard certain law enforcement records in the Treasury Enforcement Communications System (TECS) from being inappropriately deleted. GAO noted that: (1) Customs did not have adequate internal controls over the deletion of TECS lookout records; (2) standards issued by the Comptroller General require that: (a) key duties and responsibilities in authorizing, processing, recording, and reviewing transactions should be separated among individuals; (b) internal control systems and all transactions and other significant events should be clearly documented; and (c) supervisors should continuously review and approve the assigned work of their staffs; (3) however, guidance on TECS does not require these safeguards and Customs officials at the three ports GAO visited had not implemented these controls; (4) as a result, Customs employees could inappropriately remove lookout records from TECS; and (5) although GAO's review was limited to Customs headquarters, three Customs Management Centers, and three ports of entry, because of the lack of adequate systemwide internal control standards over deletion authority, it is possible that TECS lookout records may not be adequately safeguarded in other ports of entry as well.
Geriatric assessment, defined as the skillful gathering of information about an elderly person’s health, needs, and resources, is a potentially useful component of any program for frail elderly clients needing home and community-based long-term care. Such assessment is especially relevant to multiservice programs that pay for a wide variety of services, such as the Medicaid waiver programs found in 49 states. These programs are authorized by the Social Security Act, which allows for the waiver of certain Medicaid statutory requirements to enable states to cover home and community-based services as an alternative to client institutionalization. Such waivers, however, need not be statewide and can specifically target selected groups of individuals (for example, the elderly). The home and community-based services must be furnished in accordance with a plan of care approved by the State Medicaid Agency. The instruments used to determine the level of care, the qualifications of those using these instruments, and the processes involved in assessment are systematically reviewed and must be approved by the administrative staff of the Medicaid program. These controls on the tools, personnel, and processes involved in establishing program eligibility are likely to benefit the care planning process. However, relatively little is known about the assessments used by waiver programs to develop care plans for the elderly, how they are used, what they cover, how they are administered, and the qualifications of those who administer them. The elderly clients who apply for home and community-based care usually undergo cycles of assessment. Depending upon each client’s assessment, the program determines the services that should be delivered to the client over a period of time, utilizing a clinical decision-making process that results in a plan of care.
Care planning processes vary among and within the states, and there is no single agreed-upon way to translate the results of assessment into a care plan. However, without good care planning, even the best assessment may not be helpful in achieving the most appropriate services for clients. Starting from this plan, program personnel (or personnel contracted by the program) directly authorize appropriate services and, when services are not available through the waiver program, may provide information to the client on how those services might be obtained. As the client’s needs for services change or a specified period of time passes, program personnel reassess the needs and adjust the care plan accordingly. Each state Medicaid waiver program for the elderly has the freedom to develop and adopt its own assessment instrument with no specific federal guidelines for content or process of administration. Most of the information gathered by these instruments falls under one of six broad domains, which are recommended by experts in geriatric assessment and found in most of the published instruments developed to assess the frail elderly. They are: (1) physical health, (2) mental health, (3) functioning (problems with daily activities), (4) social resources, (5) economic resources, and (6) physical environment. To the extent that these domains are included, the instrument can be thought of as comprehensive. The completion of the assessment instrument is often based on one or more interviews between the client and the assessor. Information from other sources, such as medical records or interviews with family members, may also be included. Regardless of its formal elements, the entire assessment process must be skillfully coordinated by the assessor or assessors involved. 
This is necessary to maximize the useful information obtained within the limits set by the capacities of the elderly clients being served and their understandable preference to “tell their stories” as they choose. We conducted a literature review on assessment instruments; interviewed experts in geriatric assessment and state and local officials; and visited several state Medicaid programs (California, Oregon, and Florida). From the exhaustive literature review and interviews with the nationally recognized experts identified through the literature, we learned about good practices in geriatric assessment. (See appendix I for a list of experts.) From officials and visits to state programs, we learned about the goals, procedures, and difficulties of assessment in the field and gathered information to help inform our data collection. We then conducted a survey of all 50 states and the District of Columbia about their assessment instruments for the Medicaid waiver programs that provide the elderly with multiple services (in some places referred to as elderly and disabled waiver programs). We asked the head of each waiver program (or the most appropriate staff) to complete a questionnaire and send us a copy of their assessment instruments used to develop the care plans of elderly clients. The questionnaire requested two kinds of information: (1) general information about the program and (2) detailed information about the assessment instrument or instruments used to develop the clients’ care plans, the assessment and care planning processes, and training and educational requirements of the assessors. After an extensive developmental process, we pretested the questionnaire in two states and incorporated necessary changes suggested by state officials. We then mailed the questionnaire to all states and gathered information between July 1994 and January 1995. 
The District of Columbia and Pennsylvania indicated that they did not have Medicaid waiver programs for the elderly and, therefore, were excluded from our sample. The 49 states with Medicaid waiver programs all responded to our questionnaire. We conducted our work in accordance with generally accepted government auditing standards. All 49 states reported to us that they use an assessment instrument to determine the care plan for each client, including the identification of needed services available both through the waiver program and outside the program. In addition, 43 states use the assessment to determine an elderly person’s functional eligibility for the waiver program (level of care), and 31 states use part of the instrument as a preadmission screen for possible nursing home care. The programs rely upon several types of information to develop care plans, including client’s preference, clinical impression, assessment scores, caregiver’s preference, budgetary caps, and medical records. Most programs use the assessor’s clinical impression, based on the assessment interview, and any scores or ratings generated by the assessment process most or all of the time. (See table 1.) Forty-eight of the programs told us that they “almost always” or “most of the time” provide clients with information about providers from whom they can get services not offered by the waiver program; 45 states provide them with referrals to such services; 35 provide them with assistance in obtaining these services; and 34 of the programs follow up with clients to verify that the nonwaiver services have been obtained. It should be noted that some of these nonwaiver services may also be Medicaid-funded, such as home health care provided by Medicaid.
We found that although all instruments gather some information on the broad domains of physical health, mental health, and functioning, not all of them cover the other three domains of a comprehensive assessment of an elderly person (84 percent cover social resources, 69 percent cover economic resources, and 80 percent cover physical environment). Within each of the six domains, certain specific topics are covered by a number of instruments. We found that all state instruments consistently gather information on assistance with activities of daily living (for example, bathing, toileting, and dressing). Table 2 shows the relative frequency of occurrence of any coverage whatsoever for each domain and for each topic found in 10 percent or more of the instruments. This list of topics does not represent an accepted standard. Different topics within a domain may yield similar or equivalent information. There may be other topics, not listed, that can also contribute to comprehensive assessment, and for some clients, skillful probing by assessors may be needed to obtain important contextual information not listed on any assessment form. It should also be acknowledged that, in particular instances, selected topics missing from instruments do not imply that states are not informed about these topics. Such information may be available from other sources. Also, the nature of the program or characteristics of the population may make certain information less relevant. For example, the financial eligibility rules of some states may obviate the need to ask about all the topics in the economic resources domain. Repeating such topics in the assessment would make it unreasonably burdensome for the clients as well as for those programs with relatively limited resources (staff, time, or money).
Less comprehensive instruments should be evaluated in the context of their particular programs to determine if sufficient information is collected about the client’s physical and mental health, functional status, social and economic supports, and home environment to develop an appropriate care plan. We found that although most assessments are conducted as face-to-face interviews, only 35 percent of the instruments specify the wording of any of the interview questions that assessors ask the clients. Further, when the wording is not specified, it is often unclear in what order different elements of information are to be gathered. Instruments with specified wording, however, are usually designed to gather information in a particular order. This lack of uniformity in instrument administration may lead to unnecessary variation in how different clients perceive, and therefore respond to, requests for “the same information.” For example, some replies to questions about depression may differ depending on whether they are asked before or after questions about physical health. Also, questions about activities of daily living, such as bathing, may evoke different replies depending on whether the client is asked if he or she “can bathe” or “does bathe.” Although there may be no universally agreed-upon “correct” wording for such items, once such a wording is decided upon, there may be benefits to employing it consistently within a given program. We found that 53 percent of the programs using a single assessor mention a years-of-experience requirement, and 57 percent of the programs using a team of two assessors mention this requirement for their lead assessor (for the second assessor, it is 50 percent). Moreover, most states require assessors to possess specific professional credentials. Thus, programs attempt in various ways, such as by the adoption of hiring (or contracting) and training standards, to ensure that assessors perform their job competently. 
However, no particular background or training requirements can guarantee optimal assessment for all clients. We found that only 31 percent of the programs require training the assessor in how to use the instrument, although such training may be obtained without a requirement. Assessors who are not similarly trained in the use of the instrument, regardless of their credentials or other training, may not respond uniformly to common occurrences, such as a client’s fatigue or a request to clarify a question. Assessors may administer the same instrument differently, even with standardized order and wording of the questions, based on differences in clinical training or experience in other situations. In light of the observed variability in waiver program assessments—with respect to instrument content, instrument standardization, and assessor requirements—the experts we consulted and the literature in gerontology make the following suggestions for improvement: First, a number of topics, such as those listed in table 2, have proved useful in assessing the elderly. Programs that do not cover a wide variety of these can increase the comprehensiveness of their assessments by including more of these topics. Second, standardizing the wording and order of questions generally increases the comparability of the clients’ replies. Finally, another important element in achieving uniformity of instrument administration is assessor training in use of the instrument. We have drawn three conclusions about the assessment instruments and their administration. First, we found that although all states use assessment instruments to develop a care plan, there is variation in their level of comprehensiveness. Second, we found that although most assessments are conducted as face-to-face interviews, many state instruments do not have standardized wording. 
Third, we found that although training in the administration of the instrument may be important in achieving uniformity of administration, many states do not require such training. The Health Care Financing Administration provided written comments on a draft of this report. (See appendix II.) The agency did not disagree with our findings, but listed some circumstances that help clarify variations across states. Specifically, they noted that waiver programs are frequently administered by different state agencies, which not only bring different perspectives to the assessments, but also use them for a variety of different purposes and may use more than one instrument. Through our state survey, we also found that some states use multiple assessment instruments, and some use them for multiple purposes. In oral comments on our draft report, responsible agency officials made some technical comments. We have incorporated these into the text where appropriate. As discussed with your office, we will be sending copies of this report to the Subcommittee Chairman, to other interested congressional committees and agencies, and to the Department of Health and Human Services and the Health Care Financing Administration. We will also send copies to others who request them. If you or your staff have any questions about this report, please call me or Sushil K. Sharma, Assistant Director, at (202) 512-3092. The major contributors to this report are listed in appendix III.
Kathleen C. Buckwalter, Ph.D., University of Iowa
Robert Butler, M.D., Mount Sinai Medical Center, N.Y.
Donald M. Keller, Project Manager
Venkareddy Chennareddy, Referencer
We wish to acknowledge the assistance of R.E. Canjar in collecting and organizing the data and Richard C. Weston in ensuring data quality.
Pursuant to a congressional request, GAO reviewed how publicly funded programs assess the need for home and community-based long-term care services for the poor disabled elderly, focusing on the: (1) comprehensiveness of the assessment instruments; (2) uniformity of their administration; and (3) uniformity of training for staff who conduct the assessments. GAO found that: (1) all 49 states reviewed use an assessment instrument to determine the long-term care needs of the poor disabled elderly and some also use them for other eligibility determinations; (2) 48 of the programs provide information to their clients about services not covered and most give referrals and assistance to obtain those services; (3) all of the assessment instruments covered physical and mental health and functional abilities of the disabled elderly, but inclusion of their social resources, economic resources, and physical environment ranged from 69 percent to 84 percent; (4) dependence on assistance with daily living activities was the only specific topic included in all instruments; (5) most assessments use face-to-face interviews, but only a minority of them specify the wording of questions; (6) most programs have experience and professional credential requirements for their assessors, but most programs do not require standardized training; and (7) experts believe that assessment instruments could be improved by including more topics, standardizing the wording and order of questions, and training assessors in use of the instruments.
The programs discussed in this report are very diverse. The various programs we discuss were created at different times, to serve different populations, and in response to different policy issues (see box on next page). Programs also vary greatly in terms of how they are structured and funded. In addition, programs are administered through a varying combination of federal, state, and local agencies, and sometimes private organizations. Some programs require state or local agencies to contribute a share of nonfederal funds, while others are entirely federally funded. Federal funding structures for low-income programs also vary. For instance, programs may be funded through program authorization acts (mandatory spending) or through appropriations acts (discretionary spending). Spending for these programs may be indefinite (in that there is no pre-determined ceiling and federal payments will be made for all eligible recipients for eligible expenses) or definite (in that the law limits the amount of federal spending). Tax expenditures—such as tax credits, deductions, or exclusions—are generally measured as the estimated reduction in tax revenue and are generally considered separately from other federal spending, with the exception of some refundable tax credits in which credit in excess of tax liability results in a cash refund.
Examples of Low-Income Programs Established over Time
1930s- Great Depression and the New Deal: Major social insurance programs (not discussed in this report) were created to protect workers against old age and unemployment. Assisted housing programs, such as public housing, also started during this time.
1960s- The War on Poverty: Various programs were created aimed at educating low-income children, youth, and adults to help address the causes of poverty (e.g., Head Start, Job Corps, aid to help low-income students in elementary and secondary schools).
The Food Stamp Program (now known as the Supplemental Nutrition Assistance Program (SNAP)), which had been a pilot program, was made permanent. Medicare (another social insurance program) and Medicaid were also established.
1970s- Welfare reform proposed, EITC created: Due to rising caseloads of recipients of Aid to Families with Dependent Children (AFDC), which provided cash assistance to low-income families, reform was proposed, but did not occur. However, major changes to other programs occurred. Aid to low-income individuals who were aged, blind, or had a disability evolved into a federally-run program: Supplemental Security Income. Section 8 rental housing assistance was established, as was the Earned Income Tax Credit (EITC).
1980s- Tax reform and promotion of work: EITC and Medicaid were expanded. The Tax Reform Act of 1986 removed federal income taxes for many of the working poor, and the Family Support Act of 1988 was passed to encourage work among AFDC recipients.
1990s- Decentralization and welfare reform: AFDC was replaced with Temporary Assistance for Needy Families (TANF), a block grant to states that emphasizes work and time-limited cash assistance and gives states wide discretion on how to use TANF funds, including for various noncash services.
2000s- Great Recession, federal stimulus, healthcare reform: In response to the recession, the American Recovery and Reinvestment Act of 2009 expanded federal spending for low-income aid, particularly for SNAP and Medicaid. The Patient Protection and Affordable Care Act expanded Medicaid eligibility (although a Supreme Court decision subsequently made Medicaid expansion an option for states) as well as established new refundable tax credits for lower-income households to subsidize their purchase of private health insurance on health insurance exchanges.
The official measure used today to provide information on how many people are “in poverty” in the United States was developed in the 1960s, based on the cost of food at that time. The official poverty thresholds— the income thresholds by which households are considered to be in poverty depending on their size—are updated annually by Census to reflect current prices. HHS uses the official poverty thresholds to update the “federal poverty guidelines” each year, which are the basis for determining financial eligibility or funding distribution for certain low-income programs. The official poverty measure has not changed substantially since it was developed, and concerns about its inadequacies resulted in efforts to develop a new measure starting in 1990. For instance, the threshold for the official poverty measure (the income level that is used to determine who is “in poverty” each year) is based on three times the cost of food and does not take into account the cost of other basic necessities, such as shelter and utilities. Additionally, in determining a household’s income, the official measure considers cash income, but does not include additions to income based on the value of noncash assistance (e.g., food assistance) or reductions based on other necessary living expenses (e.g., medical expenses or taxes paid). A panel on poverty was established by the National Academy of Sciences and, later, an interagency technical working group suggested ways a new poverty measure could address some of these concerns. Based on these suggestions, Census, with support from the Bureau of Labor Statistics, developed the SPM in 2010. Each year since, Census has released annual poverty statistics on the SPM along with the official measure. The SPM did not replace the official measure, which is still used for determining federal poverty guidelines that could affect eligibility for some programs.
Instead, the SPM is primarily used as a research measure, designed to provide information on economic need at the aggregate level, nationally or within subpopulations or areas. The SPM differs from the official measure in various ways. In defining a family unit that shares resources, in addition to related individuals, the SPM household includes unrelated children cared for by the family (such as foster care children) and cohabiting unmarried partners (see table 1). The SPM also defines the threshold of need differently from the official measure. Also, in determining if a family has sufficient resources to meet necessary living expenses, it looks more holistically at a family’s resources and expenses (see fig. 1). Individuals or families whose household incomes are below 100 percent of the SPM threshold are considered to be in poverty based on current levels of need. We identified 82 federal programs, including several tax expenditures, that target low-income individuals, families, and communities to help them meet basic needs or provide other assistance. For 78 of these programs, fiscal year 2013 federal obligations totaled about $742 billion. This amount includes federal obligations for two tax expenditures: the ACTC and the refundable portion of the EITC. Four additional tax expenditures that assisted people with low income, plus the nonrefundable portion of the EITC, totaled an estimated $14 billion in reduced federal tax revenues for fiscal year 2013. These programs include those sometimes referred to as “public assistance” programs or “means-tested” programs, but are broader and more diverse than those terms imply. 
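As a rough illustration of the SPM determination described above, household resources can be compared with the threshold as follows (a sketch only, not Census’s actual methodology; the benefit categories, dollar amounts, and threshold shown are hypothetical):

```python
def spm_resources(cash_income, noncash_benefits, taxes_paid,
                  work_expenses, medical_expenses):
    """Simplified SPM resources: cash income plus the value of noncash
    benefits, minus necessary expenses (taxes, work, and medical costs)."""
    return (cash_income + sum(noncash_benefits.values())
            - taxes_paid - work_expenses - medical_expenses)

def in_spm_poverty(resources, threshold):
    # A unit is in poverty if resources fall below 100 percent of its threshold.
    return resources < threshold

# Hypothetical family: $20,000 in cash income plus noncash assistance,
# less necessary expenses, compared against an illustrative threshold.
resources = spm_resources(
    cash_income=20_000,
    noncash_benefits={"food assistance": 3_000, "housing assistance": 2_000},
    taxes_paid=1_000, work_expenses=1_500, medical_expenses=800,
)
print(resources)                           # 21700
print(in_spm_poverty(resources, 24_000))   # True
```

Under the official measure, by contrast, only the cash income would be counted, which illustrates how the two measures can classify the same household differently.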
For instance, while many of the programs, often referred to as public assistance or means-tested programs, help people with low incomes meet basic needs (income support, health care, food, housing, or utilities), some of the programs in this report provide other types of services, such as child care, services for children in foster care, or support services for older individuals. Other programs provide education assistance or employment and training support with the goal of helping disadvantaged individuals better independently support themselves. (See app. II for information from our survey on each program’s purpose and benefit or service provided.) Federal obligations for these low-income programs were concentrated in a few large programs (see fig. 2). Medicaid accounted for 39 percent of the fiscal year 2013 federal obligations for the programs we reviewed, followed by SNAP, the refundable portion of the EITC, and SSI. In total, these four programs comprised almost two-thirds (65 percent) of federal low-income obligations in fiscal year 2013, or about $480 billion. For some programs, states or other entities also contribute funding, which means billions more in nonfederal funds are spent on such programs. For example, state expenditures for Medicaid were $194 billion in fiscal year 2013, accounting for around 40 percent of total Medicaid expenditures. For TANF, state expenditures totaled almost $15 billion in fiscal year 2013, accounting for about 47 percent of total expenditures for the program. Social insurance programs, including Social Security Old-Age and Survivors Insurance (Social Security) and Medicare, are not included in the programs we reviewed because they are not targeted solely to those with low income. These programs are generally financed by contributions from workers and employers, and eligibility for benefits is determined, at least in part, on the basis of an individual’s work history. 
These programs are intended to more universally protect workers from lost wages and related benefits due to retirement, disability, or a temporary period of unemployment. Some of these programs are very large. For example, in fiscal year 2013, Social Security alone totaled $674 billion in obligations, which is equal to about 90 percent of the total in obligations for the 78 low-income programs (see fig. 3). The 10 largest low-income programs in terms of federal obligations accounted for about $600 billion in fiscal year 2013 (82 percent of obligations for 78 low-income programs) and served millions of people (see table 2). However, according to our survey, while these 10—and most of the other 72 programs—collect some information on numbers served, programs varied in how they track this information, making it difficult to compare information across programs or to know precisely how many people are helped overall. (In the next section, we provide an estimate of the overall number of recipients in selected programs.) As also shown in table 2, agencies reported the number served using different units (such as individuals, households, or tax returns) and a variety of time periods (annual, monthly; fiscal, calendar, school year; cumulative or point-in-time) for each program. See appendix III for information on federal obligations, number served, and time periods for all 82 programs. In addition to the $742 billion in obligations reported in our survey, in fiscal year 2013, the federal government incurred $14 billion in reduced tax revenues for the nonrefundable portion of the EITC and four other tax expenditures, according to estimates from the Department of the Treasury (Treasury) (see table 3). These selected tax expenditures directly or indirectly serve low-income people. For instance, the EITC goes directly to low-income people by lowering their taxes based on individual tax returns filed. 
The Low-Income Housing Tax Credit, on the other hand, goes to housing developers who provide a certain portion of housing units for low-income people. Based on our analysis of agency responses, most low-income programs target specific sub-populations and do not serve low-income people generally. Eligibility for a benefit or service can be based on being part of a target population. Broad population groups targeted by these programs include children or families with children, the elderly, people with some earnings, and students. Programs may target multiple groups, according to our survey. For example, the Child and Adult Care Food Program supports the provision of free or reduced-price meals and snacks to low-income children and low-income chronically impaired and elderly adults who are in nonresidential group care settings, such as day care homes or institutions. In addition, a number of low-income programs target narrower population groups, based on agency survey responses, such as veterans, disadvantaged youth, people who are homeless, Native Americans, migrants, refugees, or rural communities. These tend to be smaller programs in terms of dollars, according to our survey. (See table 4.) Although these programs serve many different populations, relatively few target groups account for a large portion of the spending. For example, almost two-thirds of the federal expenditures for Medicaid for fiscal year 2012, the most recent detailed data available, went to people with disabilities (42 percent) and elderly individuals (21 percent), according to HHS administrative data. Additionally, a recent CRS report examined spending amounts for the 10 largest low-income programs in fiscal year 2011 (the most recent available information at the time for analysis on target groups). CRS reported that federal spending for these 10 in 2011 was $623 billion and accounted for over 80 percent of spending for low-income programs that year. 
According to CRS analysis, which estimated spending across target groups primarily using program data, people with disabilities received almost a third of this amount, or $208 billion (primarily from Medicaid and SSI). Working families with children received the next largest share, about $170 billion, with the refundable tax credits accounting for a large portion. The elderly received $96 billion, with a large contribution from Medicaid and the low-income Medicare subsidy for prescription drugs. Less than 12 percent of the spending in fiscal year 2011 for the 10 largest programs went to low-income adults who were not working, elderly, or had a disability, according to CRS. For many programs, income limits based on the federal poverty guidelines determine eligibility, with income generally measured as gross income minus certain exclusions and deductions (such as certain child care expenses), although the income limits varied greatly among the programs and sometimes within a program. For example, to be eligible for the Community Service Employment for Older Americans program, individuals must be unemployed, age 55 or older, and have incomes no higher than 125 percent of the federal poverty guidelines. Within a program, different populations may have different limits. For instance, SNAP generally requires eligible households to have gross income no higher than 130 percent of the federal poverty guidelines, but households with members who are elderly or have a disability may have higher income limits. SNAP also applies net income and asset tests; a household’s countable resources (such as funds in a bank account) generally must be less than $2,000 (for most households). In general, households must meet all three tests to be eligible for SNAP. However, the specific financial eligibility criteria may vary, depending on the circumstances. For example, some households with a member who is elderly or has a disability are subject to different requirements. Nine programs used area median income to determine eligibility. The measure is based on specified percentages of median family incomes for states and metropolitan and nonmetropolitan areas within states. 
For example, in the Department of Housing and Urban Development’s (HUD) Section 8 Housing Choice Vouchers program, eligible families generally must have incomes no higher than 50 percent of area median income, and 75 percent of newly available vouchers each year must go to families with incomes no higher than 30 percent of area median income. In fiscal year 2013, according to information from HUD, the median family income for states for a family of four ranged from $48,300 (Mississippi) to $88,400 (Maryland), with variation between metropolitan and nonmetropolitan areas within states. Seven programs used specific dollar amounts as a threshold to determine eligibility. For example, in general, individuals receiving SSI in 2013 had to have monthly incomes no higher than $1,505 if their countable income was only from wages, and $730 if their countable income was not from wages. The two refundable tax credits are based, in part, on earned income and adjusted gross income. For example, in tax year 2013, working families with children that had annual incomes below $37,870 to $51,567—depending on filing status and the number of dependent children—may have been eligible for the EITC. Also, childless workers with incomes below $14,340 ($19,680 for a married couple) could have received a small EITC benefit. Depending on the program, income thresholds may be adjusted annually, for inflation or other factors. Three educational programs used a needs analysis to determine eligibility: Federal Pell Grants, Federal Work Study, and Federal Supplemental Educational Opportunity Grants. This analysis calculates the amount a family can be expected to contribute toward a student’s college costs and uses that amount to determine the student’s eligibility for aid. According to budget information from the Department of Education, about three-fourths of Pell Grant recipients in the 2012-2013 school year had annual incomes below $30,000. 
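Percentage-of-guideline eligibility rules such as SNAP’s can be sketched as follows. This is a simplified, hypothetical rendering: the guideline amounts approximate the 2013 federal poverty guidelines for the 48 contiguous states, and real SNAP rules include deductions, exemptions, waivers, and state options not modeled here.

```python
# Approximate 2013 annual federal poverty guidelines by household size
# (illustrative; actual program rules use official HHS figures).
POVERTY_GUIDELINE = {1: 11_490, 2: 15_510, 3: 19_530, 4: 23_550}

def snap_financially_eligible(size, gross_income, net_income, assets,
                              elderly_or_disabled=False):
    """Simplified sketch of SNAP's three financial tests."""
    guideline = POVERTY_GUIDELINE[size]
    asset_limit = 3_250 if elderly_or_disabled else 2_000
    net_ok = net_income <= guideline                # net income test: 100% of guideline
    asset_ok = assets < asset_limit                 # asset test
    if elderly_or_disabled:
        return net_ok and asset_ok                  # gross income test generally waived
    gross_ok = gross_income <= 1.30 * guideline     # gross income test: 130% of guideline
    return gross_ok and net_ok and asset_ok

print(snap_financially_eligible(4, 28_000, 22_000, 1_200))  # True
print(snap_financially_eligible(4, 40_000, 22_000, 1_200))  # False (fails gross test)
```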
Seven programs allow states or localities to determine financial eligibility criteria for individuals or households, generally within certain federal limits. For instance, federal law requires that families receiving cash assistance funded by the TANF block grant must have a minor child; however, states determine financial eligibility criteria and benefit amounts, and there is a large amount of variation among states. Three programs determined financial eligibility for individuals or households in other ways not captured above, according to agency survey responses. Specifically, for the Transitional Cash and Medical Services to Refugees, eligible participants include adult refugees, asylees, and other specified groups, who meet the income and asset tests for TANF or Medicaid, but who are not categorically eligible for those programs. The tax exclusion of cash public assistance benefits is dependent on the receipt of aid from public cash assistance programs. The Work Opportunity Tax Credit provides a tax credit to employers who hire people from certain specified disadvantaged groups, including certain recipients of SNAP, SSI, and TANF, among others. HHS publishes a compilation of state TANF policies and updates it each year. See HHS, Welfare Rules Databook: State TANF Policies as of July 2013, OPRE Report 2014-52 (Washington, D.C.: September 2014). Thirty-three programs target assistance to low-income communities, groups, or other entities, rather than individuals or households, based on agency survey responses. Twenty-five of these programs targeted or prioritized services to low- income groups, generally based on a measure of low-income. However, these programs may also serve people more broadly and not only those who are low-income. For example, funds for the Education for the Disadvantaged – Grants to Local Educational Agencies (Title I, Part A) program are allocated to school attendance areas and schools based on the number of children from low-income families. 
Depending on the percentage of low-income students in a school, schools funded by this program may serve all students, or must focus services on low-achieving students in the school. Eight programs that do not have a measure of low or limited income are included as low-income programs because they targeted special populations who tend to be disproportionately low-income or are presumed to be low-income (e.g., Native Americans or homeless individuals and families). (See app. IV for information on all programs by type of financial eligibility.) Among all of the programs identified, 11 provide for automatic eligibility (also referred to as categorical eligibility), according to our survey. Although specific eligibility requirements may vary, some programs allow automatic eligibility for people who have already qualified for another, specified income-tested program, or if they are a member of a specified target population. (See table 5 for a summary of our survey results.) In prior work, we have looked at automatic eligibility and similar provisions for programs, including SNAP, WIC, and the school meals programs. For example, in 2012 we looked at the prevalence of households receiving SNAP under expanded automatic eligibility rules, called “broad-based categorical eligibility.” Under these rules, states can allow households receiving noncash services funded by TANF (such as a toll-free number or brochure) to be automatically eligible for SNAP. States that adopt a broad-based categorical eligibility policy may increase limits on household income to up to 200 percent of federal poverty guidelines, and remove limits on assets for these households. In that report, we found that a relatively small percentage of households in 2010 were eligible for SNAP under broad-based categorical eligibility that would not have otherwise been eligible (under 3 percent). 
We also found that these households’ incomes were modestly higher (around 150 percent of federal poverty guidelines, instead of 130 percent). In addition to eligibility requirements related to income or target population, some programs impose work requirements (participants must be engaged in work or work-related activity in order to receive benefits or services) or time limits (program participation is limited to a specified period of time), although most do not, according to our analysis of agency survey responses. For three programs—TANF, SNAP, and Transitional Cash and Medical Assistance for Refugees—agencies reported both work requirements and time limits for at least a portion of program recipients, as follows: TANF requires states to engage a certain percentage of families with a work-eligible individual receiving cash assistance in specified work- related activities (such as job search and job readiness assistance) or face potential financial penalties. In general, TANF also limits federally-funded assistance for families with an adult member to 5 years. States may extend families beyond this 60-month period for reasons of hardship for up to 20 percent of their caseloads. Unless otherwise exempt, SNAP requires participants who are mentally and physically able to work and between the ages of 16 and 59 to work at least 30 hours per week, register for work, or participate in an employment and training program if assigned by the state SNAP agency. Additionally, able-bodied adults between the ages of 18 and 49 without dependents are limited to 3 months of SNAP benefits in a 36-month period, unless they work or participate in a work program for at least 20 hours per week. A large portion of SNAP participants are not, however, subject to these requirements. Many participants are exempt from the program’s work requirements because of age or disability. 
Also, the Department of Agriculture (USDA) has granted waivers to many states from the 3-month time limit in recent years due to low numbers of available jobs. Cash assistance under the Transitional Cash and Medical Services for Refugees Program is conditioned on the refugee registering with an employment agency or service, participating in available job training services, and accepting appropriate offers of employment. Both cash assistance and medical assistance are limited to 8 months, although other types of assistance for refugees may be available for a longer period of time, as described below. For prior work on refugees’ employment outcomes, see GAO, Refugee Assistance: Little Is Known about the Effectiveness of Different Approaches for Improving Refugees’ Employment Outcomes, GAO-11-369 (Washington, D.C.: March 31, 2011). For the purposes of this analysis, we excluded a few programs in which the agency responded that the program had a work requirement, but the program purpose or the program benefit or service was to provide some sort of employment opportunity, such as Federal Work Study. Our purpose was to include programs that in effect required a recipient to work or prepare for work in exchange for benefits or services not directly linked to work, such as food assistance, housing assistance, or supplemental income. A few other programs (such as Community Service Employment for Older Americans) specify a maximum length of time for receipt of assistance. Under two housing programs, there are time limits for providing temporary shelter (Homeless Assistance Grants and Housing Opportunities for Persons with AIDS). Also, refugees may receive various services, such as social adjustment services or citizenship and naturalization services, for up to 5 years under the Social Services and Targeted Assistance for Refugees Program. As a whole, the administration of these programs is complex and involves many different agencies and entities at the federal, state, and local levels. 
Thirteen federal agencies administer the 82 programs, with three-quarters of them overseen by HHS, HUD, Education, and USDA. A relatively small number of programs are entirely or mostly federally run (that is, these programs are direct benefits provided by federal agencies or are tax expenditures administered through the federal income tax system). These include some of the largest programs, such as SSI, the refundable tax credits, and Federal Pell Grants. For many other programs, various state and local agencies, and in some cases private entities, are involved in program administration and the provision of benefits and services. Additionally, at least 12 different congressional committees are responsible for program oversight. Based on this report and a review of our prior work, we identified several issues that pose difficulties for administering and overseeing this complex system of programs as well as efforts to address them. These issues are based on our prior reviews of specific low-income program areas and on our broader government-wide work. More specifically: In a 2011 testimony, we summarized our work that found the array of human services programs was too fragmented and overly complex—for clients to navigate, for program operators to administer efficiently, and for program managers and policymakers to assess program performance. We identified potential approaches to address these longstanding challenges, such as simplifying and streamlining policies and processes across programs, improving technology, and fostering innovation and evaluation to improve services and reduce costs. In our government-wide work on fragmentation, overlap, and duplication, we have recommended that certain agencies responsible for low-income program areas take actions, such as increased collaboration with other agencies and additional study, to help minimize administrative inefficiencies among multiple programs. Some of these recommendations have been addressed. 
See the box on page 36 for more information on our open recommendations in relevant areas. In our work on the role of evaluation in federal programs, we found that evaluations can help program administrators and policymakers understand what programs and practices are working and how to improve the use of scarce resources, yet federal agencies often do not evaluate their programs. For this report, we reviewed the efforts of federal agencies responsible for five of the largest programs— SNAP, SSI, TANF, EITC, and the Section 8 Housing Choice Vouchers program—to conduct or sponsor recent evaluations regarding participant outcomes. We found that for the four spending programs, agencies were engaged in recent evaluation efforts that focused on participant outcomes, including employment and self-sufficiency, food security, and family outcomes. Unlike the four spending programs we examined, Treasury officials said the agency does not conduct program evaluations related to program or policy outcomes on the EITC or any other tax expenditure. (See app. V.) In our previous reports on tax expenditures, we concluded that because tax expenditures are not evaluated for performance, it is difficult to evaluate their costs and benefits and the extent to which they meet intended policy goals. We have recommended that the Office of Management and Budget (OMB) set up a performance evaluation framework for tax expenditures. This recommendation has not been addressed. In a 2014 report assessing aspects of the GPRA Modernization Act of 2010, we concluded that the act’s requirement for OMB to publish on a central website a list (inventory) of all federal programs along with related budget and performance information would be useful for better government management. 
Such information could help decision makers determine the scope of the federal government’s involvement, investment, and performance in a particular area, as well as provide critical information that could be used to better address crosscutting issues, among other purposes. We recommended that OMB take several actions to improve the existing program inventory information to make it more useful for decision makers, such as including tax expenditures in the inventory and directing agencies to collaborate when defining and identifying programs that contribute to a common outcome. OMB generally agreed with most of these recommendations, but has not yet addressed them. GAO is statutorily mandated to identify and report annually to Congress on federal programs, agencies, offices, and initiatives—either within departments or government-wide—that have duplicative goals or activities. "Fragmentation" refers to those circumstances in which more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national need and there may be opportunities to improve how the government delivers these services. "Overlap" occurs when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. "Duplication" occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. In recent years, GAO has identified fragmentation, overlap, and duplication among some of the low-income programs reviewed in this report. See below for the areas identified, the focus of recommendations, and whether the recommended actions have been completely, partially, or not addressed. We also include the year the program area was first identified by GAO for fragmentation, overlap, or duplication. This information was last updated March 6, 2015. 
Training, Employment, and Education: Early Learning and Child Care. Greater coordination efforts across early learning and child care programs could mitigate the effects of program fragmentation, simplify children’s access to these services, collect the data necessary to coordinate operation of these programs, and identify and minimize any unwarranted overlap and potential duplication. Identified 2012; addressed.

Training, Employment, and Education: Employment and Training Programs. Providing information on colocating services and consolidating administrative structures could promote efficiencies. Identified 2011; addressed.

Social Services: Domestic Food Assistance. Multiple actions could reduce administrative overlap among domestic food assistance programs. Identified 2011; partially addressed.

Social Services: Housing Assistance. Examining the benefits and costs of housing programs and tax expenditures that address the same or similar populations or areas, and potentially consolidating them, could help mitigate overlap and fragmentation and decrease costs. Identified 2012; not addressed.

Social Services: Homelessness Programs. Better coordination of federal homelessness programs could minimize fragmentation and overlap. Identified 2011; addressed.

In 2013, 48.7 million people in the United States (15.5 percent of the population) lived in poverty according to the SPM, based on our analysis of Census data (see fig. 4). These people lived in households with incomes below the SPM poverty threshold, which measures whether they have sufficient resources to meet their basic needs, after taking into account government benefits and necessary expenses. The SPM threshold in 2013 for two adults and two children ranged from $21,397 to $25,639, depending on their housing situation, according to Census. In 2013, the SPM poverty rate was slightly higher than the official measure’s poverty rate of almost 15 percent. 
Compared with the official measure, the SPM showed more people with incomes in the 50 to 199 percent range and fewer people with incomes in the lowest and highest groups (see fig. 5). Various factors account for the differences in distribution. For instance, unlike the official measure, the SPM includes the value of certain noncash benefits and tax credits, which would increase household income. On the other hand, the SPM subtracts necessary living expenses, such as taxes paid, medical costs, or work expenses, which would reduce household income. The SPM also includes cohabitors (unmarried partners), who could affect income by bringing additional earnings and expenses into the household. Moreover, the poverty thresholds used by each measure—the income level necessary to avoid poverty—are different, so the same household could be considered below poverty under the SPM and above poverty under the official measure. Also, while Census data show that both measures had similar trends over time—with overall poverty rates falling slightly from 2010 to 2013—the poverty rates of sub-populations varied more. For example, under the SPM children had a lower rate of poverty and elderly individuals had a higher rate in 2013, compared to the official measure. Our analysis provides a point-in-time perspective and does not depict variation in people’s economic circumstances during the year or over multiple years, which may move households in and out of poverty. For instance, we looked at annual income and expenses for 2013, but household incomes may have fluctuated within that year. A 2014 Census report estimated that from 2009 through 2011, almost one-third of the population experienced poverty (based on the official measure) for at least 2 months; however, over 40 percent of these periods of poverty ended within 4 months. 
Additionally, poverty rates in 2013 may reflect some of the longer-term effects of the recent recession; more current data could reflect improved economic conditions. Poverty rates also vary among the states. For example, the SPM poverty rate ranged from a low of 8.7 percent (Iowa) to a high of 23.4 percent (California), using a 3-year average over 2011, 2012, and 2013 (see fig. 6). Individuals below the SPM poverty threshold lived in a variety of types of households, according to our analysis of household types using Census data (see fig. 7). We found that the highest rates of poverty (SPM) were among single parent households (30 percent) and households headed by a person with a disability (29 percent). However, the largest numbers of people below the SPM poverty line were in other types of households. About half were in households without children (14.3 million) or married households with children (10.4 million). This is in part because these two groups are the largest among the overall population. For this analysis, we categorized households into six mutually-exclusive types, as follows.

Headed by elderly persons: Households (with or without children) headed by a person who is 65 or over, regardless of whether he or she has a disability. The head of household may live alone, with a spouse, or with a cohabiting partner.

Headed by persons with disabilities: Households (with or without children) headed by a person under 65 with a disability. The head of household may live alone, with a spouse, or with a cohabiting partner. We used a Census Bureau definition of disability, which includes any serious difficulty hearing, seeing, concentrating/remembering/making decisions, walking/climbing stairs, dressing/bathing, or doing errands alone.

Without children: Households without children headed by a person under 65 without a disability. The head of household may live alone, with a spouse, or with a cohabiting partner. 
Married with children: Households with at least one child headed by a married person under 65 who does not have a disability.

Cohabiting with children: Households with at least one child headed by an unmarried person under 65 who has a cohabiting partner and does not have a disability.

Single parent: Households with at least one child headed by an unmarried person under 65 who does not have a disability or a cohabiting partner.

Households headed by a person who is elderly or has a disability may have children, but are not counted as a household with children for this analysis. According to our estimates, 7.2 percent (+/-0.5) of all children in the United States in 2013 were in these two household types. We relied on the U.S. Census Bureau’s Current Population Survey, Annual Social and Economic Supplement data to determine whether a household fell into a particular category. Because program definitions and eligibility requirements vary, these categories may not be used to determine eligibility for programs. Most people in poverty (SPM) lived in households with at least some earnings. About 31 million people, or almost two-thirds of those with incomes below the SPM threshold, were in households with earnings—defined as having at least one member who earned any income at some point during the year. Another 19 percent were in households without earnings in which the household head was elderly or had a disability. Of the remaining 19 percent without earnings, about half were in childless households. Poverty rates were much higher for those who did not work or worked less during the year. Figure 8 shows that among households headed by someone who was not elderly and did not have a disability, households without earnings experienced much higher poverty rates than those with earnings (62 percent versus 12 percent). Also, over one-third of those without earnings had incomes below 50 percent of the SPM threshold. Our data do not distinguish the amount of time people worked. 
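The six mutually exclusive household types defined above amount to a set of priority-ordered classification rules, which can be sketched as follows (an illustration of the logic in the text, not the actual survey coding):

```python
def household_type(head_age, head_has_disability, has_children,
                   head_married, head_cohabiting):
    """Classify a household into one of six mutually exclusive types,
    checking elderly status first, then disability, then children and
    marital status, per the definitions in the text."""
    if head_age >= 65:
        return "headed by elderly person"        # regardless of disability
    if head_has_disability:
        return "headed by person with disability"
    if not has_children:
        return "without children"
    if head_married:
        return "married with children"
    if head_cohabiting:
        return "cohabiting with children"
    return "single parent"

print(household_type(70, True, True, False, False))   # "headed by elderly person"
print(household_type(40, False, True, False, False))  # "single parent"
```

The ordering matters: a household headed by an elderly person who has children falls into the elderly category, not a with-children category, consistent with the note above.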
However, Census analysis of SPM data for people aged 18 to 64 who worked at least 1 week in 2013 shows that the poverty rate (SPM) among people who worked full-time year round was 5.4 percent (nearly 5.5 million people), but was 19.6 percent (nearly 8.9 million people) among those who worked less than that amount of time. An estimated 106 million people, or about one-third of the U.S. population, received benefits from at least one of eight selected federal low-income programs at some point during 2012 (see fig. 9). This is based on our analyses of the most recent TRIM3 microsimulation data for these programs: ACTC, EITC, housing assistance, LIHEAP, SNAP, SSI, TANF cash assistance, and WIC. The results provide a different perspective from national survey data, which often underreport the number of low-income program recipients. The TRIM3 data also allow for unduplicated counts of the total number of people receiving aid from more than one program, which is often not possible when using data from individual programs. Some programs' administrative data (e.g., federal agency data we reviewed for SNAP, TANF, and WIC) include the number of people served each month, but do not track an unduplicated count of recipients for the year. The data for low-income programs also count recipients in different ways (e.g., individuals, households, families, tax filing units), making it difficult to compare receipt of assistance consistently across multiple programs. For many of these reasons, the results of our analysis in this section will differ from program information based on administrative data. Almost two-thirds of the recipients of the eight programs combined were in households with children, including married, cohabiting, and single parent households (see table 6).
These households also received 58 percent of the nearly $241 billion in benefits provided by these eight programs combined in 2012, according to our TRIM3 analysis. An estimated 81 percent of recipients lived in households with at least some annual earnings and received an estimated two-thirds of the combined benefit spending. In total, an estimated 25.4 million people moved above the SPM poverty threshold due to combined benefits from the eight programs. An additional 13.4 million who did not cross over the SPM threshold moved out of the lowest income group (below 50 percent of poverty). Moreover, 10 million who were already above the SPM threshold moved to a higher income group (e.g., moved from 100 to 149 percent of poverty to 150 to 199 percent of poverty). To obtain these estimates, we subtracted the value of these benefits from beneficiaries' incomes and recalculated their incomes as a percent of the SPM threshold. Overall, fewer people were in the lowest income groups (those below poverty) when the value of benefits from the eight programs was included (see fig. 10). Program effects varied by household type as well (see fig. 11). The largest numbers of people avoiding poverty based on the SPM because of selected federal benefits were in households with married parents (9.2 million) or single parents (7.9 million). Over one-third of program recipients living in single parent households were kept out of poverty by the combined benefits of the eight selected programs. Each of the eight programs lifted a number of recipients above the SPM threshold, ranging from 340,000 (LIHEAP) to nearly 8.7 million (SNAP) (see fig. 12). Variation in programs' effects on reducing poverty was due to a combination of factors, including the number of recipients in each program and the value of each benefit. For instance, SNAP and EITC served the most people in 2012 and, accordingly, had large effects on moving people out of poverty among our eight programs.
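The recalculation just described (subtracting benefit values from income and recomputing income as a percent of the SPM threshold) can be sketched as follows. This is an illustrative simplification with hypothetical dollar amounts, not the actual TRIM3 computation.

```python
# Illustrative sketch of the recalculation described above: compare a
# household's income as a percent of its SPM threshold with and without
# the value of benefits. All dollar amounts here are hypothetical.

def percent_of_threshold(income, threshold):
    """Household income expressed as a percent of its SPM threshold."""
    return 100 * income / threshold

def moved_above_threshold(cash_income, benefits, threshold):
    """True if combined benefits lift the household above its SPM threshold."""
    without_benefits = percent_of_threshold(cash_income, threshold)
    with_benefits = percent_of_threshold(cash_income + benefits, threshold)
    return without_benefits < 100 <= with_benefits

# Hypothetical household: $18,000 cash income, $8,000 in combined benefits,
# $24,000 SPM threshold: 75 percent of threshold without benefits,
# about 108 percent with them.
print(moved_above_threshold(18_000, 8_000, 24_000))   # True
```

A household counted as "moving above the threshold" in this sketch is one whose income is below 100 percent of its threshold without the benefits but at or above 100 percent with them.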
Housing assistance, on the other hand, served many fewer people but provided a higher dollar amount of benefits than most other programs, moving almost 37 percent of all housing recipients that year out of poverty. Our estimates are consistent with Census analyses using the SPM to measure the effects of program benefits on poverty. Census found that refundable tax credits (EITC and ACTC combined, along with other refundable federal and state tax credits) and SNAP had the largest effect on reducing poverty for the population in 2012. Of the different age groups (children, adults, and the elderly), Census found that children benefited the most from low-income programs, particularly from the refundable tax credits. Census also looked at the effects of several social insurance programs and reported that Social Security had, by far, the biggest effect on reducing poverty for the population, more than any low-income program, especially among the elderly. While each of the programs' benefits moved some individuals above the SPM threshold, the income status of each program's recipients still varied from 50 percent below poverty to more than twice the SPM poverty threshold after taking into account the program's benefits and other benefits received (see fig. 13). Figure 13 shows that, for example, 62 percent of individuals who were eligible for and received SNAP benefits for at least one month in 2012 had annual incomes above the SPM threshold, after including the value of SNAP and other benefits received, which may have included other low-income benefits such as TANF or the EITC as well as other benefits such as Social Security or unemployment insurance. Some variation among the programs in terms of recipients' incomes as a percentage of the SPM reflects differences in program targeting and design.
For instance, the tax credits ACTC and EITC had larger percentages of recipients above the SPM threshold (82 percent and 75 percent, respectively), as would be expected since these credits are designed to phase out gradually over higher levels of earned income. Under the EITC, for example, certain married families with two qualifying children may have had nearly $50,000 in earned income in 2013 before they became completely ineligible for the credit. A majority of ACTC and EITC recipients also lived in households with two adults (married or cohabiting) and children, as we will discuss later. In contrast, TANF cash assistance had the smallest percentage of people above the SPM poverty threshold among our selected programs (57 percent). Generally, TANF recipients must have very low incomes to qualify for benefits. In addition, the amount of aid from TANF programs tends to be relatively small, although TANF recipients often receive assistance from other programs, particularly SNAP. Most TANF recipients lived in single parent households and did not have income from another individual for support. The receipt of benefits from means-tested low-income programs (i.e., those with financial eligibility tests for individuals or families) may affect an individual's willingness to seek and accept employment in two key ways. One is the decision on whether or not to work, called the labor force participation decision. The second, which applies to those who have decided to work, is on the number of hours to work. For many people, the decision on whether to work depends on the incomes available under each alternative, including income or assistance from means-tested benefits. The decision of how many hours to work may be influenced by the extent to which an increase in earnings (through more hours worked or a higher wage) is offset by higher taxes and reduced benefits.
Whether moving from not working to working or from fewer to more hours worked, the combined effect of taxes and the reduction in means-tested benefits as earnings increase is called the worker's effective marginal tax rate, referred to as the marginal tax rate in this report. A Hypothetical Example of How Marginal Tax Rates Can Reduce Benefits When Earnings Increase: If a single parent with three children living in Wisconsin in 2000 who was earning $6.25 an hour received a raise to $9.25 an hour, based on 2,000 hours of work a year, her earnings would increase by $6,000. If she received SNAP benefits, those benefits would be reduced by $81 a month due to her earnings increase. If she received housing assistance, this assistance would be reduced by $177 a month. She would also owe an extra $38 a month in payroll taxes and, if she worked full-time for the year, lose $1,848 (or $154 a month) due to reduced EITC benefits. As a result, out of her $500 a month raise, she would keep $50, a nearly 90 percent marginal tax rate on the earnings gain. If her earnings continue to rise, her marginal tax rates will fall greatly, as SNAP and EITC benefits will phase out entirely. With no remaining benefits to reduce, her marginal tax rate will depend solely on income and payroll taxes. For those with earnings, these programs' benefits made work more financially rewarding (in terms of earnings plus benefits) in comparison to the benefits available to those who do not work. The EITC, in particular, has increased incentives for people with children to join the labor force, based on our review of studies. Many factors are taken into consideration in calculating SNAP benefits, including earnings, assets, household size, age, and others. However, the basic benefit reduction rate is 24 percent, based on a reduction in the benefit equal to 30 percent of net income, mitigated by a 20 percent earned income deduction. One study calculated the marginal tax rates that a single parent would face with increased earnings, ranging from 27 percent to over 100 percent, depending on the state of residence.
(The average marginal tax rate among states was about 50 percent.) That is, if the parent lived in Nevada, he or she would lose 27 cents of each dollar in increased earnings; if he or she lived in Connecticut, the parent would actually have fewer total resources for each dollar in increased earnings due to the loss of benefits. The study's authors noted that marginal tax rates vary greatly among states due to, among other things, differences in state tax systems and state rules for TANF and SNAP. (See also CBO, Effective Marginal Tax Rates for Low- and Moderate-Income Workers, Publication No. 4149 (Washington, D.C.: November 2012).) CBO had similar findings looking at a different set of programs using 2010 Census CPS data. Of households that received assistance from Medicaid or the Children's Health Insurance Program (CHIP), SNAP, TANF, or housing assistance, the majority participated in one program, most commonly Medicaid/CHIP or SNAP, and few participated in more than two programs. While studies we reviewed showed that some benefit recipients may face relatively high marginal tax rates, available research suggests these rates do not strongly affect people's actual behavior regarding how many hours they decide to work. Ideally, an analysis should consider all the programs in which an individual participates. A 2011 review of research found that the aggregate behavioral impact on people's incentive to work from multiple means-tested programs was very small. A more recent review of studies in 2015 concluded that "it is very hard to find large labor supply reductions for any major transfer program" (see also Eissa and Hoynes, "Behavioral Responses to Taxes," and T. Hungerford and R. Thiess, The Earned Income Tax Credit). Labor supply effects were generally small in the studies we reviewed, though for some groups the effects may be large.
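The hypothetical Wisconsin example above can be checked arithmetically; the figures below come directly from that example, and the SNAP reduction rate restates the 30 percent reduction and 20 percent earned income deduction described earlier.

```python
# Arithmetic check of the hypothetical Wisconsin example described earlier.
monthly_raise = 6_000 / 12            # $6,000 annual raise spread over 12 months
snap_cut, housing_cut = 81, 177       # monthly SNAP and housing assistance reductions
payroll_tax, eitc_cut = 38, 154       # extra payroll taxes and reduced EITC per month

offsets = snap_cut + housing_cut + payroll_tax + eitc_cut
kept = monthly_raise - offsets
marginal_tax_rate = offsets / monthly_raise

print(f"kept ${kept:.0f} of each $500; marginal tax rate {marginal_tax_rate:.0%}")
# kept $50 of each $500; marginal tax rate 90%

# SNAP's basic benefit reduction rate: benefits fall by 30 percent of net
# income, mitigated by a 20 percent earned income deduction.
snap_reduction_rate = 0.30 * (1 - 0.20)
print(f"SNAP reduction rate: {snap_reduction_rate:.0%}")   # 24%
```

The 90 percent figure is simply the sum of the benefit reductions and added taxes ($450 a month) divided by the $500 monthly earnings gain.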
Changes in marginal tax rates associated with reduction in TANF benefits based on increased earnings were found to have little effect on either labor force participation or hours of work, according to studies we reviewed. On the other hand, receipt of housing assistance may create work disincentives, although available research is limited. One study looking at the Section 8 Housing Choice Vouchers program found that, based on a sample of program participants and nonparticipants in Chicago, the program had a negative effect on labor force participation and earnings (possibly due to reduction in hours worked for some recipients), but a positive effect on supporting incomes. In other words, people may work more without a housing benefit, but their overall incomes are higher with the benefit. Another study of recipients in Wisconsin found that housing vouchers had little effect on labor force participation and a negative effect on earnings, which faded over time. Medicaid could also create work disincentives, since a modest pay increase could result in a total loss of benefits for those near the program's income threshold. However, other programs or policies could offset potential work disincentives. For example, an increase in earnings in a new job may also be accompanied by employer-provided group health insurance, and children may lose eligibility for Medicaid but gain eligibility under CHIP. As noted, we did not review the literature on work incentives related to health insurance programs. In addition, although we did not review the literature on the effect of child care subsidies on work incentives for this report, we have looked at this in prior work. Specifically, in a 2010 report, we found that research has linked access to child care subsidies to increases in the likelihood of low-income mothers' employment.
In that report, experts we consulted suggested that when child care prices increase (such as when a parent loses a child care subsidy), mothers may change their work hours or shift to lower-cost providers, for example, rather than exiting the labor force altogether, although other research has shown that child care problems contribute to job loss and returns to welfare for low-wage workers. While high marginal tax rates occur, people may not respond to them for various reasons. For instance, for a worker to change behavior, he or she must be aware of the marginal tax rates and the income levels at which they apply. However, these rates can be difficult for the lay person to understand and calculate, especially when multiple programs and tax provisions are involved. As discussed, high marginal tax rates are the result of interactions among programs and the tax system and vary greatly depending on the specific benefit or combination of benefits received, individual situation, and state of residence. These interactions are not transparent. Studies that have focused on interviews with low-income households indicate they often do not understand marginal tax rates associated with increased earnings or how these may affect their benefits. This may be particularly relevant with the EITC because of a long time lag between a change in work and the receipt of the tax refund at tax time. Additionally, a worker is not necessarily able to control the number of hours he or she works in response to different marginal tax rates, given constraints in work schedules or other factors such as child care. Research indicates that low-wage workers have less discretion and control over their work schedules than higher-wage workers, and that this is particularly true for those working part-time or in temporary positions. Further, reacting to high marginal tax rates that apply over narrow income ranges would not necessarily make sense for a worker over the long term.
If a worker expects to have continual pay increases over his or her lifetime, he or she would not necessarily decide to reduce his or her work hours because of high marginal tax rates that would attenuate as earnings grew beyond the effective income range of those rates. Behavioral effects can be difficult to isolate from other factors, and not all effects are observable. For example, not all labor supply behavior can be found in data. A worker who knowingly faces a high marginal tax rate for additional hours may seek earnings in the underground economy. Additionally, program provisions are not the only factors that may affect labor supply. The overall state of the labor market is central, in terms of the availability of employment opportunities and pay. Research also shows that inherent policy trade-offs exist for means-tested benefit programs attempting to meet multiple objectives. Work incentives and disincentives in means-tested benefit programs are intrinsically linked. When benefits are available to those who work or when benefits are tied to work (such as with the EITC), working becomes more attractive, as people's total incomes in benefits and earnings are higher than they would be without work. However, benefits are reduced and ultimately phased out as earnings rise, creating potential work disincentives. To lessen the role of work disincentives and avoid abrupt benefit cutoffs (known as cliff effects), benefits can be phased out more slowly (i.e., resulting in lower marginal tax rates). Yet a slower phase-out of benefits means increased program costs. Program costs could be contained if benefits are reduced for those with the lowest income; however, another common policy goal is to maintain adequate assistance for the least fortunate. In short, research shows that to limit program costs, it is necessary to either reduce benefits (by reducing the number of people eligible or the benefit amount) or phase benefits out more rapidly.
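The phase-out trade-off described above can be illustrated with a toy benefit schedule; all parameters here (maximum benefit, phase-out rates, earnings disregard) are hypothetical and do not represent any actual program's rules.

```python
# Illustrative sketch of the phase-out trade-off: a slower phase-out lowers
# the marginal tax rate on earnings but pays benefits further up the income
# scale, raising program cost. All parameters are hypothetical.

def benefit(earnings, max_benefit=6_000, phase_out_rate=0.30, disregard=10_000):
    """Benefit that phases out as earnings rise above an earnings disregard."""
    return max(0.0, max_benefit - phase_out_rate * max(0.0, earnings - disregard))

# At $20,000 in earnings, a faster phase-out leaves a smaller benefit:
print(benefit(20_000, phase_out_rate=0.50))   # 1000.0
print(benefit(20_000, phase_out_rate=0.30))   # 3000.0

# At $25,000, the fast phase-out has ended, but the slow one is still paying:
print(benefit(25_000, phase_out_rate=0.50))   # 0.0 -- already phased out
print(benefit(25_000, phase_out_rate=0.30))   # 1500.0 -- still paying benefits
```

In this sketch the phase-out rate is exactly the benefit's contribution to the marginal tax rate: each extra dollar earned above the disregard reduces the benefit by 50 or 30 cents, respectively.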
These trade-offs pertain to assistance provided by any level of government: federal, state, or local. We provided a full draft of this report for comment to the Departments of Agriculture, Health and Human Services, Housing and Urban Development, and the Treasury, and to the Social Security Administration. We provided relevant sections of the draft report to eight other federal agencies that administer programs included in this report, as well as to Census, for technical comments. Most agencies to which we sent the full draft or excerpts provided technical comments, which we incorporated as appropriate. USDA, HHS, Treasury, and SSA did not have additional comments; HUD provided written comments, reproduced in appendix VI. In its comments, HUD discussed the usefulness of the SPM in assessing economic conditions and people's level of need, but stated concerns that information in this report may be interpreted erroneously, particularly because the SPM is a relatively new concept. Specifically, HUD noted that readers may interpret information we presented on program recipients' incomes as a percentage of the SPM as evidence that programs are not targeting people in need, when, as we describe in the report, these income levels include the value of certain federal, state, and local assistance that a household receives, as well as account for various household expenses. As we explain in the report, the SPM provides information on a household's resources (including assistance from certain government programs) to meet basic needs, and is not a measure used to determine program eligibility. HUD also noted differences in terms of recipient household types between our estimates of housing assistance using TRIM3 and HUD's estimates using HUD program data, because TRIM3 estimates can include recipients of housing assistance from other federal, state, or local agencies.
Based on HUD’s comments, we took steps to clarify the information we present on our estimates of program recipients’ incomes as a percentage of the SPM and on our estimates of recipients of housing assistance using TRIM3, and addressed other comments from HUD, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, Secretaries of Agriculture, Health and Human Services, Housing and Urban Development, and Treasury; the Commissioner of the Social Security Administration; other federal agencies that administer programs included in this report, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. The objectives of this report were to examine: (1) what federal programs (including tax expenditures) are targeted to low-income individuals; (2) what are the number and selected household characteristics of people in poverty based on the Supplemental Poverty Measure (SPM); (3) what are the incomes (as a percent of the SPM) and household characteristics of people receiving benefits from selected programs; and (4) what is known about how selected low-income programs affect work incentives? To address the objectives of this request, we used a variety of methods. 
Specifically, we: reviewed relevant federal laws, regulations, and agency guidance, and interviewed agency officials; collected information on 82 federal low-income programs by surveying 13 federal agencies that administer these programs; analyzed 2013 data from the Census Bureau's (Census) Current Population Survey (CPS) to describe low-income households; analyzed 2012 data, the most recent available, from the Transfer Income Model, version 3 (TRIM3) microsimulation model maintained by the Urban Institute to describe recipients of eight large federal low-income programs; and conducted an economic literature review on work incentives and disincentives related to assistance from selected federal low-income programs. We conducted our work between April 2014 and July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To address our first question, we identified federal programs, including tax expenditures, that (1) used a measure of low or limited income to determine eligibility, priority for assistance, or to target resources, or (2) have target populations that are disproportionately poor or have program purposes that presume that participants will be low-income. This included programs that targeted individuals, families, and communities. Due to their small size, we excluded programs with less than $100 million in federal obligations or reduced tax revenue in fiscal year 2013. These criteria were developed by the Congressional Research Service (CRS), which has maintained a list of low-income programs for many years.
To identify programs in its current list, CRS officials told us that they took various steps, including searching the Catalog of Federal Domestic Assistance for relevant programs. We augmented CRS's list by asking relevant agencies to suggest program additions or deletions consistent with the criteria, consulting with CRS and program area experts within GAO, and adding relevant tax expenditures. We consulted with internal subject matter experts and the Department of the Treasury (Treasury) to identify relevant tax expenditures. We included tax expenditures that base an individual's eligibility on a measure of low or limited income, or that indirectly benefit low-income individuals (for example, the Low-Income Housing Tax Credit, which allows developers and owners of qualified low-income housing projects to claim a tax credit for construction or rehabilitation costs). We excluded tax expenditures that indirectly benefit low-income individuals based on income measures for a geographic area. We also excluded tax expenditures for which the average reduction in revenue for the past 5 years was less than $100 million. To collect program information, we sent a questionnaire (or survey) on each program to the federal agencies responsible for administering it; the survey included questions on federal obligations, numbers served, the program purpose, type of benefit or service, eligibility requirements, and other topics. To ensure that questions were understandable and that we collected the desired information, we pretested the survey with two federal agencies and asked a third agency to review it. We revised it based on agencies' feedback. We sent the survey to agencies in September 2014 and, ultimately, obtained a 100 percent response rate. We did not independently verify the legal accuracy of the information provided by the agencies, such as program purposes, eligibility requirements, or benefits or services provided.
Because this was not a sample survey, there are no sampling errors. To minimize other types of errors, commonly referred to as nonsampling errors, and to enhance data quality, we employed recognized survey design practices in the development of the questionnaire and in the collection, processing, and analysis of the survey data. For instance, as previously mentioned, we pretested the questionnaire with federal officials to minimize errors arising from differences in how questions might be interpreted and to improve the likelihood that variation in responses across agencies is attributable to substantive differences between programs rather than aspects of the data collection process. We further reviewed the survey to ensure the ordering of survey sections was appropriate and that the questions within each section were clearly stated and easy to comprehend. To reduce nonresponse, another source of nonsampling error, we sent out e-mail reminder messages to encourage officials to complete the survey. We reviewed the data for missing or ambiguous responses and followed up with agency officials when necessary to clarify their responses. In some cases, we also checked other sources, such as the Office of Management and Budget's Appendix, Budget of the U.S. Government, Fiscal Year 2015, to confirm information was generally consistent and reliable. On the basis of our application of recognized survey design practices and follow-up procedures, we determined that the data were of sufficient quality for our purposes. To answer our second question, we analyzed data from the Census' Current Population Survey (CPS) for 2013 (calendar year), the most recent year available. Specifically, we used the public use and replicate weight files from the March 2014 CPS Annual Social and Economic Supplement, which covers 2013, to obtain demographic information about respondents and their households and calculate standard errors of our estimates.
We merged this information with the Census’ SPM Research Data file for 2013, which contains microdata derived from the CPS that allows users to calculate SPM rates. Because the CPS uses a household-based data collection, its data do not include individuals living outside of a household residence, such as homeless people or those living in institutional group quarters (e.g., correctional facilities, nursing homes). As many individuals in these groups may be low-income, estimates of the size of the low-income population in this report are likely to be undercounts of the low-income population in the United States. To determine the number of people in poverty according to the SPM, we first calculated each household’s income as a percent of the relevant SPM poverty threshold. To define a household, we followed the Census definition of an “SPM Resource Unit,” which includes related individuals living together, plus unrelated children who are living with the family (such as foster children) and any cohabitors (i.e., unmarried partners) and their children. An SPM unit could consist of a single individual. Census defines a household’s SPM resources—which we call its income—to include its cash income plus the value of certain noncash benefits minus estimated expenses related to work, child support, taxes, and medical care. Each household’s SPM threshold represents the amount of income it should have available to sufficiently pay for food, housing, clothing, and utilities, plus 20 percent more for miscellaneous necessary expenses. The Bureau of Labor Statistics derives SPM thresholds from actual expenditures on these items averaged over the previous five years. Thresholds are set at the amount that approximately two-thirds of households spent or exceeded and vary by household size, homeownership, and geographic location. 
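As a rough sketch of the SPM determination just described (resources equal cash income plus certain noncash benefits minus estimated expenses, compared against the unit's threshold), with hypothetical dollar amounts:

```python
# Minimal sketch of the SPM poverty determination described above.
# The dollar figures below are hypothetical, for illustration only.

def spm_resources(cash_income, noncash_benefits, expenses):
    """SPM resources: cash income plus the value of certain noncash benefits,
    minus estimated work, child support, tax, and medical expenses."""
    return cash_income + noncash_benefits - expenses

def in_spm_poverty(resources, threshold):
    """An SPM unit is in poverty if its resources fall below its threshold."""
    return resources < threshold

resources = spm_resources(cash_income=20_000, noncash_benefits=5_000, expenses=4_000)
print(resources)                           # 21000
print(in_spm_poverty(resources, 24_000))   # True -- below the threshold
```

In practice the threshold itself varies by household size, homeownership, and geographic location, as described above; the sketch simply takes it as a given input.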
To describe the number of people with household incomes above and below the SPM poverty threshold, we categorized individuals into five income groups based on their household's income as a percent of its SPM threshold: household resources less than 50 percent of its SPM threshold; household resources from 50 percent to less than 100 percent of its SPM threshold; household resources from 100 percent to less than 150 percent of its SPM threshold; household resources from 150 percent to less than 200 percent of its SPM threshold; and household resources 200 percent of its SPM threshold or greater. The first two income groups are considered to be in poverty according to the SPM, and the latter three groups are considered to be above the poverty line. We calculated each individual's income group according to the official poverty measure in a similar fashion, except that we used their family income rather than their SPM unit income. Census' official poverty statistics use the family (defined as related individuals living together) as the unit of measurement and do not include children under the age of 15 who are living with nonrelatives, such as foster children. We also followed Census procedures to define family income to include its cash income only, and we used official poverty thresholds, which vary by size of family and age of family members, but not by geographic location or homeownership. For this analysis, we categorized households into six mutually exclusive types, as follows: Headed by elderly persons: Households (with or without children) headed by a person who is 65 or over, regardless of whether he or she has a disability. The head of household may live alone, with a spouse, or with a cohabiting partner. Headed by persons with disabilities: Households (with or without children) headed by a person under 65 with a disability. The head of household may live alone, with a spouse, or with a cohabiting partner.
We used a Census Bureau definition of disability, which includes any serious difficulty hearing, seeing, concentrating/remembering/making decisions, walking/climbing stairs, dressing/bathing, or doing errands alone. Without children: Households without children headed by a person under 65 without a disability. The head of household may live alone, with a spouse, or with a cohabiting partner. Married with children: Households with at least one child headed by a married person under 65 who does not have a disability. Cohabiting with children: Households with at least one child headed by an unmarried person under 65 who has a cohabiting partner and does not have a disability. Single parent: Households with at least one child headed by an unmarried person under 65 who does not have a disability or a cohabiting partner. For each of the datasets we used in this analysis (CPS, its Annual Social and Economic Supplement, and the SPM Research file), we conducted a data reliability assessment of selected variables including those used in our analysis. We reviewed technical documentation and related publications and websites with information about the data and spoke with Census officials knowledgeable about these datasets to review our plans for analyses, as well as to resolve any questions about the data and any known limitations. We also conducted electronic testing, as applicable, to check for logical consistency, missing data, and consistency with data reported in technical documentation. We determined that the variables that we used from the data we reviewed were reliable for the purposes of this report. Throughout this report, when we present estimates from survey data, we also present the applicable margins of error (i.e., the maximum half-width of the 95 percent confidence interval around the estimate). 
In some cases, the confidence intervals around our estimates are asymmetrical; however, we present the maximum half-width for simplicity and for a consistent and conservative representation of the sampling error associated with our estimates. To address our third question, we used data for calendar year 2012 on recipients of selected programs from the Transfer Income Model, version 3—a microsimulation model known as TRIM3. TRIM3 is developed and maintained by staff at the Urban Institute with funding primarily from the Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation. The TRIM3 model simulates major governmental tax, transfer, and health programs using data from the CPS, which contains detailed information on the demographic characteristics and economic circumstances of U.S. households, including their benefits from many federal programs. However, CPS data substantially underreport the receipt of these benefits. For example, Urban Institute staff found that CPS data captured about 61 percent of Temporary Assistance for Needy Families (TANF) benefits received in 2012 and about 57 percent of Supplemental Nutrition Assistance Program (SNAP) benefits, when comparing CPS data with program administrative data (data that agencies collect to administer the programs). TRIM3 corrects for this undercounting by creating new variables for each survey respondent indicating their program eligibility, amount of benefits received, and tax liability, following the same steps that a caseworker would use to determine eligibility, as explained below. We studied eight of the low-income programs that TRIM3 modeled for calendar year 2012, the most recent year that data were available.
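TRIM3's actual correction re-simulates program rules case by case rather than simply scaling totals; purely to illustrate the scale of the undercount described above, the sketch below applies the cited 2012 reporting rates (61 percent for TANF, 57 percent for SNAP) to a hypothetical administrative total.

```python
# Illustrative arithmetic only: TRIM3 does not scale aggregates; it
# re-simulates eligibility and benefits for each CPS respondent.
# Reporting rates are those cited for 2012; dollar totals are hypothetical.
reporting_rate = {"TANF": 0.61, "SNAP": 0.57}

def survey_shortfall(program, administrative_total):
    """Benefits missing from the survey if it captures only the cited
    share of administratively recorded benefits (in billions)."""
    captured = administrative_total * reporting_rate[program]
    return administrative_total - captured

# Hypothetical: $10 billion in administrative SNAP benefits would appear
# as only about $5.7 billion in the survey, a roughly $4.3 billion gap.
print(round(survey_shortfall("SNAP", 10.0), 2))  # 4.3
```

The gap this arithmetic exposes is why analyses of program receipt based on uncorrected CPS data understate both caseloads and benefit dollars.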
In addition to being included in the TRIM3 model, selected programs were generally large and covered a range of basic needs. The table below describes the programs we selected, along with the program unit that TRIM3 used to calculate benefits and caveats about interpreting the data. To address our fourth question, we conducted an economic literature review on whether receipt of assistance from selected programs, including the Earned Income Tax Credit (EITC), SNAP, TANF, and the Section 8 Housing Choice Voucher program, affects recipients' incentive to work. We conducted a literature search of various databases for peer-reviewed journal articles and other publications to identify relevant studies that were published in recent years (2009 through 2014) and also reviewed some studies that were published earlier. We also asked agency officials about relevant studies and reviewed policy and research organization websites. Additionally, we reviewed citations of other relevant work discussed in studies. In describing findings from the literature, we included studies that we determined to be methodologically sound. Based on our review of studies, we identified reasonable conclusions about likely work incentives related to selected low-income programs. We did not do an exhaustive review of the literature on this topic. Refundable tax credit. To offset the burden of taxes, including Social Security taxes; provide an incentive to work; and provide income support to low-income families. Tax credit to reduce the amount of income taxes owed; an eligible worker may receive the credit regardless of whether taxes are owed (i.e., the credit is refundable). To allow exclusion of public assistance benefits from taxable income. Cash assistance. To provide a minimum income for aged, blind or disabled individuals who have very limited income and assets. Cash assistance. The basic federal SSI benefit is the same for all beneficiaries nationwide (reduced by any countable income). States may supplement the federal benefit.
To accomplish one or more of the following: (1) provide assistance to needy families so that children may be cared for in their own homes or in the homes of relatives; (2) end the dependence of needy parents on government benefits by promoting job preparation, work, and marriage; (3) prevent and reduce the incidence of out-of-wedlock pregnancies and establish annual numerical goals for preventing and reducing the incidence of these pregnancies; and (4) encourage the formation and maintenance of two-parent families. Noncash services, including child care, work activities, child welfare services, and various social services directed toward the statutory goals of family formation and reduced nonmarital pregnancies. Cash assistance benefit levels are defined by the individual states. To enable nonresidential day care institutions to integrate a nutritious food service with organized care services for enrolled children and adults. Breakfasts, lunches, suppers, and snacks that meet minimum federal nutrition standards. To improve the health of low-income elderly persons at least 60 years of age by supplementing their diets with nutritious Department of Agriculture (USDA) Foods, which are distributed through public and nonprofit private local agencies such as food banks and community action organizations. Food packages and nutrition education. To provide USDA foods to low-income households living on or near Indian reservations. Income-eligible households receive a supplemental monthly food package and nutrition education. To provide free fresh fruits and vegetables to elementary school children. The goal is to create healthier school environments by providing healthier food choices. Selected schools receive reimbursement for the cost of making free fresh fruits and vegetables available to students during the school day.
To safeguard the health and well-being of the nation's children and to encourage the domestic consumption of nutritious agricultural commodities and other food. Lunches that meet minimum federal nutrition standards and are served free or at reduced price by participating public and private elementary and secondary schools and residential child care institutions. To improve diets of needy persons living in Puerto Rico. Nutrition assistance benefits. Benefits are provided through electronic benefit transfers, and at least 75% must be used for food purchases. To reduce hunger and food insecurity, promote socialization, and promote the health and well-being of older individuals and delay adverse health conditions through access to nutrition and other disease prevention and health promotion services. Meals served in congregate settings, home-delivered meals, and related nutrition services (nutrition screening, education, assessment, and counseling). To promote learning readiness and healthy eating behaviors through provision of nutritious breakfasts. Breakfasts that meet minimum federal nutrition standards and are served free or at reduced price by participating public and private elementary and secondary schools and residential child care institutions. To provide supplemental food and nutrition education to eligible women and children to serve as an adjunct to good health care during critical times of development, to prevent the occurrence of health problems, including drug abuse, and improve the health status of beneficiaries. Food assistance (provided through cash value vouchers or electronic benefit transfer card for the purchase of specifically prescribed food packages), nutrition risk screening, and related services (e.g., nutrition education and breastfeeding support, medical care referral). To help children in low-income areas get necessary nutrition during the summer months when they are out of school.
Meals and snacks. To alleviate hunger and malnutrition and permit low-income households to obtain a more nutritious diet by increasing their food purchasing power. Benefits are provided through an electronic benefit transfer card to purchase food from authorized retailers. Allotments are determined on the basis of a low-cost model diet plan. To supplement the diets of low-income Americans, including elderly people, by providing them with emergency food and nutrition assistance at no cost. Food commodities that are distributed to local feeding programs and the administrative costs necessary to store and transport the commodities. To provide low-income, uninsured, and underserved women access to timely breast and cervical cancer screening and diagnostic services. Clinical breast examinations, mammograms, Pap tests, pelvic examinations, diagnostic testing, and referrals to treatment. No fees for services may be charged for women with incomes below 100% of federal poverty guidelines. To provide comprehensive, culturally competent, quality primary health care services to medically underserved communities and vulnerable populations. Primary and additional health care services defined in statute, delivered by community health centers, migrant health centers, health centers for the homeless, and health centers for residents of public housing. To assist individuals to determine freely the number and spacing of their children through the provision of education, counseling, and medical services. A broad range of family planning methods and services. Family planning services include clinical family planning and related preventive health services; information, education and counseling related to family planning; and referral services. To elevate the health status of the Indian population to a level at parity with the general U.S. population.
Hospital, medical, and dental care, behavioral health, environmental health and sanitation services as well as outpatient services and the services of mobile clinics and public health nurses, and preventive care, including immunizations and health examinations of special groups, such as school children. To improve the health of all mothers and children consistent with applicable health status goals and national health objectives established by the Secretary of Health and Human Services (HHS). Preventive and primary health care services (excluding inpatient services with some exceptions) for women, infants, and children, including children with special health care needs. To provide medical assistance to qualifying individuals, and to provide rehabilitation and other services to help such families and individuals achieve independence and self-care. Federal law provides two primary medical benefit packages for state Medicaid programs: traditional benefits and alternative benefit plans (ABPs). To provide necessary hospital care and medical services to eligible veterans. Standardized medical benefits package including preventive services; primary care, specialty care, prescription drugs, comprehensive rehabilitative services, mental health services; and emergency care in VA facilities and in non-VA facilities by contract or as authorized by 38 U.S.C. §§ 1728 or 1725. To address the unmet care and treatment needs of persons living with HIV/AIDS who are uninsured or underinsured, and therefore are unable to pay for HIV/AIDS health care and vital health-related supportive services. Benefits include a wide range of medical and supportive services to help persons living with HIV/AIDS who are uninsured or underinsured. To provide health coverage to uninsured, low-income children in an effective and efficient manner that is coordinated with other sources of health benefits coverage for children. 
Benefits vary by state, but all provide health coverage to uninsured, low-income children. To provide for the effective resettlement of refugees and to assist them to achieve economic self-sufficiency as quickly as possible. Cash payments to eligible individuals that are at least equal to the payment rate to a family of the same size under the state's Temporary Assistance for Needy Families (TANF) program; and medical benefits, through payments to doctors, hospitals and pharmacists. Those eligible for Supplemental Security Income (SSI) may receive refugee cash assistance while their SSI applications are pending. To provide low-income seniors and people with disabilities with comprehensive prescription drug benefits. Prescription drug coverage with reduced premiums, copayments, and other out-of-pocket expenses. To transform neighborhoods of poverty into viable mixed-income neighborhoods with access to economic activities by revitalizing severely distressed public and assisted housing and investing and leveraging investments in well-functioning services, effective schools, and education programs, public assets, public transportation, and improved access to jobs. Funds to rehabilitate or replace distressed public and assisted housing; provide supportive services for residents, such as those focused on self-sufficiency, health, safety, and education; and support community improvements, such as environmental, retail, or transit improvements. To develop viable urban communities by providing decent housing and a suitable living environment and expanding economic opportunities, principally for persons of low to moderate income.
Assistance with the acquisition of real property, relocation and demolition, rehabilitation of residential and nonresidential structures, construction of public facilities and improvements, public services within certain limits, activities related to energy conservation and renewable energy resources, and assistance to nonprofit entities and to profit-motivated businesses to carry out economic development and job creation/retention activities. To increase the number of families served with decent, safe, sanitary and affordable housing and expand the long-term supply of affordable housing; and to strengthen the ability of states and local governments to provide for housing needs. Assistance with real estate development and construction activities to increase the supply of affordable housing. To promote the goal of ending homelessness; provide funding for nonprofits, states, and local governments to quickly re-house the homeless; promote use of mainstream programs and optimize self-sufficiency among those experiencing homelessness. Transitional housing for homeless individuals and families, permanent housing for disabled homeless individuals, and supportive services. Renovation, rehabilitation, or conversion of buildings into homeless shelters, services such as employment counseling, health care and education, and assistance with rent or utility payments to prevent homelessness. To devise long-term comprehensive strategies for meeting the housing needs of persons with AIDS. Housing assistance and related supportive services; real estate and construction assistance; project- or tenant-based rental assistance; short-term rent, mortgage, and utility payments to prevent homelessness; supportive services such as health services, drug and alcohol abuse treatment, day care, nutritional services, and aid in gaining access to other public benefits.
(1) To promote quality, affordable housing on Indian reservations and areas; (2) to ensure access to private mortgage markets for Indian tribes; (3) to coordinate activities to provide housing for Indian tribes; (4) to plan for and integrate infrastructure resources with housing development for tribes; and (5) to promote the development of private capital markets in Indian country. Housing development, assistance to housing developed under the former Indian Housing Program, housing services to eligible individuals and families, crime prevention and safety, and model activities that provide creative approaches to solving affordable housing problems. To allow developers and owners of qualified low-income housing projects to claim a tax credit for construction or rehabilitation costs. Tax credit to reduce amount of taxes owed. To provide cost-effective, decent, safe and affordable rental housing for eligible low-income families, the elderly, and persons with disabilities. Subsidized publicly-owned rental housing units. In general, assisted households pay 30 percent of their income for rent. To allow holders of rental housing bonds to exclude interest from taxable income. Tax exclusion to reduce amount of taxes owed. To reduce the rent paid by low-income households in eligible units financed under certain Rural Housing Service programs. Rental subsidies for low-income tenants provided through payments to eligible property owners; payments make up the difference between the tenant’s rental payment to the owner and the approved rent for the unit. To provide very low-income families with decent, safe and affordable housing in the private market. Tenant-based vouchers that can be used to help recipients afford privately-owned rental housing. In general, recipients pay 30 percent of their “adjusted” income for rent, with the Department of Housing and Urban Development (HUD) providing a subsidy for the difference up to a maximum limit based on local Fair Market Rents. 
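The voucher rent split described above can be sketched in a few lines. This is a simplified illustration only: actual HUD calculations involve payment standards, utility allowances, minimum rents, and an initial-lease-up cap that are not modeled here, and all figures below are hypothetical.

```python
# Simplified sketch of Housing Choice Voucher arithmetic. Real HUD rules
# add utility allowances, minimum rents, and a cap of 40 percent of income
# at initial lease-up, all omitted here. Figures are hypothetical.
def voucher_split(adjusted_monthly_income, gross_rent, payment_standard):
    """Split rent between the tenant (generally 30 percent of adjusted
    income) and the HUD subsidy, which covers the remainder up to a
    local maximum (here labeled the payment standard)."""
    tenant_share = 0.30 * adjusted_monthly_income
    subsidy = max(0.0, min(gross_rent, payment_standard) - tenant_share)
    return tenant_share, subsidy

# Hypothetical household: $1,000 adjusted monthly income, $1,100 rent,
# $1,200 local maximum -> tenant pays $300, subsidy covers $800.
tenant, subsidy = voucher_split(1000.0, 1100.0, 1200.0)
print(tenant, subsidy)  # 300.0 800.0
```

Capping the subsidized rent at the local maximum is what ties the program's cost to local Fair Market Rents, as the description above notes.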
To provide very low-income families with decent, safe and affordable housing in the private market. Rent subsidies tied to units in privately-owned multifamily housing properties. In general, tenants pay 30 percent of their adjusted income for rent, with HUD providing a subsidy for the remaining amount up to the contract rent level. To allow persons with disabilities to live as independently as possible in the community by increasing the supply of rental housing with the availability of supportive services. Financial assistance for development of supportive housing for persons with disabilities, and rent subsidies for eligible tenants. To help expand the supply of affordable housing with supportive services for the elderly. Financial assistance for development of supportive housing for the elderly, and rent subsidies for eligible tenants. To provide basic human amenities, alleviate health hazards, and promote the orderly growth of the nation's rural areas by meeting the need for new and improved rural water and waste disposal facilities. Long-term low-interest loans and grants to support the construction, repair, improvement or expansion of rural water facilities. To assist low-income households, particularly those with the lowest incomes, that pay a high proportion of their income for home energy, primarily in meeting their immediate home energy needs. Assistance to households in paying their heating and cooling costs, crisis intervention, home weatherization, and services (such as counseling) to help reduce energy costs. To increase the energy efficiency of homes owned or occupied by low-income persons to reduce their total residential energy costs, and improve their health and safety. Computerized energy audits and diagnostic equipment to determine the most energy-efficient measures for each individual home; labor and materials necessary to install such energy-efficient measures.
To facilitate the timely placement of children whose special needs (which may include age, membership in a large sibling group or a racial/ethnic minority group, physical or mental disabilities or other circumstances as determined by the state) would otherwise make it difficult to place them with adoptive families. One-time nonrecurring payments to assist with the costs of adopting a special needs child (e.g., adoption fees, court costs, attorney fees) and ongoing monthly payments to adoptive families; administrative and child placement services intended to promote child safety, permanency and well-being. To strengthen and improve the programs and activities carried out under Title V; to improve coordination of services for at-risk communities; to identify and provide comprehensive services for families who reside in at-risk communities. Home visiting services during pregnancy and to parents with young children up to age five. To help current and former foster youth achieve self- sufficiency. Educational assistance, vocational training, employment services, life skills training, mentoring, preventive health activities, counseling, and (subject to certain limitations) room and board. To develop child care programs that best suit the needs of children and parents in each state, to empower working parents to make their own decisions on the child care that best suits their family’s needs, to provide consumer education to help parents make informed decisions, to provide child care to parents trying to achieve independence from public assistance, and to help states implement their child care regulatory standards. Subsidized child care services that may include center-based care, group home care, family care, and care provided in the child’s own home. States also use a portion of funds for quality improvement activities, such as professional development and training, and quality rating and improvement systems. 
To enforce the support obligations owed by noncustodial parents to their children and the spouse (and former spouse) with whom such children are living through locating noncustodial parents, establishing paternity, obtaining child and spousal support, and assuring that assistance in obtaining support will be available to all children who request such assistance. Noncustodial parent location, paternity establishment, establishment of child support orders, review and modification of child support orders, collection of child support payments, distribution of child support payments, and establishment and enforcement of medical support. To reduce poverty, revitalize low-income communities, and empower low-income individuals and families in rural and urban areas to become fully self-sufficient. A wide range of activities may be supported to help low-income individuals and families become self-sufficient; address the needs of youth in low-income communities; and effectively use and coordinate with related programs. To provide shelter, food, and supportive services for homeless individuals nationwide. Mass shelter, mass feeding, food distribution through food pantries and food banks, one-month utility payments to prevent service cutoff, one-month rent/mortgage payments to prevent evictions or help people leaving shelters to establish stable living conditions. To provide temporary out-of-home care for children who cannot safely remain in their own homes, until the children may be safely returned home; placed permanently with adoptive families, in a legal guardianship, or with a fit and willing relative; or placed in another planned permanent living arrangement.
Payments to foster care providers to cover the costs of children's maintenance (e.g., room and board, clothing and supplies, liability insurance, certain travel expenses); and support for administrative and child placement services intended to promote safety and permanency for children and well-being for children and their families. To promote school readiness by enhancing the social and cognitive development of children through the provision of educational, health, nutritional, social and other services to children and their families; and (for Early Head Start) to promote healthy prenatal outcomes, enhance the development of infants and toddlers, and promote healthy family functioning. Comprehensive child development services, including educational, dental, medical, nutritional, and social services to children and their families. Services may be center-based, home-based, or a combination, and may be full- or part-day or full- or part-year. To provide financial assistance for needy American Indians who live on or near reservations; to support tribal programs to reduce substance abuse and alcoholism; to promote stability and security of American Indian tribes and families; and to improve Indian housing for low-income Indians. Assistance in processing welfare applications, foster care assistance services, operation of emergency shelters and similar services; cash payments to meet basic needs; counseling and family assistance services, protective day care, after-school care; and renovations, repairs, or additions to existing homes. To provide equal access to the justice system for individuals who seek redress of grievances and to provide high quality legal assistance to those who would otherwise be unable to afford legal counsel. Legal services in civil cases. To provide multifaceted systems of support services for family caregivers and grandparents or older individuals who are relative caregivers.
Assistance to caregivers in gaining access to services; individual counseling, support groups, and caregiver training in the areas of health, nutrition, and financial literacy; and supplemental services, on a limited basis, to complement the care provided by caregivers. To secure and maintain maximum independence and dignity in a home environment for older individuals capable of self-care with appropriate supportive services, to remove individual and social barriers to economic and personal independence for older individuals, and to provide a continuum of care for older individuals. A large variety of services including health, mental health, education, transportation, housing, legal, abuse prevention, employment, and counseling for older individuals. Social Services Block Grants: To promote economic self-sufficiency; prevent abuse or neglect of children; refer individuals into institutional care only when appropriate. Variety of social services for children, families, the aged, the mentally retarded, the blind, the emotionally disturbed, the physically disabled, and alcoholics and drug addicts. To enable eligible low-income individuals over age 55 to become self-sufficient through placement in community service positions and job training. Part-time temporary community service jobs that pay at least minimum wage, job-related training, and supportive services that are necessary to enable an individual to participate in the program. Foster Grandparent Program: To provide opportunities for older low-income people to have a positive impact on the lives of children in need. Volunteer service (between 15 and 40 hours weekly), with hourly stipend, providing services to children with special or exceptional needs or with conditions or circumstances that limit their academic, social or economic development.
To assist eligible youth who need and can benefit from an intensive program, operated in a group setting in residential and nonresidential centers, to become more responsible, employable, and productive citizens. Education and vocational training, including advanced career training; work experience; recreational activities; physical rehabilitation and development; job placement and counseling; and child care. To provide for the effective resettlement of refugees and to assist them to achieve economic self-sufficiency as quickly as possible. Employability and other services that address participants' barriers to employment, such as social adjustment services, interpretation and translation services, day care for children, and citizenship and naturalization services. Services are designed to enable refugees to obtain jobs within 1 year of becoming enrolled. To assist eligible individuals in finding and qualifying for meaningful employment, and to help employers find the skilled workers they need to compete and succeed in business. Services range from career counseling and job training to supportive services such as transportation and child care. To improve educational and skill competencies of youth and develop connections to employers, mentoring opportunities with adults, training opportunities, supportive services, incentives for recognition and achievement, and leadership opportunities. Strategies to complete secondary school, alternative secondary school services, summer employment, work experience, occupational skill training, leadership development opportunities, supportive services, adult mentoring, follow-up services, and comprehensive guidance and counseling. To increase job opportunities for specified groups of disadvantaged individuals. Reduces the net cost to employers of hiring individuals who belong to specified groups.
To create community learning centers that provide academic enrichment opportunities during non-school hours (i.e., before school, after school, or during summer sessions) to help students meet academic achievement standards, particularly for children who attend high-poverty and low-performing schools. Also offers families of participating students opportunities for literacy and related educational development. Academic enrichment programs including math, science, arts, music, recreational, technology, and entrepreneurial education programs; activities for limited-English-proficient students; promoting parental involvement and family literacy; drug and violence prevention programs; counseling and character education programs. To assist adults to become literate and obtain the knowledge and skills necessary for employment and economic self-sufficiency; to assist adults who are parents to obtain the education and skills necessary to become full partners in the educational development of their children, and that lead to sustainable improvements in their family's economic opportunities; to assist adults in completing a secondary school education and in making the transition to postsecondary education and training; and to assist immigrants and other English language learners in improving their English reading, writing, speaking, and comprehension skills and mathematics skills, and in acquiring an understanding of the American system of government, individual freedom, and the responsibilities of citizenship. Adult education and literacy activities, including adult education, literacy, workplace adult education and literacy activities, family literacy activities, English language acquisition activities, integrated English literacy and civics education, workforce preparation activities, and integrated education and training.
To ensure that all children have a fair, equal and significant opportunity to obtain a high-quality education and reach, at a minimum, proficiency on challenging state academic achievement standards and state academic assessments. Additional academic support and learning opportunities for students in prekindergarten through grade 12 who attend schools with high numbers or high percentages of children from low-income families, to help low-achieving children master challenging curricula and meet state standards in core academic subjects. To promote access to postsecondary education for low-income students. Need-based grants (size of grant is capped by law) to eligible students at participating institutions of higher education. To promote access to postsecondary education for low-income undergraduate students. Grants to help students with the costs of postsecondary education. To motivate and assist students from disadvantaged backgrounds through outreach and support programs designed to help them move through the academic pipeline from middle school to postbaccalaureate programs. Academic instruction; personal, academic and career counseling; tutoring; exposure to cultural events and academic programs; stipends; and grant aid. To assist students in financing the costs of postsecondary education. Federally subsidized part-time employment for students. To assist low-income students in attaining a secondary school diploma or equivalent and preparing for and succeeding in postsecondary education. Special teacher training and early intervention services; e.g., counseling, mentoring, academic support, outreach, and supportive services designed to better promote high school graduation. Also college scholarships and other financial assistance needed for students served to be able to attend an institution of higher education.
Purpose: To assist institutions of higher education that serve high percentages of low-income and minority students in improving their management, fiscal operations, and educational quality, to ensure access and equal educational opportunity for low-income and minority students.
Benefit or service provided: Possible activities are broad and depend on the specific program. They may include, but are not limited to, assistance in planning; administrative management; development of academic programs; equipment and facilities assistance; and staff development and tutoring.

Purpose: To increase student achievement through improving teacher and principal quality and increasing the number of highly qualified teachers, principals, and assistant principals in classrooms and schools.
Benefit or service provided: State and local activities include professional development, support for educator evaluation systems, provision of recruitment and retention bonuses to highly qualified teachers, and other means of improving teacher quality. At the school district level, also hiring highly qualified teachers to reduce class size.

Purpose: To provide comprehensive education programs and services for American Indians and Alaska Natives; to provide quality education opportunities from early childhood through life in accordance with the tribes' needs for educational, cultural, and economic wellbeing, in keeping with the wide diversity of Indian tribes and Alaska Native villages as distinct cultural and governmental entities.
Benefit or service provided: Preschool, elementary, secondary, postsecondary, and adult education at BIE-funded institutions, public schools, and postsecondary institutions; financial assistance for postsecondary education at accredited institutions.
Purpose: To support local educational agencies in their efforts to reform elementary school and secondary school programs that serve Indian students in order to ensure that such programs: (1) are based on challenging state academic content and student academic achievement standards that are used for all students; and (2) are designed to assist Indian students in meeting those standards.
Benefit or service provided: Grant funds supplement the regular school program and support comprehensive programs to meet the culturally related academic needs of Indian children. Funds support such activities as after-school programs, early childhood education, tutoring, and dropout prevention.

Purpose: To improve the content knowledge of teachers and the performance of students in the areas of mathematics and science.
Benefit or service provided: Enhanced professional development of math and science teachers, promotion of strong teaching skills, and summer workshops or institutes.

Purpose: To address the unique needs of rural school districts that frequently lack the personnel and resources needed to compete effectively for federal competitive grants and receive formula grant allocations in amounts too small to be effective in meeting their intended purposes.
Benefit or service provided: A wide range of services to improve rural education through enhanced services for children, teacher training, and academic programs, including for limited-English-proficient children.

Purpose: To help ensure that migratory children are afforded the same educational quality, opportunities, and assistance as other students.
Benefit or service provided: Supplemental education and support services; tutoring; summer and extended-day instructional services; language development services; career education services and counseling; and other services.
Appendix III: Information Provided by Agencies on Federal Obligations (Fiscal Year 2013) and Number Served for Low-Income Programs (Time Periods Vary)

Fiscal year 2013 obligations (in millions); number served; time period for number served:
Average of 57.4 million individuals (including 27.9 million children) per month; total of 72.8 million individuals were enrolled during the year (including 35 million children). Average monthly based on fiscal year 2013.
$57,513; 27.9 million tax returns claimed the EITC (of these, 24.3 million had a credit that exceeded their tax liability). Cumulative total for calendar year 2012.
$56,486; 9.1 million individuals who received at least 1 payment during the year, not including those who only receive a state supplementary payment.
Single point in time (August 2014); average of 3.5 million individuals receiving cash assistance per month (caseload average without state supplemental funds).
Single point in time (March 2013); average number of individuals served at any time during this time period (July 2013 to October 2014).
Average monthly for fiscal year 2013 (preliminary data).
Cumulative total for fiscal year 2013.
$3,255; approximately 6.4 million households. Cumulative total for fiscal year 2013 (preliminary data).
Cumulative total for fiscal year 2013.
Cumulative total for 2012-2013 school year.
$919; 5,100 multifamily rental units developed or rehabbed per year, with 463,000 total from the program inception (1992). Affordability terms are generally for 20 years.
Single point in time for fiscal year 2014 (Sept. 2014).
Cumulative total for fiscal year 2013.
$779; agency does not track annual number (not all for low-income).
Cumulative total for program year 2012 (July 1, 2012-June 30, 2013).
Cumulative total for fiscal year 2013.
Cumulative total for program year (July 1, 2013-June 30, 2014).
Single point in time fiscal year 2013 (March 2013).
New enrollees in fiscal year 2013 plus those who enrolled in previous years and received services at any point in fiscal year 2013.
Cumulative total for 2012 school year (not all for low-income).
Time period for number served (varies based on information provided by agencies): n/a (not all for low-income).
Cumulative total for calendar year 2012.
Average number of approved certifications for fiscal year 2012 and fiscal year 2013.
The Additional Child Tax Credit is the refundable portion of the Child Tax Credit. No federal spending in obligations.
No federal spending in obligations for the tax credit. However, in fiscal year 2013, the Department of Labor provided about $18 million in grants to states to process certification requests for the Work Opportunity Tax Credit, according to the agency.

Based on our analysis of agency responses to our survey, 49 of the 82 federal low-income programs we identified include income or financial eligibility requirements for potential recipients at the individual, household, or related level (see table 8). Thirty-three programs do not assess income eligibility at the individual (or related) level. Instead, these programs allocate resources based on a measure of financial need but offer services more broadly; give priority to those who are low-income; or serve a group that is presumed low-income or that tends to be disproportionately low-income. The table is not meant to be a comprehensive list of program eligibility criteria. For example, for certain programs, agencies reported that states have some flexibility to set specific financial eligibility criteria. Any such state-determined criteria are not shown in this table. The table also does not show any information provided on automatic or categorical eligibility. Additionally, agencies reported that some programs use other criteria, such as age, to determine eligibility in addition to income or financial requirements; these criteria are not included in this table. If an agency reported that a program used more than one type of income eligibility criteria, we counted it only in one category.
We collected descriptive information on the recent efforts of federal agencies to evaluate five selected programs: the Earned Income Tax Credit (EITC), Section 8 Housing Choice Vouchers (Section 8 Vouchers), the Supplemental Nutrition Assistance Program (SNAP), Supplemental Security Income (SSI), and Temporary Assistance for Needy Families (TANF). We selected these programs because they are financially large programs, meet basic needs through different types of assistance, and vary in how benefits are administered. We focused on impact evaluations conducted or sponsored by the respective agencies, published in 2010 or later, that were related to participant outcomes (excluding, for example, those related to program processes, operations, or integrity). In addition to evaluations, we looked at other recent research conducted or sponsored by the respective agencies that provided information on program participants. For each program, we also looked at performance measures, focusing on those related to participant outcomes. In addition, we reviewed agency information available online (e.g., evaluations, research, and annual performance reports) and conducted semi-structured interviews with knowledgeable agency officials. The federal administering agencies are the Department of the Treasury (Treasury) for EITC, the Department of Housing and Urban Development (HUD) for Section 8 Vouchers, the Department of Agriculture (USDA) for SNAP, the Social Security Administration (SSA) for SSI, and the Department of Health and Human Services (HHS) for TANF. Four of the five agencies conducted or sponsored recent evaluations related to participant outcomes for their respective selected programs.
Evaluations focused on a range of subjects, including employment practices and self-sufficiency (TANF, SSI, Section 8 Vouchers), food security and healthy food consumption (SNAP), and family outcomes (Section 8 Vouchers), among others (see table 9; see table 10 at the end of this section for full names of evaluations). Unlike the agencies administering the four spending programs we examined, Treasury does not conduct program evaluations related to program outcomes for the EITC or any other tax expenditure. In our prior work, we have recommended that the Office of Management and Budget (OMB) set up a performance evaluation framework for tax expenditures, which represent a substantial federal commitment. However, Treasury staff are aware of and contribute to the academic research on participant outcomes related to the EITC, such as on work, poverty, and household income. Agencies administering Section 8 Vouchers, SNAP, SSI, and TANF generally did not evaluate their respective programs as a whole, with the exception of USDA's evaluation of SNAP's effect on food security and food spending. Instead, these agencies typically evaluated different practices within the program, often experimenting with new and innovative practices. For example, the SNAP Healthy Incentives Pilot Evaluation was aimed at testing new types of financial incentives designed to make fruits and vegetables more affordable for SNAP participants. Another example is TANF's Pathways to Advance Career Education evaluation, which is currently testing promising strategies for increasing employment and self-sufficiency among low-income families. Many evaluations across the four programs were also conducted to study the effects of the program on particular sub-populations of participants.
For example, SSA's Youth Transition Demonstration tested strategies designed to help youth with disabilities who were receiving SSI to transition to economic self-sufficiency as adults, while USDA had evaluations looking at food security among the elderly and working poor populations. For each of the four programs, the agencies conducted evaluations for a variety of reasons. Some of the programs' evaluations were required by law. For example, HUD was required by law to conduct the Moving to Opportunity for Fair Housing demonstration program evaluation, which presented the long-term impacts of moving people, including Section 8 Voucher recipients, from high-poverty neighborhoods in large inner cities to lower-poverty neighborhoods. Other evaluations we reviewed were determined by the agencies, often in line with a larger evaluation plan or strategy aimed at supporting certain agency goals, according to officials from HHS, USDA, and HUD. For example, USDA's evaluations of education programs to promote healthier eating for low-income children, women, and seniors were based on USDA's goals, according to officials. Officials told us findings from evaluations have helped inform program design and administration at the federal and state level. For example, based on findings from the Ticket to Work and Self-Sufficiency evaluations, SSA officials said the agency changed the program's design to incentivize service providers to serve disability beneficiaries who are more difficult to employ. Agency officials told us they frequently share findings and best practices with state agencies administering the programs to inform their program or policy decisions. For example, USDA officials stated that the SNAP Education and Evaluation studies have helped several states develop their own SNAP education programs.
Agencies disseminated findings to administrators and other interested parties through various channels, including research clearinghouses, journals, conferences, and agency websites. Officials from these four agencies told us that evaluation findings also helped them make financial decisions, such as resource allocation, or provide support for budget requests to Congress. Agencies faced a number of challenges with regard to their evaluation efforts, including financial, methodological, and administrative limitations. Agency officials informed us that large-scale, multi-year evaluations are resource intensive, and limited or short-term funding can make it difficult to perform these evaluations, particularly for program-wide research. Officials from USDA informed us that it is helpful when money is designated by law for specific evaluations, as was the case with the Healthy Incentives Pilot, which was designated funding in the Food, Conservation, and Energy Act of 2008 (2008 Farm Bill). According to officials, methodological challenges can also limit their evaluation efforts. For example, SNAP and SSI benefits generally must be provided to eligible applicants, which makes it difficult to establish a control group. Under TANF, states generally design and administer their own programs, making it difficult to assess the program more broadly. Furthermore, HHS officials informed us that state and local TANF administrators are not required to participate in evaluations. Therefore, it can be difficult to persuade them to participate because of the burden of additional work and costs that evaluations may create for them. We recently reported on how the structure of TANF can present challenges for HHS in conducting evaluations and how this may leave TANF recipients without access to promising approaches for employment.
Agencies administering SNAP, SSI, TANF, and Section 8 Vouchers also sponsored other recent research, apart from impact evaluation studies, that informed their understanding of program participants, including when participants receive benefits from other similar programs. Some research we reviewed provides information on participants, such as their demographic characteristics and economic circumstances. For example, USDA conducted research on the characteristics and circumstances of SNAP participants with zero income by using the Census Bureau's Survey of Income and Program Participation (SIPP) data to conduct cross-sectional and longitudinal analyses that would not have been possible with USDA administrative data alone. Other studies provided information on participants' or potential participants' experiences with the programs, such as need for assistance or reasons for participating, leaving, or returning to the program (Section 8 Vouchers, SNAP, SSI, TANF). Agency research also identified experiences and challenges that participants faced outside of the program, such as crime (Section 8 Vouchers), education (Section 8 Vouchers, SSI), and health issues (SSI). Agencies also conducted cross-program research, which included examining the extent to which program participants received other benefits; for example, HHS's annual Indicators of Welfare Dependence reports analyze statistics indicating and predicting welfare dependence among TANF, SNAP, and SSI recipients. Agencies also work across programs to conduct research regarding large cross-cutting goals, such as interagency research related to ending or preventing homelessness. The four selected direct spending programs also track program performance measures, including measures related to participant outcomes as well as measures related to administrative performance.
Examples of outcome-focused performance measures include those related to employment (TANF, SSI, Section 8 Vouchers), food security (SNAP), and the level of poor housing situations (Section 8 Vouchers). Measures focused on administrative performance include those related to payment accuracy (SSI, SNAP), participation rates (TANF), and utilization rates (Section 8 Vouchers).

Selected GAO Reports on Program Evaluation
Program Evaluation: Some Agencies Reported that Networking, Hiring, and Involving Program Staff Help Build Capacity, GAO-15-25 (Washington, D.C.: November 13, 2014).
Program Evaluation: Strategies to Facilitate Agencies' Use of Evaluation in Program Management and Policy Making, GAO-13-570 (Washington, D.C.: June 26, 2013).
Performance Measurement and Evaluation: Definitions and Relationships (Supersedes GAO-05-739SP), GAO-11-646SP (Washington, D.C.: May 2, 2011).
Program Evaluation: Experienced Agencies Follow a Similar Model for Prioritizing Research, GAO-11-176 (Washington, D.C.: July 14, 2011).
Program Evaluation: A Variety of Rigorous Methods Can Help Identify Effective Interventions, GAO-10-30 (Washington, D.C.: November 23, 2009).
Program Evaluation: An Evaluation Culture and Collaborative Partnerships Help Build Agency Capacity, GAO-03-454 (Washington, D.C.: May 2, 2003).

Selected GAO Reports Related to Selected Programs
TANF: Action Is Needed to Better Promote Employment-Focused Approaches, GAO-15-31 (Washington, D.C.: November 19, 2014).
Rental Housing Assistance: HUD Data on Self-Sufficiency Programs Should Be Improved, GAO-13-581 (Washington, D.C.: July 9, 2013).
Moving to Work Demonstration: Improved Information and Monitoring Could Enhance Program Assessment, GAO-13-724T (Washington, D.C.: June 26, 2013).
TANF: Potential Options to Improve Performance and Oversight, GAO-13-431 (Washington, D.C.: May 15, 2013).
Tax Expenditures: Background and Evaluation Criteria and Questions, GAO-13-167SP (Washington, D.C.: November 29, 2012).
Social Security Disability: Participation in the Ticket to Work Program Has Increased, but More Oversight Needed, GAO-11-828T (Washington, D.C.: September 23, 2011).
Domestic Food Assistance: Complex System Benefits Millions, but Additional Efforts Could Address Potential Inefficiency and Overlap among Smaller Programs, GAO-10-346 (Washington, D.C.: April 15, 2010).
Government Performance and Accountability: Tax Expenditures Represent a Substantial Federal Commitment and Need to Be Reexamined, GAO-05-690 (Washington, D.C.: September 23, 2005).

Section 8 Housing Choice Vouchers
Evaluation of the Family Self-Sufficiency Program: Prospective Study http://www.huduser.org/portal/publications/FamilySelfSufficiency.pdf
Family Options Study http://www.huduser.org/portal/family_options_study.html
Moving to Opportunity for Fair Housing Demonstration Program - Final Impacts Evaluation http://www.huduser.org/portal/publications/pubasst/MTOFHD.html
Rent Reform Demonstration http://www.mdrc.org/project/rent-reform-demonstration#overview

Supplemental Nutrition Assistance Program (SNAP)
Reaching Underserved Elderly and Working Poor SNAP Evaluation http://www.fns.usda.gov/reaching-underserved-elderly-and-working-poor-snap-evaluation-findings-fiscal-year-2009-pilots
SNAP Education and Evaluation Study http://www.fns.usda.gov/snap-education-and-evaluation-study-wave-i-final-report

Supplemental Security Income (SSI)
Improving Access to Benefits for Persons with Disabilities Who Were Experiencing Homelessness: An Evaluation of the Benefits Entitlement Services Team Demonstration Project http://www.ssa.gov/policy/docs/ssb/v74n4/v74n4p45.html
Promoting Readiness of Minors in SSI (PROMISE) - Evaluation Design Report http://www.ssa.gov/disabilityresearch/promise.htm
TANF/SSI Disability Transition Project http://www.acf.hhs.gov/programs/opre/research/project/tanf/ssi-disability-transition-project
Ticket to Work Evaluations http://www.ssa.gov/disabilityresearch/twe_reports.htm
Youth Transition Demonstration Evaluation http://www.ssa.gov/disabilityresearch/youth.htm
Youth Transitioning Out of Foster Care: An Evaluation of a Supplemental Security Income Policy Change http://www.ssa.gov/policy/docs/ssb/v73n3/v73n3p53.html

Temporary Assistance for Needy Families (TANF)
Employment Retention and Advancement Project http://www.ssa.gov/policy/docs/ssb/v73n3/v73n3p53.html
Job Search Assistance (JSA) Strategies http://www.acf.hhs.gov/programs/opre/research/project/job-search-assistance-evaluation
Pathways to Advance Career Education (PACE) http://www.acf.hhs.gov/programs/opre/research/project/innovative-strategies-for-increasing-self-sufficiency
Subsidized and Transitional Employment Demonstration (STED) http://www.acf.hhs.gov/programs/opre/research/project/job-search-assistance-evaluation
TANF/SSI Disability Transition Project (listed above under SSI)

In addition to the contact named above, Gale Harris (Assistant Director), Theresa Lo (Analyst-in-Charge), Matthew Hunter, Brittni Milam, Rhiannon Patterson, Max Sawicky, and Rosemary Torres Lerma made significant contributions to this report. Also contributing significantly to this report were Chuck Bausell, James Bennett, Ted Burik, David Chrisinger, Sarah Cornetto, and Kirsten Lauber.
The federal government provides assistance aimed at helping people with low incomes who may earn too little to meet their basic needs, cannot support themselves through work, or are disadvantaged in other ways. With fiscal pressures facing the federal government and the demands placed on aid programs, GAO was asked to examine federal low-income programs. This report (1) describes federal programs (including tax expenditures) targeted to people with low incomes, (2) identifies the number and selected household characteristics of people in poverty, (3) identifies the number, poverty status, and household characteristics of selected programs' recipients, and (4) examines research on how selected programs may affect incentives to work. To develop a list of low-income programs with obligations of $100 million or more in fiscal year 2013, GAO consulted with the Congressional Research Service; surveyed and interviewed officials at relevant federal agencies; and reviewed relevant federal laws, regulations, and agency guidance. GAO also conducted analyses of low-income individuals using Census data on the SPM and the official poverty measure and microsimulation data from the Urban Institute that adjust for under-reporting of benefit receipt in Census survey data. To examine labor force effects, GAO reviewed economic literature. Selected low-income programs were large in dollars and helped meet a range of basic needs. GAO is not making new recommendations in this report. GAO clarified portions in response to comments from one agency. More than 80 federal programs (including 6 tax expenditures) provide aid to people with low incomes, based on GAO's survey of relevant federal agencies. Medicaid (the largest by far), the Supplemental Nutrition Assistance Program (SNAP), Supplemental Security Income (SSI), and the refundable portion of the Earned Income Tax Credit (EITC) comprised almost two-thirds of fiscal year 2013 federal obligations of $742 billion for these programs.
Aid is most often targeted to groups of the low-income population, such as people with disabilities and workers with children. Survey responses showed that criteria used to determine eligibility vary greatly; most common were variants of the federal poverty guidelines, which are based on the Census Bureau's official poverty measure. In 2013, 48.7 million people (15.5 percent), including many households with children, lived in poverty in the United States, based on Census's Supplemental Poverty Measure (SPM). This measure takes into account certain expenses and federal and state government benefits not included in the official poverty measure. The SPM is not used to determine program eligibility; however, it does provide more information than the official measure on household resources available to meet living expenses. In 2013, the SPM poverty threshold ranged from $21,397 to $25,639 for a family of four, depending on housing situations. Based on six mutually exclusive household types GAO developed, individuals in a household headed by a person with a disability or a single parent had the highest rates of poverty using the SPM, while childless or married-parent households had larger numbers of people in poverty using the SPM. In 2012, the most recent year of data available, GAO estimated that 106 million people, or one-third of the U.S. population, received benefits from at least one of eight selected federal low-income programs: the Additional Child Tax Credit, EITC, SNAP, SSI, and four others. Almost two-thirds of the eight programs' recipients were in households with children, including many married families. More than 80 percent of recipients also lived in households with some earned income during the year. Without these programs' benefits, GAO estimated that 25 million of these recipients would have been below the SPM poverty threshold.
Of the eight programs, EITC and SNAP moved the most people out of poverty; however, the majority of recipients of each of the programs were estimated to have incomes above the SPM threshold after accounting for receipt of benefits. Research suggests that assistance from selected means-tested low-income programs can encourage people's participation in the labor force but has mixed effects on the number of hours they work. Changes in certain low-income programs through the years, including the EITC, have enhanced incentives for people to join the labor force, according to studies. While workers who receive means-tested benefits face benefit reductions as their earnings rise, research shows that various factors limit how much people change their work behavior in response. For example, people may not be aware of such changing interactions in a complex tax and benefit system or may not be able to control the number of hours they work, according to studies. Research also shows that enhancing work incentives can create difficult policy trade-offs, including raising program costs or failing to provide adequate assistance to those in need.
The Army is made up of both operating and generating forces. Operating forces consist of combat units, including divisions, brigades, and battalions that conduct operations around the world, including contingency operations in Iraq and Afghanistan, as well as humanitarian assistance and civil support missions. The Army’s generating force consists of organizations that provide a broad range of support for the operating forces, such as training, supply, and maintenance. TRADOC is the largest part of the generating force and develops the Army’s soldiers and civilian leaders to ensure that the Army remains a modern and capable fighting force by developing warfighting concepts and doctrine and by providing recruiting, training, and associated support for military personnel. TRADOC’s core functions include, among other things, providing initial military training, leadership courses, and continued professional education courses to soldiers at all levels. TRADOC carries out its mission at 32 schools located on 15 different installations throughout the continental United States. The schools specialize in such training as infantry, intelligence, and aviation (see fig. 1). TRADOC uses a mix of three types of personnel—military, Department of the Army civilian, and contractor—to train soldiers. These personnel serve in various key roles, such as instructors, who teach the classes; doctrine developers, who develop, review, and update the doctrine, including field manuals and training circulars; training developers, whose function is to analyze, design, develop, and evaluate training and training products; and training support personnel, who perform functions necessary to conduct field training exercises. To determine its requirements for the aforementioned and other personnel, TRADOC uses a variety of methods, including modeling and manpower studies.
For example, to determine personnel requirements for instructors, TRADOC uses a model that takes the projected student workload and determines the number of instructors needed to meet that workload. The model also relies on other inputs and assumptions, such as optimal class size. TRADOC has developed similar models to determine personnel requirements for training developers and training support personnel. Once TRADOC has determined its requirements based on model outputs and other variables, the Army determines authorized personnel levels—the maximum number of military and civilian personnel TRADOC can assign in order to execute its mission. Authorized personnel levels are typically less than requirements because of budget constraints. Once the Army has determined authorized personnel levels, TRADOC positions can then be assigned to military personnel or filled by Department of the Army civilians or contractors. During fiscal year 2010, TRADOC was authorized about 41,000 positions. TRADOC received about $4.1 billion of the Army’s appropriation, of which 62 percent or approximately $2.5 billion was dedicated to training and training development. TRADOC leadership decided how those appropriated funds were to be allocated to each of its schools. TRADOC officials expressed concerns about shortfalls in key personnel. However, limitations exist in their approach to determine personnel requirements. In addition, TRADOC has not conducted a personnel mix assessment to determine the optimum mix of military, Army civilian, and contractor personnel. TRADOC’s stated personnel requirements for instructors, training developers, and training support personnel have remained relatively steady from fiscal years 2005 through 2011, as shown in figure 2. 
The figure also shows that over the same time period, TRADOC’s student workload has increased by about 185,000, from 399,371 to 584,299, as a result of factors such as increases in the Army’s end strength to support ongoing operations, which have led to a larger number of soldiers who need TRADOC training. In his 2010 memorandum, the TRADOC Commander raised concerns that manning shortfalls were putting TRADOC’s ability to successfully perform its core competencies and functions at risk. Similarly, at the time of our review, TRADOC headquarters and school officials stated that considering increases in student workload, TRADOC continued to face shortages in instructors, training developers, and training support personnel. However, we found limitations in the models that TRADOC uses for identifying personnel requirements for these key personnel. In determining instructor personnel requirements, TRADOC uses an instructor model based on a formula that relies on assumptions and inputs. According to the Army’s regulation on manpower management, Army commands are required to review the models they use to determine manpower requirements at least every 3 years, or more often as needed. Further, the regulation requires the U.S. Army Manpower Analysis Agency to review and recommend approval of these models to the Assistant Secretary of the Army for Manpower and Reserve Affairs, who is responsible for approving the models. However, we found that the instructor personnel requirements model has not been updated since 1998, and the assumptions and inputs used in the model may not reflect changes in how training is currently provided. For example, the model assumes that a course can and will be conducted in the same way every time it is taught. However, we found that the way in which TRADOC delivers a course can vary.
For example, instead of students traveling to attend training in the classroom, schools may use distance learning, which allows soldiers to complete computer- based training courses or selected modules of a course at their permanent duty locations. Some of these courses are self-paced and not instructor led; other courses are instructor led and utilize technology to reach more students. Because the model assumes that all courses are taught the same way, it does not take into consideration that in some cases, using distance learning may reduce the need for instructors while in other cases additional instructors may be needed to lead distance learning courses.  The model also assumes that TRADOC schools can use the same instructor to teach different courses. Specifically, there is an assumption that once an individual is certified as an instructor, that individual can teach any course by following the contents in the course curriculum. While this may be true for some general courses, more specialized courses require a background or familiarity with the subject matter in order to teach it. Because the model assumes that any instructor can teach any course, it may not accurately reflect the total number of instructors needed to teach all courses.  The model uses inputs that include a number of variables, such as workload requirements and data from course curricula. For example, the model uses indirect contact hours—the time allotted for instructor duties not related to formal class time, such as reviewing lesson plans and providing private counseling to students. However, according to a TRADOC official, using indirect contact hours in the model could cause inefficiencies in determining personnel requirements because the indirect contact hours have not been reviewed and updated in 10 years. Another input that goes into the model is the manpower availability factor—the amount of time personnel is available to perform their primary duties. 
According to Army Regulation 570-4 for manpower management, during normal operations in the United States, military and civilian personnel should generally be available to perform their assigned tasks for 145 hours per month, or approximately 18 days. However, according to some TRADOC school officials, expecting these personnel to actually be available for that many hours may be unrealistic. Specifically, they stated that instructors were not available for the mandated amount of time because they had to perform other activities, such as attending training or taking sick and annual leave. Since the model does not fully account for these activities, it may not accurately identify the total number of instructors needed to execute the training mission. In determining training developer personnel requirements, TRADOC had used a model but discontinued its use. The training developer model had not been updated since 1996, and TRADOC stopped using it in 2006 because the requirements it calculated were higher than those needed to complete the workload. Instead, in an effort to better align its requirements to its workload, TRADOC decided to use estimates of the time required to develop training products as the basis to determine the needed numbers of training developers. However, in 2009, the Army Manpower and Force Analysis Directorate stated that using this methodology was not a valid means of determining personnel requirements and TRADOC stopped using it. As a result, since 2009, TRADOC has used the same estimated number, with minor modifications, for training developer requirements from one fiscal year to the next. TRADOC officials recognize that this is not a valid approach and that they need to use an updated model to determine training developer requirements. According to a TRADOC official, TRADOC tentatively plans to begin a review to develop a new model in the second half of fiscal year 2012. 
The official did not believe TRADOC would be able to meet this deadline, however, because of competing priorities to develop other models. In determining training support personnel requirements, TRADOC uses a model that was updated and approved in 2010. The model originally defined training support personnel as individuals who performed classroom and field training activities. As part of the update, training support personnel were redefined to include only individuals who conducted some field training activities. As a result, requirements related to some activities covered under the original definition are not identified under the current process. For instance, prior to the update, the training support personnel model included requirements for individuals responsible for activities such as resetting computers in the classrooms or delivering ammunition to shooting ranges. These activities are no longer conducted by individuals who are considered training support personnel but are still needed to conduct the training mission. TRADOC has not developed personnel requirements models for these activities or factored in how these activities may be integrated into existing personnel requirements models. For example, TRADOC officials stated that they intend to develop a personnel requirements model for ammunition delivery, but as of July 2011, the model had not been developed. Similarly, tasks such as setting up computers may be assigned to instructors, but the instructor model has not been updated to reflect these workload requirements. Several of these limitations were also identified in an August 2010 Army Audit Agency review of TRADOC’s personnel requirements determination process for institutional training. While TRADOC acknowledged these limitations and stated that it would work to address them, we found that as of July 2011, these limitations remained. 
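The workload-driven logic of the instructor model discussed above can be sketched in a few lines. The report describes the model's inputs (projected student workload, class size, direct and indirect contact hours, and a manpower availability factor) but not its actual equations, so the function, variable names, and course figures below are hypothetical simplifications, not TRADOC's formula.

```python
import math

def instructors_required(students, class_size, contact_hours,
                         indirect_hours, available_hours=145):
    """Estimate instructors needed to deliver one course to all students."""
    sections = math.ceil(students / class_size)          # class sections to run
    hours_per_section = contact_hours + indirect_hours   # instructor hours per section
    total_hours = sections * hours_per_section           # total instructor hours needed
    # Default availability is the 145 hours per month cited from
    # Army Regulation 570-4.
    return math.ceil(total_hours / available_hours)

# Hypothetical course: 400 students in 20-seat sections, 60 contact
# plus 20 indirect hours per section.
baseline = instructors_required(400, 20, 60, 20)  # 12 instructors
# If instructors are actually available fewer hours, as some school
# officials suggested, the same workload requires more people:
adjusted = instructors_required(400, 20, 60, 20, available_hours=120)  # 14
```

The sketch also shows why the inputs the report flags (stale indirect contact hours, an optimistic availability factor) matter: small changes to either one shift the computed requirement.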
According to TRADOC officials, these models have not been developed or updated because of a lack of manpower and competing demands on personnel time. Officials in the office responsible for developing personnel requirements models stated that priority is currently placed on developing models that do not yet exist rather than on updating existing models. TRADOC is in the process of revising its overall approach to training, moving from traditional classroom training to a more technology-driven approach intended to enable soldiers to learn using a variety of techniques, including simulations, gaming technology, or other technology-delivered instruction. Officials stated that the impact of this new approach to learning on personnel requirements is unknown, but they believe that it may reduce personnel requirements, particularly for instructors, because of an increased reliance on technology rather than classroom instruction. Officials acknowledge that these changes will need to be incorporated into personnel requirements models. However, we found that as of July 2011, TRADOC had not established a timeline for updating these models. Without updated models, TRADOC cannot ensure that it is accurately identifying the numbers of instructors, training developers, and training support personnel needed to carry out its training mission. According to Army Regulation 570-4, determining manpower requirements includes a determination of optimum manpower mix. This step is typically completed after requirements and authorized personnel levels have been determined. TRADOC relies on a mix of military personnel, Army civilians, and contractors to accomplish its training mission. 
School officials we interviewed stated that their preference would be to use military personnel exclusively to provide training because military personnel have the knowledge and credibility gained from combat experience to teach and mentor soldiers; however, they recognize that this is not possible because of constraints on the availability of military personnel. Given those constraints, these officials agree that it is important to use an appropriate mix of personnel in order to maximize the benefit that each type of personnel adds to training. For example, officials said that civilians bring continuity to in-house training since they do not deploy, and contractor personnel bring flexibility that allows TRADOC officials to adjust personnel to meet fluctuations in student workload. According to TRADOC leadership, TRADOC schools rely heavily on contractors to execute training. TRADOC officials estimated that some courses are taught by an instructor mix that is 30 percent military and 70 percent civilians, including contractors. In addition, officials at several schools noted that without contractors they would not be able to meet the student workload. While several officials stated that the mix of personnel they are currently using may not be appropriate, most places we visited—including TRADOC headquarters—could not provide us with data about the number of contractors they were using to accomplish TRADOC’s training mission. As a result, they were unable to identify their true reliance on contractors or whether the number of contractors being used is too high. We found that officials at only one school that we visited had documented the number of contractor personnel. Using TRADOC data, we determined that in fiscal year 2010 contractors made up 46 percent of instructor personnel and 64 percent of training developer personnel at that school. 
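The mix tally that most schools could not produce requires only one piece of underlying data: who fills each position. A minimal sketch of such a tally follows; the function, field names, and roster are hypothetical, with the contractor share chosen to echo the 46 percent instructor figure from the one school that had documented its contractor use.

```python
from collections import Counter

def workforce_mix(roster):
    """Return each personnel type's share of a roster as whole percentages."""
    counts = Counter(roster)            # tally positions by personnel type
    total = sum(counts.values())
    return {kind: round(100 * n / total) for kind, n in counts.items()}

# Hypothetical 100-position instructor roster (splits are illustrative):
roster = ["contractor"] * 46 + ["military"] * 30 + ["civilian"] * 24
mix = workforce_mix(roster)
# mix == {"contractor": 46, "military": 30, "civilian": 24}
```

A tally like this is the starting point for the kind of mix assessment the report describes: without it, a school cannot say what its reliance on contractors actually is.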
In March 2010, the Department of the Army issued a memorandum directing generating force commands, including TRADOC, to develop a Generating Force Manpower Mix Assessment no later than June 1, 2010. According to the memorandum, the results of the manpower mix assessment are key to shaping strategic force structure decisions and should enable commands, including TRADOC, to do the right things in the most efficient manner to support Army requirements and standards. The memorandum identified several factors that should be considered in conducting the assessment, including reviewing tasks that are no longer necessary or are not being performed because of current manpower mix or levels. Further, the memorandum requires commands to project the mix of military, civilian, and contractor personnel in the most effective and cost-efficient manner as part of the assessment. According to TRADOC headquarters officials, each school should do its own mix assessment. However, TRADOC school officials stated that the schools have not done such assessments because of constant changes in funding, student workload, and availability of military personnel. TRADOC leadership recognizes that a personnel mix assessment is important and should be completed to ensure that the right mix of military, civilian, and contractor personnel are used as instructors in TRADOC courses. However, as of July 2011, TRADOC had no specific plan with milestones for its schools to conduct these personnel mix assessments. TRADOC has taken various workforce management actions in order to execute its training mission, but its quality assurance program does not capture the level of detail needed to evaluate the impact of these steps on the quality of training provided to soldiers. Workforce management actions include increasing student to instructor ratios, using contractors to augment military and Army civilian instructors, and reassigning doctrine and training developers to serve as instructors. 
TRADOC has established a quality assurance program to collect information that it uses to measure the effectiveness and quality of its training. TRADOC’s regulation setting out the description and requirements for the evaluation and quality assurance program assigns responsibilities to TRADOC regarding evaluations of its own courses and training materials. In addition, the TRADOC pamphlet that provides implementing guidance, formats, and techniques for TRADOC’s evaluation and quality assurance programs details different types of information that should be considered as part of the evaluation, such as whether training objectives are met (e.g., whether students are able to perform core tasks) and whether instructors perform to standards. The Command’s quality assurance program consists of the following types of evaluations:

- Accreditation. Accreditation is the formal recognition the TRADOC Commander gives to TRADOC schools, granting them authority to conduct or continue to conduct training. It certifies that a school’s training program, processes, personnel, administration, operations, and infrastructure are adequate to support training to course standards and that the school is adhering to TRADOC training guidance and directives. TRADOC guidance calls for schools to be evaluated by TRADOC every 3 years, using the Army Enterprise Accreditation Standards. Quality assurance officials at the schools conduct self-assessments to prepare for the accreditation process conducted by headquarters officials, and they also conduct accreditations of their own educational programs.

- Internal evaluation. This evaluation process includes classroom observations and internal surveys. During classroom observations, evaluators observe classes to ensure that training is being delivered in the right sequence, among other things. Evaluators use instructor performance checklists to capture information, such as how well the instructor introduces the course and presents course materials. An internal survey is conducted at the beginning of each course to determine the students’ knowledge of course content prior to starting the course and again at the end of the course to determine if the objectives of the course have been met. Questions on this survey focus on critical tasks for the job specialty that the course covers. Students also have an opportunity to write in comments on the survey.

- External evaluation. This process uses an external survey to determine if soldiers who attended a course can meet job performance requirements as a result of the training they received or if additional training is needed. The survey is sent to the soldier or the soldier’s supervisor, usually no sooner than 6 months after the soldier completes the training course. External evaluations determine if the training the soldiers receive prepares them to meet the needs of the operational Army.

TRADOC has taken a number of workforce management actions in order to execute its training mission. As discussed below, these actions include increasing student to instructor ratios, using contractors to augment military and Army civilian instructors, and using doctrine and training developers as instructors. We found mixed views among students and TRADOC officials at various levels about the impact of these actions on the quality of training. On the one hand, in a 2010 memorandum, TRADOC leadership raised concerns that the steps it had taken may be affecting TRADOC’s ability to carry out its core competencies, which include providing quality training. On the other hand, survey results from students indicated that they believed they received quality training. Further, officials at TRADOC headquarters and schools, including quality assurance personnel, as well as some students, stated that they believe that quality training is typically being provided. 
TRADOC has not established any metrics to measure the impact of its workforce management actions on training quality, and without such metrics it is unable to determine definitively what impact, if any, has occurred. At times, TRADOC has accommodated increases in student workload by changing the student to instructor ratio for certain courses, increasing the number of students in the classroom without adding instructors. Officials throughout TRADOC expressed concerns that increasing student to instructor ratios from what is prescribed in course curricula would affect the quality of training because larger class sizes reduce the amount of one-on-one time that instructors can spend with students. For example, at the infantry school at Fort Benning, school officials stated that one of the mortar courses was designed to be taught with a student to instructor ratio of 8:1 but instead was taught with a ratio of 23:1 in order to meet the student workload. According to officials at the school, instructors had less time to spend with students, and therefore students were only able to become familiar with mortars but not get fully trained on them. Similarly, officials at the aviation school at Fort Rucker stated that their student to instructor ratio for one class sometimes has to be increased from 2:1 to 3:1. According to officials, the safety risk for this class was increased and the quality of the training was affected because the instructor’s attention had to be divided among three students rather than two. In its 2010 study, the Army Audit Agency identified these issues and others associated with student to instructor ratios. For example, the study found that when student to instructor ratios increased, officer instruction became more task-centric, with less emphasis on leadership training. Quality assurance personnel evaluate compliance with student to instructor ratios by comparing the ratio used in the training environment to that outlined in the curricula. 
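The compliance comparison described above, checking the ratio used in the classroom against the one in the curriculum, can be sketched as follows, along with the gap data the Army Audit Agency recommended capturing. The function name and report fields are hypothetical; only the mortar-course figures (designed for 8:1, taught at 23:1) come from the report.

```python
def ratio_check(course, planned_ratio, actual_ratio):
    """Compare the taught student:instructor ratio against the curriculum's."""
    return {
        "course": course,
        "compliant": actual_ratio <= planned_ratio,
        # The impact-related data point the Army Audit Agency's
        # recommendation would, in effect, capture:
        "students_over_per_instructor": max(0, actual_ratio - planned_ratio),
    }

# Mortar course cited above: designed for 8:1 but taught at 23:1.
result = ratio_check("mortar course", planned_ratio=8, actual_ratio=23)
# result["compliant"] is False; 15 extra students per instructor
```

Recording the size of the gap, not just a pass/fail flag, is what would let leadership correlate ratio increases with training outcomes.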
However, the quality assurance program captures only whether instructors are teaching the course with the student to instructor ratios identified in the curriculum; it does not further investigate the impact of the increased ratio on training quality. As a result, no data are captured that can be used to evaluate the impact of having more students assigned to one instructor. In its report, the Army Audit Agency recommended that TRADOC identify metrics that could capture data on the effects of not complying with recommended student to instructor ratios. TRADOC responded that it would require quality assurance personnel at the schools to report any quality-of-training issues they found when evaluating their courses—including data on student to instructor ratios—beginning in the first quarter of fiscal year 2011. However, we found that as of July 2011, TRADOC schools had not yet been required to report this information. As a result, leadership lacks pertinent information that could help it assess the impact of increasing student to instructor ratios. As discussed earlier, TRADOC uses contractors to augment military and Army civilian training personnel. A mix of views has been expressed regarding the quality of training provided by contractor personnel. For example, in 2010, the Commanding General of TRADOC issued a memorandum stating that using contractors has led to a “degreening” of the force, meaning that not enough military personnel are involved in training soldiers. TRADOC officials believe that having military personnel in the classroom is extremely important because military personnel have the knowledge and credibility gained from combat experience to teach, coach, and mentor the soldiers they train. 
Alternatively, other TRADOC officials have noted the value that contractors bring by providing schools the flexibility to augment the number of instructors they have available in order to accommodate surges in student workload and fluctuations in the number of classes offered. Still other officials believe there is no significant difference in the quality of training provided by military, civilian, and contractor personnel. They noted that some of the contractor personnel used to provide training have prior military experience. For example, at Fort Huachuca, quality assurance officials stated that most of the contractor personnel used are former military personnel who had deployed at least one time. TRADOC’s quality assurance program does not systematically collect the data needed to evaluate the impact of the type of instructor on the quality of training. While TRADOC’s quality assurance program captures information such as how well the instructor follows lesson plans, demonstrates techniques, and responds to students’ needs, it does not systematically identify whether the instructor was military, civilian, or contractor. Further, internal surveys used to gather student feedback on courses do not include any specific questions requiring students to identify the type of instructor for the class. Students are able to provide their comments in the survey, which may include a discussion of the instructor, but there is no requirement for them to do so. As a result, TRADOC has no systematic method of compiling this data to determine the impact of instructor type on the quality of instruction. Another workforce management action TRADOC has taken is to use doctrine and training developers to serve as instructors. TRADOC’s quality assurance program evaluates whether instructors are teaching in accordance with what is in the curricula. 
However, when doctrine and training developers are being used as instructors, developers are not available to perform their primary task of developing, reviewing, and updating Army doctrine and curricula for TRADOC courses, which could affect the quality of training. Doctrine and curricula serve as the core for training at TRADOC schools, and developing, updating, and reviewing doctrine and curricula are critical to ensuring that TRADOC provides quality training. Doctrine, in the form of field manuals and other publications, establishes the foundation for how to think about operations and what to train soldiers on so that they can conduct operations. From doctrine, curricula are developed that provide a general description of course content, among other things. Army doctrine and curricula must complement one another so that soldiers are trained in accordance with guidance. Based on TRADOC guidance, doctrine should be reviewed at least every 18 months. According to a 2010 memorandum to Department of the Army Headquarters, TRADOC was behind in integrating lessons learned, developing training, and updating doctrine. As a result, there is a substantial doctrine and training development backlog. At the time of our audit, TRADOC doctrine developers were working on 223 doctrinal and supporting publications. As of May 2011, TRADOC had a backlog of 436 man-years for doctrine development. Our analysis of TRADOC data shows that only 37 percent of the 447 doctrinal publications in TRADOC’s inventory at that time were current. The remaining doctrinal publications needed to be developed, reviewed, or updated. Since doctrine guides what training is needed to enable soldiers to conduct operations, if it is not current, the quality of training provided to soldiers may be affected. TRADOC officials stated that as a best practice, curricula should be updated every 3 years. 
School officials stated that they try to update one-third of their curricula each year, but a number of factors, including using training developers to serve as instructors, have led to a backlog in updating curricula. As of April 2011, TRADOC had a backlog of 204 man-years for developing, updating, and reviewing 232 curricula that are considered critical to train soldiers on the necessary skills needed to perform their duties. In October 2010, TRADOC headquarters issued a tasking order to TRADOC schools requiring them to review and update the curricula for initial military training courses and to make necessary changes based on relevant and improved doctrine. No similar priority has been given for TRADOC’s schools to update other curricula. If curricula are not kept current, students may not be trained on the most recent information, and that information may not be institutionalized for future instruction. For example, according to school officials, having updated curricula is important because instructors are required to teach the information contained in those curricula. According to TRADOC officials, most schools are allowing instructors, with approval from the head of the school, to deviate from the curricula so that they can incorporate current lessons learned and best practices from the field into class instruction. While this enables individual instructors to overcome outdated material in the curricula, there is no way to ensure that different instructors for the same course will choose to incorporate the same information in their instruction. As a result, TRADOC is unable to guarantee that all students in the same course receive the same quality of training they need to successfully perform their tasks. To remain a modern and capable fighting force, the Army needs a training system that can respond to changing national security needs while balancing competing demands for personnel. 
At the same time, the Department of Defense is facing internal fiscal pressures and emphasizing the need to find greater efficiencies across the Department and opportunities for cost savings. In this environment, it is important that TRADOC strengthen its approach for determining the appropriate number and mix of personnel to serve as instructors, training developers, and training support personnel to execute its training mission. Currently, certain key personnel requirements models used by TRADOC are out of date, and the command has not conducted an assessment to determine the right mix of personnel—military, civilian, and contractor—it needs to provide training. As a result of these limitations, TRADOC officials lack a sound basis for quantifying concerns they have raised about manning shortfalls among key personnel. Similarly, limitations in TRADOC’s quality assurance program make it difficult for TRADOC to evaluate the impact of the workforce management actions it has taken to meet increased student workload. At the same time, the decision to use doctrine and training development personnel as instructors has contributed to the backlog of doctrine and curricula that need to be updated. As a result, soldiers are not always receiving instruction on the most current and relevant information. Strengthening its approach to determining personnel requirements, setting priorities for updating doctrine and curricula, and assessing the impact of its workforce measures will enable TRADOC to make necessary adjustments and potentially achieve greater efficiencies, save costs, and maximize the use of training resources. 
To ensure that TRADOC is requesting the appropriate number and mix of personnel to serve as instructors, training developers, and training support personnel, we recommend that the Secretary of the Army direct TRADOC to take the following three actions:

- Develop a plan with specific implementation milestones to update its personnel requirements models for training personnel, including (1) updating models for instructors and training developers and (2) developing models for field training and classroom setup personnel not covered in the training support personnel model, and adjust requirements accordingly.

- Perform an assessment to determine the right mix of military, civilian, and contractor personnel needed to accomplish the training mission and make necessary adjustments to the current mix.

- Establish metrics within its quality assurance program to enable TRADOC to evaluate how its workforce management actions, such as increasing reliance on contractors, affect the quality of training, and use the data collected from these metrics to make adjustments to training as needed.

To ensure that soldiers are being trained on the most current and relevant information, we recommend that the Secretary of the Army direct TRADOC to establish a plan to enable TRADOC to develop, review, and update doctrine and curricula by (1) setting additional priority areas beyond initial military training on which doctrine and training developers should focus and (2) identifying timelines by which these reviews should be completed. In written comments on a draft of this report, DOD concurred with our four recommendations. The full text of DOD’s written comments is reprinted in appendix II. DOD concurred with our recommendation that the Secretary of the Army direct TRADOC to develop a plan with specific implementation milestones to update its personnel requirements models for training personnel and adjust requirements accordingly. 
In its comments, DOD stated that TRADOC is currently undertaking an in-depth review of instructor and training developer functions that will establish new staffing criteria for these personnel. This study, termed the Army Learning Concept 2015, is expected to be completed in the summer of 2012. According to DOD, the study will also determine manning requirements for field training. DOD added that development of a model for ammunition delivery/recovery mentioned in the report has been completed, and documentation for this new model is now being prepared for commandwide staffing to assist Army headquarters in revising manning models. In a follow-up discussion, DOD and TRADOC officials stated that they are currently updating their instructor and training developer models and they intend to incorporate the results of the Army Learning Concept 2015 review in subsequent updates of those models. The officials added that classroom setup and other training support tasks are a normal function of instructors and that these tasks will therefore be addressed in the instructor model. DOD concurred with our recommendation that the Secretary of the Army direct TRADOC to perform an assessment to determine the right mix of military, civilian, and contractor personnel needed to accomplish the training mission and make necessary adjustments to the current mix. DOD stated that because of the different standards and requirements for divergent courses, there is no single standard for a mix of cadre across TRADOC. DOD agreed that some type of study is needed and that TRADOC will conduct this analysis and include the results in its curricula. DOD further stated that TRADOC will also examine the potential to include an analysis of the optimum mix of instructors within the curricula for individual courses. 
According to DOD, this data would allow TRADOC to better articulate its true needs and to understand the potential to rebalance the existing instructors across courses in support of the new training load. DOD concurred with our recommendation that the Secretary of the Army direct TRADOC to establish metrics within its quality assurance program to enable TRADOC to evaluate how its workforce management actions— such as increasing reliance on contractors—impact the quality of training and use the data collected from these metrics to make adjustments to training as needed. In its comments, DOD stated that TRADOC will implement initiatives to develop metrics and collect data that will enable it to evaluate its workforce management actions while assisting TRADOC and Army headquarters in assessing training effectiveness. DOD added that establishing these metrics is contingent upon availability of resources and funding, noting that TRADOC’s quality assurance program must maintain the personnel required to collect this data as well as acquire statisticians to analyze the data for management decisions. If resourced to conduct this analysis, TRADOC anticipates developing the metrics by August 2012. We recognize that resources are needed to develop metrics to capture the impact of workforce management actions on the quality of training. However, in the absence of allocating resources to develop such metrics, TRADOC will continue to lack a sound basis for evaluating the impact of workforce management actions on the quality of training. As a result, TRADOC risks missing opportunities to make any necessary adjustments that could potentially enhance its ability to maximize the use of training resources. 
Finally, DOD concurred with our recommendation that the Secretary of the Army direct TRADOC to establish a plan to enable TRADOC to develop, review, and update doctrine and curricula by setting additional priority areas on which doctrine and training developers should focus and identifying timelines by which these reviews should be completed. DOD stated that priorities for updating TRADOC’s doctrine and curricula are established to meet operational requirements that change based on the needs of the force. DOD added that update requirements have accelerated for the past decade and that TRADOC has been working to reduce the backlog. According to DOD, TRADOC has taken certain steps including refining guidance and establishing a plan and time frames for updating doctrine. For example, DOD noted that the TRADOC Commanding General has refined his doctrine development guidance in his Doctrine 2015 strategy, which called for the doctrine development process to be faster and accessible to the force. DOD also stated that a transition plan for Doctrine 2015 and a plan for managing the execution of Doctrine 2015 are being developed. We view these actions as positive steps with respect to updating doctrine. However, these steps are directed at doctrine rather than at curricula, and we therefore continue to believe that a plan is needed to address the backlog in curricula development to ensure that curricula are kept current. Without such a plan, TRADOC risks soldiers not receiving instruction on the most current and relevant information. We are sending copies of this report to the Secretary of Defense, the Secretary of the Army, and the Commander of U.S. Army Training and Doctrine Command. This report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9619 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Key contributors to this report are James A. Reynolds (Assistant Director), Chaneé Gaskin, Brian Mateja, and Sonja Ware. As mandated by the House Armed Services Committee report accompanying a proposed bill for the fiscal year 2011 National Defense Authorization Act (H.R. 5136), we examined the extent to which U.S. Army Training and Doctrine Command (TRADOC) has (1) identified the number and type of personnel it needs for instructors, training developers, and training support personnel to carry out its training mission and (2) evaluated the impact of its workforce management actions on the quality of training. To determine the extent to which TRADOC has identified the number and type of personnel it needs to carry out its training mission, we focused our review on instructor, training developer, and training support personnel— personnel types that TRADOC officials identified as having key roles in executing the training mission. We met with Department of the Army Headquarters officials; TRADOC personnel; and operations, budget, and training officials at TRADOC headquarters and schools. We discussed the process for determining student workload and personnel requirements, establishing authorized personnel levels, and allocating personnel among Army commands resulting from the Total Army Analysis process. In addition, we held discussions and obtained documentation from the Army Manpower Analysis Agency on the mathematical models and studies planned or completed by TRADOC to develop manpower estimating criteria. We also discussed the types of personnel (military, Army civilian, or contractor) who provided training to soldiers and the challenges associated with obtaining and using those types of personnel. We obtained and analyzed pertinent personnel and workload documentation to perform a trend analysis on personnel requirements, authorized personnel levels, and student workload from fiscal year 2005 through fiscal year 2011. 
Our analysis of personnel data compared the differences in required and authorized personnel for fiscal years 2005 through 2011. We focused our analysis on that time frame because we were able to obtain more complete information from these years. In addition, we reviewed curricula used to provide training to soldiers to determine the type of information included in the curricula and whether this information was current. We also examined relevant Department of Defense (DOD), Army, and TRADOC guidance, including DOD’s policy and procedures for determining workforce mix, the Army’s manpower management guidance, and TRADOC’s Systems Approach to Training. Finally, we reviewed previous reports issued by GAO and the U.S. Army Audit Agency on personnel requirements. To determine the extent to which TRADOC has evaluated the impact of workforce management actions it has taken to execute its training mission on the quality of training provided, we met with TRADOC personnel and operations and training officials, including quality assurance, doctrine, and training development officials, at TRADOC schools and headquarters. At the TRADOC schools we visited, we discussed the quality assurance instruments used to measure the quality of training provided, time frames for conducting the evaluations, and information gained from them. In addition, we collected examples of surveys, survey results, and accreditation summary reports. At TRADOC headquarters, we discussed and obtained documentation related to the Command’s ability to measure mission effectiveness. We discussed the Command’s accreditation process and TRADOC Headquarters’ involvement in its schools’ quality review processes. At both TRADOC Headquarters and TRADOC schools, we discussed challenges associated with developing, reviewing, and updating the doctrine and curricula used to provide training. Additionally, we obtained data on TRADOC’s doctrine and training development workload. 
We reviewed and analyzed the data to determine what percentage of the data was current and what percentage needed to be developed, reviewed, or updated. We obtained data to show a trend in doctrine and training development backlogs from fiscal years 2007 through 2010. We focused our analysis on that time frame because we were able to obtain more complete information from these years. Finally, we obtained and reviewed relevant TRADOC guidance on conducting quality review assessments and developing, reviewing, and updating doctrine and curricula. We visited seven schools that were identified by TRADOC Headquarters officials and in the TRADOC Commander's 2010 memorandum as being representative of TRADOC's challenges in providing training, such as having high student workload or using a large number of contractors. We conducted work at the following schools:

- United States Army Aviation Logistics School, Fort Eustis, Virginia;
- Maritime Transportation School, Fort Eustis, Virginia;
- Aviation Center of Excellence, Fort Rucker, Alabama;
- Infantry School, Fort Benning, Georgia;
- Military Police School, Fort Leonard Wood, Missouri;
- Intelligence Center of Excellence, Fort Huachuca, Arizona; and
- Signal Center of Excellence, Fort Gordon, Georgia.

We also conducted work at the following locations:

- Department of the Army Headquarters, Pentagon, Arlington, Virginia;
- TRADOC Headquarters, Fort Monroe, Virginia;
- Combined Arms Command, Fort Leavenworth, Kansas;
- Army Manpower Analysis Agency, Fort Belvoir, Virginia; and
- Army Human Resources Command, Fort Knox, Kentucky.

We conducted this performance audit from August 2010 to September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Military Training: Army and Marine Corps Face Challenges to Address Projected Future Requirements. GAO-10-720. Washington, D.C.: July 16, 2010.
Military Readiness: Navy Needs to Reassess Its Metrics and Assumptions for Ship Crewing Requirements and Training. GAO-10-592. Washington, D.C.: June 9, 2010.
Military Training: Actions Needed to Further Improve the Consistency of Combat Skills Training Provided to Army and Marine Corps Support Forces. GAO-10-465. Washington, D.C.: April 16, 2010.
Reserve Forces: Army Needs to Reevaluate Its Approach to Training and Mobilizing Reserve Component Forces. GAO-09-720. Washington, D.C.: July 17, 2009.
Military Training: Actions Needed to More Fully Develop the Army's Strategy for Training Modular Brigades and Address Implementation Challenges. GAO-07-936. Washington, D.C.: August 6, 2007.
Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government. GAO-04-546G. Washington, D.C.: March 2004.
Defense Management: Army Needs to Address Resource and Mission Requirements Affecting Its Training and Doctrine Command. GAO-03-214. Washington, D.C.: February 10, 2003.
To support ongoing operations, the Army gives priority to providing personnel to its operating forces over its support organizations, including Training and Doctrine Command (TRADOC). TRADOC performs various functions, such as developing warfighting doctrine and providing training. To help manage its workforce, TRADOC has taken certain actions, such as relying more on contractors and reassigning other staff to be instructors. In a February 2010 memorandum, the TRADOC Commander stated that because of various factors TRADOC's ability to successfully perform its core competencies and functions was increasingly at risk. House Armed Services Committee report 111-491 directed GAO to evaluate the availability of Army trainers. GAO assessed the extent to which TRADOC has (1) identified the number and type of personnel needed to carry out its training mission and (2) evaluated the impact of its workforce management actions on the quality of training. GAO interviewed key Army and TRADOC officials and reviewed relevant doctrine, guidance, curricula, personnel requirements data, and training survey results. TRADOC annually determines its requirements for key training positions, but limitations exist in its underlying approach, such as the use of outdated personnel requirements models. From fiscal years 2005 through 2011, TRADOC's requirements for instructors, training developers, and training support personnel have remained relatively steady while the student workload has increased by about a third. To determine personnel requirements, TRADOC uses various models involving formulas that rely on a range of assumptions and inputs. Army guidance requires Army commands to update models at least every 3 years, but TRADOC has not updated its model for determining the number of instructors it needs since 1998. 
As a result, assumptions and inputs used in the model may not reflect changes in how training is currently provided, such as the greater use of self-paced computerized learning in place of classroom instruction. Such changes could affect the number of instructors required to teach a course. In addition, TRADOC has used the same number, with minor modifications, for training developer requirements for the last 3 fiscal years. TRADOC officials recognize that using the same number for training developer requirements is not a valid approach and that an updated model is needed; however, they are unsure when they will be able to update the model. Lastly, TRADOC has not conducted an assessment to determine the optimum mix of military, Army civilian, and contractor personnel to use to execute its training mission. Without the benefit of models that are updated to more closely reflect current training conditions and without conducting a mix analysis, TRADOC does not have a sound basis for accurately identifying the number and types of personnel needed for key training personnel and making the most cost-effective use of training resources. TRADOC has taken various workforce management actions in order to execute its training mission, but its quality assurance program does not collect certain information needed to evaluate the impact of these actions on the quality of training. Among other things, TRADOC has increased the number of students that an instructor teaches, relied on more contractors as instructors, and reassigned doctrine and training developers to serve as instructors. Through surveys and other tools, TRADOC evaluates factors such as students' knowledge of course materials and whether an instructor is teaching from the curriculum, but it does not systematically collect the data needed to evaluate the impact of changing the student to instructor ratio or the type of instructor on the quality of training. 
TRADOC officials expressed mixed views about the impact of using contractors on the quality of training. Some believed that more military trainers are needed because these personnel have the knowledge and credibility gained from combat experience to teach soldiers, while others stated that contractors provide the same quality of training as military personnel. GAO noted that TRADOC's use of doctrine and training developers to serve as instructors is among the factors that have led to a backlog in updating doctrine and curricula, which could affect the quality of training. GAO recommends that TRADOC establish a plan to (1) update its personnel requirements models, doctrine, and curricula; (2) complete a personnel mix assessment; and (3) establish metrics to evaluate its workforce management actions. DOD concurred with the recommendations.
Since the end of the Cold War, the Navy has emphasized a strategy of littoral warfare. As part of this strategy, the Navy and the Marine Corps have been developing operational concepts for amphibious warfare, which rely heavily on the ability to launch and support amphibious assaults from ships up to 25 nautical miles from the enemy's shore. According to the Navy and the Marine Corps, to successfully conduct amphibious operations, the Marine Corps requires all-weather fire support. If artillery and other ground-based fire support assets are not available, Marine Corps ground forces will need long-range fire support from Navy surface ships or from attack helicopters and fixed-wing aircraft. Currently, the Navy operates the 5-inch, 54-caliber gun on cruisers and destroyers, which can fire unguided projectiles to a maximum range of about 13 nautical miles. According to the Navy and the Marine Corps, this short range, combined with threats to surface ships from mines and antiship missiles, currently precludes the Navy from adequately supporting Marine Corps amphibious operations or engaging other long-range targets. The Congress has been interested in the Navy's plans for NSFS since 1991. The National Defense Authorization Act for Fiscal Years 1992 and 1993 required (1) the Secretary of the Navy to provide a report to the Congress outlining NSFS requirements and survey alternative technologies and other options that could meet these requirements; (2) the Secretary of Defense, through the Institute for Defense Analysis, to provide a study of naval ship-to-shore fire support requirements and cost-effective alternatives; and (3) the Navy to conduct a cost and operational effectiveness analysis (COEA) based on the requirements and technologies identified in the first report. In the conference report to the National Defense Authorization Act for Fiscal Year 1995, the Congress required the Secretary of the Navy to submit a report on the Navy's NSFS plan. 
At the time of this review, this report had not been submitted to the Congress. In February 1993, the Center for Naval Analyses began the COEA. It evaluated the performance of 10 existing and candidate 5- and 8-inch and 155-millimeter gun systems with different propellants, flight classifications, and warhead types against target sets for three scenarios, two of which represented major regional conflicts. The third scenario represented a noncombatant evacuation operation. The Navy also evaluated seven missile concepts against these scenarios because it found that none of the gun systems could handle all of the target sets. The scenarios and target sets were developed jointly with the Marine Corps and validated by the COEA's oversight board. The COEA identified eight gun systems that, when combined with missiles, were capable of attacking at least 95 percent of the targets in the major regional conflict scenarios at the lowest total estimated cost. Five of these systems were 155-millimeter variants, and three were 8-inch variants with different propellants and calibers. The COEA concluded that a 155-millimeter, 60-caliber gun system with an advanced propellant and precision-guided munitions in combination with the Tomahawk missile was the most cost-effective NSFS option. According to the Navy, the only 5-inch gun candidate that was able to compete with other gun systems modeled in the COEA was a 5-inch, 70-caliber Magnum gun. This gun does not exist and would have to be developed. The COEA found that, for both major regional conflict scenarios, fewer 155-millimeter munitions and long-range missiles would be needed to hit a majority of the target sets than 5-inch, 70-caliber munitions and missiles. For example, the Navy could hit 99 percent of the targets in one scenario with 1,316 fewer 155-millimeter projectiles, and 34 fewer long-range missiles at a wartime cost of about $69 million less than with a combination of 5-inch, 70-caliber projectiles and missiles. 
Also, the COEA stated that, if the NSFS program became fiscally constrained, development of a 5-inch, 70-caliber gun might save money in the near term, making it an attractive option because of lower research and development costs, but (1) wartime costs would be considerably higher than with larger guns and (2) a 5-inch, 70-caliber gun would not adequately cover the targets. The Navy subsequently developed the NSFS program based on the results of the COEA. In March 1994, the Navy proposed (1) developing a new 155-millimeter, 60-caliber gun; (2) developing, along with the Army, a new 155-millimeter precision-guided munition; and (3) researching different propellants, including electro-thermal-chemical and liquid propellants. The Navy planned to field these new systems by fiscal year 2003. The Navy also proposed providing limited upgrades to existing 5-inch guns to achieve greater ranges until the 155-millimeter gun became available and planned to conduct concept demonstrations of various missiles. According to the Navy, the NSFS program had the potential for joint development of various propellants and commonality with Army 155-millimeter munitions. To fund this overall program, the Navy included $360 million for research and development in its proposed Future Years Defense Program for fiscal years 1996-2001 and expected to field the 155-millimeter gun in fiscal year 2003 on new-production DDG-51 destroyers or on a follow-on surface ship, known as SC-21. Funding shortfalls in the Navy’s fiscal year 1996 program objective memorandum led to a decision by the Navy to cut its NSFS program in August 1994 to help pay for programs that the Marine Corps considered vital to its amphibious capabilities. These programs included the V-22 medium-lift aircraft and the Advanced Amphibious Assault Vehicle. 
According to program officials, to stay within the reduced funding level, the Navy canceled plans to develop the 155-millimeter, 60-caliber gun and the 155-millimeter precision-guided munition and scaled back efforts to develop advanced propellants for 155-millimeter munitions. The Navy said it would consider this option as a long-term NSFS solution as it develops its new surface combatant ship, the SC-21. In the interim, the Navy has decided to upgrade its existing 5-inch, 54-caliber guns and develop a 5-inch precision-guided munition. According to program officials, the Navy made this decision primarily because it believed that modifying existing guns would be the quickest way to gain better gun capability at the least cost. In December 1994, the Chief of Naval Operations approved the Navy's revised NSFS plan, and in January 1995, directed the Naval Sea Systems Command to (1) initiate upgrades to the 5-inch, 54-caliber gun to deliver precision-guided munitions; (2) develop a 5-inch precision-guided munition with an initial operational capability before fiscal year 2001; and (3) scale back liquid propellant gun technology efforts. In addition, the Chief of Naval Operations directed that no funds be used to develop the 155-millimeter gun. According to the Navy, it will need about $246 million in research and development funds between fiscal years 1996 and 2001 for the revised NSFS program. About $165 million will be required to develop the precision-guided munition, $56 million to upgrade the 5-inch gun, and $25 million for research and development on NSFS-related command and control systems. The Navy included $160.2 million in its Future Years Defense Program for fiscal years 1996-2001 for research and development of the 5-inch gun and precision-guided munition, including $12 million for fiscal year 1996. As a result, the Navy's research and development program is underfunded by about $86 million. 
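The reported shortfall follows directly from the dollar figures above. As a quick arithmetic check (a minimal sketch using the amounts stated in this report, in millions of dollars; variable names are illustrative):

```python
# NSFS research and development needs, fiscal years 1996-2001,
# in millions of dollars (figures as stated in this report).
munition_dev = 165.0      # develop the 5-inch precision-guided munition
gun_upgrade = 56.0        # upgrade the 5-inch, 54-caliber gun
command_control = 25.0    # NSFS-related command and control systems

required = munition_dev + gun_upgrade + command_control  # about $246 million
budgeted = 160.2          # Future Years Defense Program funding

shortfall = required - budgeted
print(f"required ${required:.0f}M, shortfall about ${round(shortfall)}M")
# required $246M, shortfall about $86M
```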
Navy officials told us that funds would be added to the program in fiscal year 1997. In November 1994, 3 months after the Navy proposed the 5-inch, 54-caliber gun solution, the Marine Corps established a range requirement for NSFS that is less than the range requirements assumed in the COEA. Although the COEA does not specify a range requirement, the COEA assumed that a majority of the NSFS targets in the major regional conflict scenarios were located within 75 nautical miles of the fire support ship. This assumption was consistent with the findings of the July 1992 Navy NSFS requirements study and the June 1993 Institute for Defense Analysis study, which found that 75 nautical miles was the maximum required range to support the Marine Corps' operational concepts. Although range estimates for an upgraded 5-inch, 54-caliber gun vary, all estimates are less than 75 nautical miles. The June 1993 Institute for Defense Analysis study estimated that an advanced 5-inch gun projectile with rocket-assisted propulsion could achieve a range between 45 and 65 nautical miles. Navy officials told the Chief of Naval Operations that an upgraded 5-inch gun could achieve ranges between 45 and 70 nautical miles depending on the scope of the upgrade and the type of propellant used in the precision-guided munition. According to the Navy, to achieve a 70 nautical mile range, electro-thermal-chemical propellants may be needed, but these propellants have not yet been developed. In November 1994, the Marine Corps established a requirement for NSFS in terms of range, volume of fire, and lethality. Although it participated in developing the original 75 nautical mile range target assumption used in the COEA, the Marine Corps decided that the minimum range requirement for NSFS should be 41.3 nautical miles and that the maximum range should be 63.1 nautical miles. 
The Marine Corps based these ranges on its intent to use NSFS during the initial stages of an amphibious operation until artillery is ashore. Because its 155-millimeter towed artillery would be unavailable during the initial stages of an amphibious operation, the Marine Corps concluded that NSFS, at a minimum, must provide the same range, lethality, and accuracy as current artillery systems. The minimum 41.3 nautical mile range consists of the 25 nautical mile ship-to-shore distance plus a 16.3 nautical mile (30 kilometers) distance representing the maximum range of existing Marine Corps 155-millimeter artillery with rocket-assisted projectiles. To derive the maximum range of 63.1 nautical miles, the Marine Corps used the accepted minimum range for threat artillery articulated in the Army Field Artillery COEA of 21.8 nautical miles (40 kilometers) and added this range to the minimum range of 41.3 nautical miles. The Marine Corps’ intent to use NSFS during the initial stages of amphibious landing operations was outlined in the NSFS mission needs statement, which was signed by the Navy in May 1992. According to the statement, NSFS also involves suppressing and destroying hostile antiship weapons and air defense systems, delaying and disrupting enemy movements, and reinforcing defending forces. Marine Corps and Navy requirements officials also told us that the Marine Corps revised the 75 nautical mile range requirement because it was not logical, specifically defined, or formally agreed to by the Navy or the Marine Corps. We found this surprising because Navy and Marine Corps officials were involved in developing the target sets used in the COEA’s scenarios. The scenarios and target sets were also approved by officials from both services serving on the COEA’s oversight board. 
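The Marine Corps' two range figures can be reproduced from the distances described above. A minimal arithmetic sketch (distances as stated in this report, with the kilometer figures already expressed in nautical miles; variable names are illustrative):

```python
# Distances in nautical miles, as stated in this report.
ship_to_shore = 25.0    # assault launched up to 25 nm offshore
artillery_range = 16.3  # 30 km: Marine Corps 155-mm artillery, rocket-assisted
threat_range = 21.8     # 40 km: threat-artillery range from the Army COEA

minimum_nsfs = round(ship_to_shore + artillery_range, 1)  # 41.3 nm
maximum_nsfs = round(minimum_nsfs + threat_range, 1)      # 63.1 nm
print(minimum_nsfs, maximum_nsfs)  # 41.3 63.1
```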
The fact that the Navy and the Marine Corps established the new range requirement after the Navy completed work on the COEA and restructured the program raises questions about the validity of NSFS range requirements. The Marine Corps did not assess the impact of its new requirement on the target sets originally developed for the COEA or conduct any further analysis to validate these ranges. Therefore, the importance to the NSFS mission of targets located between 63 and 75 nautical miles from the ship is not clear. According to defense acquisition management policies and procedures, a COEA is intended to assist decisionmakers in choosing the best system alternative for the money invested and not to justify decisions that have already been made. The Navy did not perform a supplemental analysis to its original COEA before it decided to restructure the NSFS program. The Navy is currently conducting a supplemental analysis to evaluate near-term alternatives for NSFS. According to the Navy, this analysis will reflect the new Marine Corps’ maximum range requirement of 63.1 nautical miles and be limited only to 5-inch gun options. The Navy has asked the Center for Naval Analyses to complete this analysis by May 1995. It is not clear whether a supplemental analysis that considered all gun options—5 and 8 inch and 155 millimeter—against the Marine Corps’ new distance requirements would support the Navy’s decision to upgrade the 5-inch gun because (1) larger guns firing advanced projectiles with more payload can attack more targets than smaller, 5-inch guns and (2) the original COEA found that the rankings of the eight most cost-effective systems were not sensitive to range. The original COEA assessed the effectiveness of the eight most cost-effective systems when the ship-to-shore distance was reduced from 25 to 5 nautical miles and found that the cost-effectiveness rankings of the systems remained basically the same. 
Even at shorter ranges, the 155-millimeter, 60-caliber gun and Tomahawk missile combination remained the most cost-effective NSFS option. The Congress may wish to consider not authorizing or appropriating fiscal year 1996 funds for NSFS until the Navy has (1) determined and validated NSFS requirements and (2) conducted a comprehensive supplemental analysis to the COEA that includes all available gun and missile alternatives. The Department of Defense (DOD) did not concur with either the thrust of this report or the matter for congressional consideration (see app. II). DOD took issue with three major points in the report: the Marine Corps' range requirement, the Navy's long-term plans for the 155-millimeter gun, and our suggestion that the Navy is revising the COEA to justify decisions it had already made. DOD noted that the report incorrectly alludes to a Marine Corps initial NSFS requirement of 75 nautical miles. DOD said that the minimum 41.3 and maximum 63.1 nautical mile ranges established by the Marine Corps in November 1994 were the first explicit statement of the requirement based on a practical analysis of war-fighting scenarios. We do not agree with DOD's position. Although the COEA did not include a specific range requirement, a majority of the targets in the major regional conflict scenarios modeled by the COEA were located within 75 nautical miles of the fire support ship. The 75 nautical mile range was consistent with the findings of the July 1992 Navy NSFS requirements study and the June 1993 Institute for Defense Analysis study, which found that 75 nautical miles was the maximum required range to support the Marine Corps' operational concepts. Further, the Navy did not conduct an analysis to validate the relationship between the target set used in developing the COEA and the Marine Corps' new maximum range requirement of 63.1 nautical miles. 
Also, it should be noted that the original COEA found that the rankings of the eight most cost-effective systems were not sensitive to range. The original COEA assessed the effectiveness of the eight most cost-effective systems when the ship-to-shore distance was reduced from 25 to 5 nautical miles and found that the cost-effectiveness rankings of the systems remained basically the same. Even at shorter ranges, the 155-millimeter, 60-caliber gun and Tomahawk missile combination remained the most cost-effective NSFS option. DOD said that plans to develop the 155-millimeter gun and precision-guided projectile, as recommended in the COEA, have not been canceled and that this system remains a viable option for inclusion on the SC-21. This differs sharply from what Navy officials told us during the audit. Moreover, no funds have been budgeted for this program in the Future Years Defense Program for fiscal years 1996-2001. Also, in his December 1994 decision to focus on the 5-inch gun upgrade program, the Chief of Naval Operations directed that no funds be used to develop the 155-millimeter gun. DOD said that the Navy was not revising its COEA but was conducting a supplemental analysis to the original NSFS COEA. DOD noted that the purpose of the supplemental analysis was to determine the best near-term NSFS improvements to meet the range requirements established by the Marine Corps in November 1994. However, we note that the Navy requested the Center for Naval Analyses to perform the supplemental analysis 2 months after its decision to proceed with the restructured program. Because the Navy has restricted the supplemental analysis to only 5-inch gun solutions, rather than all potential gun solutions, we believe that the supplemental analysis may not determine the most cost-effective, near-term NSFS program. Our recent discussions with officials from the Center for Naval Analyses who are conducting the supplemental analysis have reinforced this view. 
According to these officials, the 5-inch precision-guided munition development program is a high-risk endeavor that requires concurrent development of a number of new technologies. One risk associated with concurrency is that fielding of the munition may be delayed beyond the year 2001. According to the Center for Naval Analyses, another risk is that the 5-inch munition may not be able to meet the Marine Corps’ maximum range requirement. DOD also disagreed with the matter for congressional consideration. DOD noted that its near-term program was consistent with the 1993 Institute for Defense Analysis study, which recommended developing advanced projectiles compatible with existing 5-inch, 54-caliber guns for the near term and that sufficient analysis has been conducted for the Navy to proceed with its program. DOD also stated that removal of fiscal year 1996 funding would slow the achievement of both near- and long-term objectives. From the outset, the Navy intended to use the COEA to determine the best program for NSFS. We continue to believe the Navy has not conducted sufficient analysis to support its near-term program. To obtain information on NSFS requirements and the Navy’s plans, we interviewed officials and reviewed documents from the Office of the Deputy Chief of Naval Operations for Resources, Warfare Requirements, and Assessments and the Office of the Assistant Secretary of the Navy for Research, Development, and Acquisition, Washington, D.C. We also interviewed officials and reviewed documents at the Marine Corps Combat Development Command, Quantico, Virginia; and the Naval Sea Systems Command, Crystal City, Virginia. We reviewed the Navy and the Office of the Secretary of Defense NSFS studies mandated by the Congress in the National Defense Authorization Act for Fiscal Years 1992 and 1993 and discussed them with Navy officials and representatives of the Institute for Defense Analysis, Alexandria, Virginia. 
The Navy did not provide us with a copy of the COEA, but we reviewed the COEA’s summary report dated March 31, 1994, which contained its major findings and conclusions. We discussed the COEA with officials of the Center for Naval Analyses, Alexandria, Virginia. We conducted our review between July 1993 and March 1995 in accordance with generally accepted government auditing standards. We are sending copies of this letter to the Secretaries of Defense and the Navy and the Commandant of the Marine Corps. We will also make copies available to others on request. Please contact me at (202) 512-3504 if you or your staff have any questions concerning this report. Major contributors to this report are Richard Price, Assistant Director; Anton Blieberger, Evaluator-in-Charge; and Robert Goldberg, Senior Evaluator. National Defense Authorization Act for Fiscal Years 1992 and 1993 mandates the Navy and the Office of the Secretary of Defense to assess naval surface fire support (NSFS) needs and the Navy to conduct a formal cost and operational effectiveness analysis (COEA). The Navy signs the NSFS mission needs statement. The Navy issues its first congressionally mandated report on NSFS requirements. The Navy begins the COEA. The Institute for Defense Analysis completes its assessment of NSFS. The Navy completes its work on the COEA and, on the basis of its results, proposes an NSFS program and funding in its Future Years Defense Program for fiscal years 1996-2001. The Navy restructures the NSFS program in light of funding shortfalls and cancels 155-millimeter, 60-caliber gun development. The Marine Corps identifies NSFS range requirements. The COEA is signed out for distribution by the Co-Chairs of COEA oversight board, but is not released to the Congress. The Navy proposes a revised NSFS program to the Chief of Naval Operations and obtains approval. 
The Chief of Naval Operations formally approves the NSFS range requirement and issues formal program guidance directing the Navy to pursue upgrades to 5-inch guns and development of a precision-guided munition.

The Navy asks the Center for Naval Analyses to provide, by May 1995, a supplemental analysis to its original COEA that reflects the Marine Corps’ new range requirements.
Pursuant to a congressional request, GAO reviewed the Navy's upgrade of its surface ships' guns to determine whether the Navy has chosen the most cost-effective system for improving naval surface fire support (NSFS). GAO found that: (1) the Navy did not sufficiently analyze its needs before deciding on the upgrade of its 5-inch, 54-caliber guns and the development of a 5-inch precision-guided munition; (2) the Navy determined that the most cost-effective system to meet NSFS needs by fiscal year (FY) 2003 would be a 155-millimeter, 60-caliber gun with an advanced propellant and precision-guided munitions in combination with the Tomahawk Land Attack Missile; (3) although it initially proposed to develop the guns at a cost of about $360 million, the Navy has decided to limit the program to upgrading existing guns and developing precision-guided munitions to meet the reduced funding level; (4) the Navy estimates that research and development (R&D) costs for the 5-inch guns will be about $246 million; (5) the Navy R&D budget has an $86 million shortfall that will be corrected in FY 1997; (6) the Marine Corps has revised its minimum NSFS range requirement to reflect the Navy's restructured gun program; and (7) the Navy is conducting a supplemental analysis to evaluate near-term alternatives for NSFS, but it is unclear whether this analysis will support the Navy's decision to upgrade the 5-inch gun.
Because of such emergencies as natural disasters, hazardous material spills, and riots, all levels of government have had some experience in preparing for different types of disasters and emergencies. Preparing for all potential hazards is commonly referred to as the “all-hazards” approach. While terrorism is a component within an all-hazards approach, terrorist attacks potentially impose a new level of fiscal, economic, and social dislocation within this nation’s boundaries. Given the specialized resources that are necessary to address a chemical or biological attack, the range of governmental services that could be affected, and the vital role played by private entities in preparing for and mitigating risks, state and local resources alone will likely be insufficient to meet the terrorist threat. Some of these specific challenges can be seen in the area of bioterrorism. For example, a biological agent released covertly might not be recognized for a week or more because symptoms may only appear several days after the initial exposure and may be misdiagnosed at first. In addition, some biological agents, such as smallpox, are communicable and can spread to others who were not initially exposed. These characteristics require responses that are unique to bioterrorism, including health surveillance, epidemiologic investigation, laboratory identification of biological agents, and distribution of antibiotics or vaccines to large segments of the population to prevent the spread of an infectious disease. The resources necessary to undertake these responses are generally beyond state and local capabilities and would require assistance from and close coordination with the federal government. National preparedness is a complex mission that involves a broad range of functions performed throughout government, including national defense, law enforcement, transportation, food safety and public health, information technology, and emergency management, to mention only a few. 
While only the federal government is empowered to wage war and regulate interstate commerce, state and local governments have historically assumed primary responsibility for managing emergencies through police, firefighters, and emergency medical personnel. The federal government’s role in responding to major disasters is generally defined in the Stafford Act, which requires a finding that the disaster is so severe as to be beyond the capacity of state and local governments to respond effectively before major disaster or emergency assistance from the federal government is warranted. Once a disaster is declared, the federal government—through the Federal Emergency Management Agency (FEMA)—may reimburse state and local governments for between 75 and 100 percent of eligible costs, including response and recovery activities. There has been an increasing emphasis over the past decade on preparedness for terrorist events. After the nerve gas attack in the Tokyo subway system on March 20, 1995, and the Oklahoma City bombing on April 19, 1995, the United States initiated a new effort to combat terrorism. In June 1995, Presidential Decision Directive 39 was issued, enumerating responsibilities for federal agencies in combating terrorism, including domestic terrorism. Recognizing the vulnerability of the United States to various forms of terrorism, the Congress passed the Defense Against Weapons of Mass Destruction Act of 1996 (also known as the Nunn-Lugar-Domenici program) to train and equip state and local emergency services personnel who would likely be the first responders to a domestic terrorist event. Other federal agencies, including those in the Department of Justice, Department of Energy, FEMA, and Environmental Protection Agency, have also developed programs to assist state and local governments in preparing for terrorist events. 
The attacks of September 11, 2001, as well as the subsequent attempts to contaminate Americans with anthrax, dramatically exposed the nation’s vulnerabilities to domestic terrorism and prompted numerous legislative proposals to further strengthen our preparedness and response. During the first session of the 107th Congress, several bills were introduced with provisions relating to state and local preparedness. For instance, the Preparedness Against Domestic Terrorism Act of 2001, which you cosponsored, Mr. Chairman, proposes the establishment of a Council on Domestic Preparedness to enhance the capabilities of state and local emergency preparedness and response. The funding for homeland security increased substantially after the attacks. According to documents supporting the president’s fiscal year 2003 budget request, about $19.5 billion in federal funding for homeland security was enacted in fiscal year 2002. The Congress added to this amount by passing an emergency supplemental appropriation of $40 billion. According to the budget request documents, about one-quarter of that amount, nearly $9.8 billion, was dedicated to strengthening our defenses at home, resulting in an increase in total federal funding on homeland security of about 50 percent, to $29.3 billion. Table 1 compares fiscal year 2002 funding for homeland security by major categories with the president’s proposal for fiscal year 2003. We have tracked and analyzed federal programs to combat terrorism for many years and have repeatedly called for the development of a national strategy for preparedness. We have not been alone in this message; for instance, national commissions, such as the Gilmore Commission, and other national associations, such as the National Emergency Management Association and the National Governors Association, have advocated the establishment of a national preparedness strategy. 
The attorney general’s Five-Year Interagency Counterterrorism and Technology Crime Plan, issued in December 1998, represents one attempt to develop a national strategy on combating terrorism. This plan entailed a substantial interagency effort and could potentially serve as a basis for a national preparedness strategy. However, we found it lacking in two critical elements necessary for an effective strategy: (1) measurable outcomes and (2) identification of state and local government roles in responding to a terrorist attack. In October 2001, the president established the Office of Homeland Security as a focal point with a mission to develop and coordinate the implementation of a comprehensive national strategy to secure the United States from terrorist threats or attacks. While this action represents a potentially significant step, the role and effectiveness of the Office of Homeland Security in setting priorities, interacting with agencies on program development and implementation, and developing and enforcing overall federal policy in terrorism-related activities are still being established. The emphasis needs to be on a national rather than a purely federal strategy. We have long advocated the involvement of state, local, and private-sector stakeholders in a collaborative effort to arrive at national goals. The success of a national preparedness strategy relies on the ability of all levels of government and the private sector to communicate and cooperate effectively with one another. To develop this essential national strategy, the federal role needs to be considered in relation to other levels of government, the goals and objectives for preparedness, and the most appropriate tools to assist and enable other levels of government and the private sector to achieve these goals. Although the federal government appears monolithic to many, in the area of terrorism prevention and response, it has been anything but. 
More than 40 federal entities have a role in combating and responding to terrorism, and more than 20 have a role in bioterrorism alone. One of the areas that the Office of Homeland Security will be reviewing is the coordination among federal agencies and programs. Concerns about coordination and fragmentation in federal preparedness efforts are well founded. Our past work, conducted prior to the creation of the Office of Homeland Security, has shown coordination and fragmentation problems stemming largely from a lack of accountability within the federal government for terrorism-related programs and activities. There had been no single leader in charge of the many terrorism-related functions conducted by different federal departments and agencies. In fact, several agencies had been assigned leadership and coordination functions, including the Department of Justice, the Federal Bureau of Investigation, FEMA, and the Office of Management and Budget. We previously reported that officials from a number of agencies that combat terrorism believe that the coordination roles of these various agencies are not always clear. The recent Gilmore Commission report expressed similar concerns, concluding that the current coordination structure does not provide the discipline necessary among the federal agencies involved. In the past, the absence of a central focal point resulted in two major problems. The first of these is a lack of a cohesive effort from within the federal government. For example, the Department of Agriculture, the Food and Drug Administration, and the Department of Transportation have been overlooked in bioterrorism-related policy and planning, even though these organizations would play key roles in response to terrorist acts. 
In this regard, the Department of Agriculture has been given key responsibilities to carry out in the event that terrorists were to target the nation’s food supply, but the agency was not consulted in the development of the federal policy assigning it that role. Similarly, the Food and Drug Administration was involved with issues associated with the National Pharmaceutical Stockpile, but it was not involved in the selection of all items procured for the stockpile. Further, the Department of Transportation has responsibility for delivering supplies under the Federal Response Plan, but it was not brought into the planning process and consequently did not learn the extent of its responsibilities until its involvement in subsequent exercises. Second, the lack of leadership has resulted in the federal government’s development of programs to assist state and local governments that were similar and potentially duplicative. After the terrorist attack on the federal building in Oklahoma City, the federal government created additional programs that were not well coordinated. For example, FEMA, the Department of Justice, the Centers for Disease Control and Prevention, and the Department of Health and Human Services all offer separate assistance to state and local governments in planning for emergencies. Additionally, a number of these agencies also condition receipt of funds on completion of distinct but overlapping plans. Although the many federal assistance programs vary somewhat in their target audiences, the potential redundancy of these federal efforts warrants scrutiny. In this regard, we recommended in September 2001 that the president work with the Congress to consolidate some of the activities of the Department of Justice’s Office for State and Local Domestic Preparedness Support under FEMA. State and local response organizations believe that federal programs designed to improve preparedness are not well synchronized or organized. 
They have repeatedly asked for a one-stop “clearinghouse” for federal assistance. As state and local officials have noted, the multiplicity of programs can lead to confusion at the state and local levels and can expend precious federal resources unnecessarily or make it difficult for them to identify available federal preparedness resources. As the Gilmore Commission report notes, state and local officials have voiced frustration about their attempts to obtain federal funds and have argued that the application process is burdensome and inconsistent among federal agencies. Although the federal government can assign roles to federal agencies under a national preparedness strategy, it will also need to reach consensus with other levels of government and with the private sector about their respective roles. Clearly defining the appropriate roles of government may be difficult because, depending upon the type of incident and the phase of a given event, the specific roles of local, state, and federal governments and of the private sector may not be separate and distinct. A new warning system, the Homeland Security Advisory System, is intended to tailor notification of the appropriate level of vigilance, preparedness, and readiness in a series of graduated threat conditions. The Office of Homeland Security announced the new warning system on March 12, 2002. The system includes five levels of alert for assessing the threat of possible terrorist attacks, each represented by a corresponding color: low (green), guarded (blue), elevated (yellow), high (orange), and severe (red). When the announcement was made, the nation stood at the yellow condition, signifying an elevated risk. The warning can be upgraded for the entire country or for specific regions and economic sectors, such as the nuclear industry. The system is intended to address a problem with the previous blanket warning system that was used. 
After September 11th, the federal government issued four general warnings about possible terrorist attacks, directing federal and local law enforcement agencies to place themselves on the “highest alert.” However, government and law enforcement officials, particularly at the state and local levels, complained that general warnings were too vague and a drain on resources. To obtain views on the new warning system from all levels of government, law enforcement, and the public, the United States Attorney General, who will be responsible for the system, provided a 45-day comment period from the announcement of the new system on March 12th. This provides an opportunity for state and local governments as well as the private sector to comment on the usefulness of the new warning system, and the appropriateness of the five threat conditions with associated suggested protective measures. Numerous discussions have been held about the need to enhance the nation’s preparedness, but national preparedness goals and measurable performance indicators have not yet been developed. These are critical components for assessing program results. In addition, the capability of state and local governments to respond to catastrophic terrorist attacks is uncertain. At the federal level, measuring results for federal programs has been a longstanding objective of the Congress. The Congress enacted the Government Performance and Results Act of 1993 (commonly referred to as the Results Act). The legislation was designed to have agencies focus on the performance and results of their programs rather than on program resources and activities, as they had done in the past. Thus, the Results Act became the primary legislative framework through which agencies are required to set strategic and annual goals, measure performance, and report on the degree to which goals are met. 
The outcome-oriented principles of the Results Act include (1) establishing general goals and quantifiable, measurable, outcome-oriented performance goals and related measures, (2) developing strategies for achieving the goals, including strategies for overcoming or mitigating major impediments, (3) ensuring that goals at lower organizational levels align with and support general goals, and (4) identifying the resources that will be required to achieve the goals. A former assistant professor of public policy at the Kennedy School of Government, now the senior director for policy and plans with the Office of Homeland Security, noted in a December 2000 paper that a preparedness program lacking broad but measurable objectives is unsustainable. This is because it deprives policymakers of the information they need to make rational resource allocations, and program managers are prevented from measuring progress. He recommended that the government develop a new statistical index of preparedness, incorporating a range of different variables, such as quantitative measures for special equipment, training programs, and medicines, as well as professional subjective assessments of the quality of local response capabilities, infrastructure, plans, readiness, and performance in exercises. Therefore, he advocated that the index should go well beyond the current rudimentary milestones of program implementation, such as the amount of training and equipment provided to individual cities. The index should strive to capture indicators of how well a particular city or region could actually respond to a serious terrorist event. This type of index, according to this expert, would then allow the government to measure the preparedness of different parts of the country in a consistent and comparable way, providing a reasonable baseline against which to measure progress. 
In October 2001, FEMA’s director recognized that assessments of state and local capabilities have to be viewed in terms of the level of preparedness being sought and what measurement should be used for preparedness. The director noted that the federal government should not provide funding without assessing what the funds will accomplish. Moreover, the president’s fiscal year 2003 budget request for $3.5 billion through FEMA for first responders—local police, firefighters, and emergency medical professionals—provides that these funds be accompanied by a process for evaluating the effort to build response capabilities, in order to validate that effort and direct future resources. FEMA has developed an assessment tool that could be used in developing performance and accountability measures for a national strategy. To ensure that states are adequately prepared for a terrorist attack, FEMA was directed by the Senate Committee on Appropriations to assess states’ response capabilities. In response, FEMA developed a self-assessment tool—the Capability Assessment for Readiness (CAR)—that focuses on 13 key emergency management functions, including hazard identification and risk assessment, hazard mitigation, and resource management. However, these key emergency management functions do not specifically address public health issues. In its fiscal year 2001 CAR report, FEMA concluded that states were only marginally capable of responding to a terrorist event involving a weapon of mass destruction. Moreover, the president’s fiscal year 2003 budget proposal acknowledges that our capabilities for responding to a terrorist attack vary widely across the country. Many areas have little or no capability to respond to a terrorist attack that uses weapons of mass destruction. The budget proposal further adds that even the best prepared states and localities do not possess adequate resources to respond to the full range of terrorist threats we face. 
Proposed standards have been developed for state and local emergency management programs by a consortium of emergency managers from all levels of government and are currently being pilot tested through the Emergency Management Accreditation Program at the state and local levels. The program’s purpose is to establish minimum acceptable performance criteria by which emergency managers can assess and enhance current programs to mitigate, prepare for, respond to, and recover from disasters and emergencies. For example, one such standard requires that (1) the program develop the capability to direct, control, and coordinate response and recovery operations, (2) an incident management system be utilized, and (3) organizational roles and responsibilities be identified in the emergency operational plans. Although FEMA has experience in working with others in the development of assessment tools, it has had difficulty in measuring program performance. As the president’s fiscal year 2003 budget request acknowledges, FEMA generally performs well in delivering resources to stricken communities and disaster victims quickly. The agency performs less well in its oversight role of ensuring the effective use of such assistance. Further, the agency has not been effective in linking resources to performance information. FEMA’s Office of Inspector General has found that FEMA did not have an ability to measure state disaster risks and performance capability, and it concluded that the agency needed to determine how to measure state and local preparedness programs. Since September 11th, many state and local governments have faced declining revenues and increased security costs. A survey of about 400 cities conducted by the National League of Cities reported that since September 11th, one in three American cities saw their local economies, municipal revenues, and public confidence decline while public-safety spending rose. 
Further, the National Governors Association estimates fiscal year 2002 state budget shortfalls of between $40 billion and $50 billion, making it increasingly difficult for the states to take on expensive, new homeland security initiatives without federal assistance. State and local revenue shortfalls coupled with increasing demands on resources make it more critical that federal programs be designed carefully to match the priorities and needs of all partners—federal, state, local, and private. Our previous work on federal programs suggests that the choice and design of policy tools have important consequences for performance and accountability. Governments have at their disposal a variety of policy instruments, such as grants, regulations, tax incentives, and regional coordination and partnerships, that they can use to motivate or mandate other levels of government and private-sector entities to take actions to address security concerns. The design of federal policy will play a vital role in determining success and ensuring that scarce federal dollars are used to achieve critical national goals. Key to the national effort will be determining the appropriate level of funding so that policies and tools can be designed and targeted to elicit a prompt, adequate, and sustainable response while also protecting against federal funds being used to substitute for spending that would have occurred anyway. The federal government often uses grants to state and local governments as a means of delivering federal programs. Categorical grants typically permit funds to be used only for specific, narrowly defined purposes. Block grants typically can be used by state and local governments to support a range of activities aimed at achieving a broad national purpose and to provide a great deal of discretion to state and local officials. 
Either type of grant can be designed to (1) target the funds to states and localities with the greatest need, (2) discourage the replacement of state and local funds with federal funds, commonly referred to as “supplantation,” with a maintenance-of-effort requirement that recipients maintain their level of previous funding, and (3) strike a balance between accountability and flexibility. More specifically: Targeting: The formula for the distribution of any new grant could be based on several considerations, including the state or local government’s capacity to respond to a disaster. This capacity depends on several factors, the most important of which perhaps is the underlying strength of the state’s tax base and whether that base is expanding or is in decline. In an August 2001 report on disaster assistance, we recommended that the director of FEMA consider replacing the per-capita measure of state capability with a more sensitive measure, such as the amount of a state’s total taxable resources, to assess the capabilities of state and local governments to respond to a disaster. Other key considerations include the level of need and the costs of preparedness. Maintenance-of-effort: In our earlier work, we found that substitution is to be expected in any grant and, on average, every additional federal grant dollar results in about 60 cents of supplantation. We found that supplantation is particularly likely for block grants supporting areas with prior state and local involvement. Our recent work on the Temporary Assistance for Needy Families block grant found that a strong maintenance-of-effort provision limits states’ ability to supplant. Recipients can be penalized for not meeting a maintenance-of-effort requirement. 
Balance accountability and flexibility: Experience with block grants shows that such programs are sustainable if they are accompanied by sufficient information and accountability for national outcomes to enable them to compete for funding in the congressional appropriations process. Accountability can be established for measured results and outcomes that permit greater flexibility in how funds are used while at the same time ensuring some national oversight. Grants previously have been used for enhancing preparedness, and recent proposals direct new funding to local governments. In recent discussions, local officials expressed their view that federal grants would be more effective if they were allowed more flexibility in the use of funds. They have suggested that some funding should be allocated directly to local governments. They have expressed a preference for block grants, which would distribute funds directly to local governments for a variety of security-related expenses. Recent funding proposals, such as the $3.5 billion block grant for first responders contained in the president’s fiscal year 2003 budget, have included some of these provisions. This matching grant would be administered by FEMA, with 25 percent being distributed to the states based on population. The remainder would go to states for pass-through to local jurisdictions, also on a population basis, but states would be given the discretion to determine the boundaries of substate areas for such a pass-through—that is, a state could pass through the funds to a metropolitan area or to individual local governments within such an area. 
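The grant-design figures discussed above lend themselves to a quick arithmetic check. The sketch below combines the proposed first-responder block grant split (25 percent retained by the states and the remainder passed through to local jurisdictions, both allocated by population) with the roughly 60-cents-per-dollar supplantation estimate from our earlier work. The state names and population figures are hypothetical, and the calculation is illustrative only; it is not part of any actual proposal.

```python
# Illustrative arithmetic only. The 25/75 split by population follows the
# proposed $3.5 billion first-responder grant described above; the 0.60
# supplantation rate is the average cited from earlier GAO work. The state
# names and populations below are made up for the example.

TOTAL_GRANT = 3.5e9          # proposed fiscal year 2003 funding
SUPPLANTATION_RATE = 0.60    # state/local dollars displaced per grant dollar

state_populations = {"State A": 20_000_000, "State B": 5_000_000}
total_pop = sum(state_populations.values())

def allocate(share):
    """Distribute a share of the total grant proportionally to population."""
    pool = TOTAL_GRANT * share
    return {s: pool * p / total_pop for s, p in state_populations.items()}

state_portion = allocate(0.25)   # retained at the state level
pass_through = allocate(0.75)    # passed through to local jurisdictions

def net_new_spending(grant_dollars, rate=SUPPLANTATION_RATE):
    """Net increase in total program spending after supplantation,
    absent a maintenance-of-effort requirement."""
    return grant_dollars * (1 - rate)

for s in state_populations:
    total = state_portion[s] + pass_through[s]
    print(f"{s}: ${total:,.0f} gross, ${net_new_spending(total):,.0f} net new")
```

Under these assumptions, the hypothetical State A’s $2.8 billion gross allocation would translate into only about $1.12 billion in net new spending, which is why the maintenance-of-effort provisions discussed above matter for grant design.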
Although the state and local jurisdictions would have discretion to tailor the assistance to meet local needs, it is anticipated that more than one-third of the funds would be used to improve communications; an additional one-third would be used to equip state and local first responders; and the remainder would be used for training, planning, technical assistance, and administration. Federal, state, and local governments share authority for setting standards through regulations in several areas, including infrastructure and programs vital to preparedness (for example, transportation systems, water systems, public health). In designing regulations, key considerations include how to provide federal protections, guarantees, or benefits while preserving an appropriate balance between federal and state and local authorities and between the public and private sectors (for example, for chemical and nuclear facilities). In designing a regulatory approach, the challenges include determining who will set the standards and who will implement or enforce them. Five models of shared regulatory authority are: fixed federal standards that preempt all state regulatory action in the subject area covered; federal minimum standards that preempt less stringent state laws but permit states to establish standards that are more stringent than the federal; inclusion of federal regulatory provisions not established through preemption in grants or other forms of assistance that states may choose to accept; cooperative programs in which voluntary national standards are formulated by federal and state officials working together; and widespread state adoption of voluntary standards formulated by quasi-official entities. Any one of these shared regulatory approaches could be used in designing standards for preparedness. The first two of these mechanisms involve federal preemption. The other three represent alternatives to preemption. 
Each mechanism offers different advantages and limitations that reflect some of the key considerations in the federal-state balance. To the extent that private entities will be called upon to improve security over dangerous materials or to protect vital assets, the federal government can use tax incentives to encourage such activities. Tax incentives are the result of special exclusions, exemptions, deductions, credits, deferrals, or tax rates in the federal tax laws. Unlike grants, tax incentives do not generally permit the same degree of federal oversight and targeting, and they are generally available by formula to all potential beneficiaries who satisfy congressionally established criteria. Promoting partnerships between critical actors (including different levels of government and the private sector) helps maximize resources and also supports coordination on a regional level. Partnerships could encompass federal, state, and local governments working together to share information, develop communications technology, and provide mutual aid. The federal government may be able to offer state and local governments assistance in certain areas, such as risk management and intelligence sharing. In turn, state and local governments have much to offer in terms of knowledge of local vulnerabilities and resources, such as local law enforcement personnel, available to respond to threats and emergencies in their communities. The importance of readily available urban search and rescue was highlighted in the Loma Prieta earthquake in October 1989 that collapsed the Cypress section of the Nimitz Freeway in Oakland and structures in San Francisco and Santa Cruz. 
In late 1989, California's Governor's Office of Emergency Services developed a proposal to enhance urban search and rescue capabilities in the state; the cornerstone of this proposal was the development of multidiscipline urban search and rescue task forces to be deployed in the event of large-scale disasters. A parallel effort was undertaken by FEMA at that time to upgrade urban search and rescue efforts nationwide. FEMA's national urban search and rescue response teams provide a framework for structuring local emergency personnel into integrated disaster response task forces. FEMA has 28 urban search and rescue teams, with 8 of those teams positioned in California. Twenty of FEMA's 28 teams were deployed to New York in the aftermath of the tragedy, and five teams were deployed to Washington to help in search and rescue efforts at the Pentagon. Since the events of September 11th, a task force of mayors and police chiefs has called for a new protocol governing how local law enforcement agencies can assist federal agencies, particularly the FBI, if given the information needed to do so. As the United States Conference of Mayors noted, a close working partnership of local and federal law enforcement agencies, which includes the sharing of intelligence, will expand and strengthen the nation's overall ability to prevent and respond to domestic terrorism. The USA Patriot Act provides for greater sharing of intelligence among federal agencies. An expansion of this act has been proposed (S. 1615, H.R. 3285) that would provide for information sharing among federal, state, and local law enforcement agencies. In addition, the Intergovernmental Law Enforcement Information Sharing Act of 2001 (H.R. 3483), which you sponsored, Mr. Chairman, addresses a number of information-sharing needs. 
For instance, this proposed legislation provides that the United States Attorney General expeditiously grant security clearances to governors who apply for them and to state and local officials who participate in federal counterterrorism working groups or regional terrorism task forces. Local officials have emphasized the importance of regional coordination. Regional resources, such as equipment and expertise, are essential because of proximity, which allows for quick deployment, and because of experience in working within the region. Large-scale or labor-intensive incidents quickly deplete a given locality's supply of trained responders. Some cities have extended training and equipment to neighboring municipal areas so that their mutual aid partners can help. These partnerships afford economies of scale across a region. In events that require a quick response, such as a chemical attack, regional agreements take on greater importance because many local officials do not think that federal and state resources can arrive in sufficient time to help. Mutual aid agreements provide a structure for assistance and for sharing resources among jurisdictions in response to an emergency. Because individual jurisdictions may not have all the resources they need to respond to all types of emergencies, these agreements allow resources to be deployed quickly within a region. The terms of mutual aid agreements vary for different services and different localities. These agreements may provide for the state to share services, personnel, supplies, and equipment with counties, towns, and municipalities within the state; with neighboring states; or, in the case of states bordering Canada, with jurisdictions in another country. Some of the agreements also provide for cooperative planning, training, and exercises in preparation for emergencies. Some of these agreements involve private companies and local military bases, as well as local government entities. 
Such agreements were in place for the three sites involved on September 11th—New York City, the Pentagon, and a rural area of Pennsylvania—and they provide examples of some of the benefits of mutual aid agreements and of coordination within a region. With regard to regional planning and coordination, there may be federal programs that could provide models for funding proposals. In the 1962 Federal-Aid Highway Act, the federal government established a comprehensive cooperative process for transportation planning. This model of regional planning continues today under the Transportation Equity Act for the 21st Century (TEA-21), the successor to the Intermodal Surface Transportation Efficiency Act (ISTEA). The model emphasizes the role of state and local officials in developing a plan to meet regional transportation needs. Metropolitan Planning Organizations (MPOs) coordinate the regional planning process and adopt a plan, which is then approved by the state. Mr. Chairman, in conclusion, as increasing demands are placed on budgets at all levels of government, it will be necessary to make sound choices to maintain fiscal stability. All levels of government and the private sector will have to communicate and cooperate effectively with each other across a broad range of issues to develop a national strategy that better targets available resources to address urgent national preparedness needs. Involving all levels of government and the private sector in developing the key aspects of a national strategy that I have discussed today—defining and clarifying the appropriate roles and responsibilities, establishing goals and performance measures, and selecting the appropriate tools—is essential to the successful formulation of the national preparedness strategy and, ultimately, to preparing and defending our nation from terrorist attacks. This completes my prepared statement. I would be pleased to respond to any questions you or other members of the subcommittee may have. 
For further information about this testimony, please contact me at (202) 512-6737, Paul Posner at (202) 512-9573, or JayEtta Hecker at (202) 512-2834. Other key contributors to this testimony include Jack Burriesci, Matthew Ebert, Colin J. Fallon, Thomas James, Kristen Sullivan Massey, Yvonne Pufahl, Jack Schulze, and Amelia Shachoy. 
Federal, state, and local governments share responsibility for preparing for and responding to terrorist attacks. However, local government, including police and fire departments, emergency medical personnel, and public health agencies, is typically the first responder to an incident. The federal government historically has provided leadership, training, and funding assistance. In the aftermath of September 11, for instance, one-quarter of the $40 billion Emergency Response Fund was earmarked for homeland security, including enhancing state and local government preparedness. Because the national security threat is diffuse and the challenge is highly intergovernmental, national policymakers must formulate strategies with a firm understanding of the interests, capacity, and challenges facing those governments. The development of a national strategy will improve national preparedness and enhance partnerships between federal, state, and local governments. The creation of the Office of Homeland Security is an important and potentially significant first step. The Office of Homeland Security's strategic plan should (1) define and clarify the appropriate roles and responsibilities of federal, state, and local entities; (2) establish goals and performance measures to guide the nation's preparedness efforts; and (3) carefully choose the most appropriate tools of government to implement the national strategy and achieve national goals.
DODIG has taken a number of actions to improve its tracking of the timeliness of military whistleblower reprisal investigations, including developing an automated tool to address statutory notification requirements. However, DODIG does not regularly report to Congress on the timeliness of military whistleblower reprisal investigations. In both 2012 and 2015, we found that DOD was not meeting its internal timeliness requirements for completing military whistleblower reprisal investigations within 180 days. Specifically, in 2012 we found that despite undertaking efforts to improve timeliness—such as changing its process for taking in complaints—DOD took a mean of 451 days to process cases, and that its efforts to improve case processing times were hindered by unreliable and incomplete data on timeliness. Further, in 2015 we found that DOD's average investigation time for cases closed in fiscal years 2013 and 2014 was 526 days, almost three times DOD's internal completion requirement of 180 days. DOD Directive 7050.06, which implements 10 U.S.C. § 1034 and establishes DOD policy, states that DODIG shall issue a whistleblower reprisal investigation report within 180 days of the receipt of the allegation of reprisal. To improve the timeliness of military whistleblower reprisal investigations, we recommended in February 2012 that DOD (1) implement procedures to track and report data on its case processing timeliness and (2) track and analyze timeliness data to identify reforms that could aid in processing cases within the 180-day time frame. DOD concurred and subsequently took several actions to implement these recommendations. For example, in December 2012 DODIG began implementing a case management system to collect key dates to track the timeliness of DODIG's investigative phases, and in March 2016 it issued a case management system guide that established procedures to help ensure accurate and complete recording and consistent tracking of case processing time. 
Further, DODIG took steps to track and analyze timeliness data that could aid in processing cases within the 180-day time frame by compiling quarterly timeliness metrics starting in fiscal year 2014 and by updating its case management system in April 2016 to include additional investigation milestones. Because some of these actions were not taken until 2016, it is too early to determine whether timeliness has improved since we last reported on this issue. In both our 2012 and 2015 reports, we found that DOD generally did not meet statutory requirements for notifying servicemembers within 180 days about delays in investigations. According to 10 U.S.C. § 1034, if, during the course of an investigation, an IG determines that it is not possible to submit the report of investigation to the Secretary of Defense and the service Secretary within 180 days after the receipt of the allegation, the IG shall provide to the Secretary of Defense, the service Secretary concerned, and the servicemember making the allegation a notice of that determination, including the reasons why the report may not be submitted within that time and an estimate of the date when the report will be submitted. In 2012, we found that neither DODIG nor the military service IGs had been making the required notifications. During that review, DODIG changed its practice, started reporting this information in October 2011, and identified steps in an action plan to help ensure that it and the military service IGs followed the statutory reporting requirements. During our 2015 review, DODIG officials stated that they had taken additional steps to help ensure they met the statutory notification requirement. For example, DODIG assigned an oversight investigator to remind the service IGs to send the required letters and developed a mechanism in DODIG's case management system to indicate which cases were older than 180 days. 
However, during our 2015 review, we again found that DOD had not sent the required letters to notify servicemembers about delays in their investigations in about half of the reprisal investigations closed in fiscal year 2013; that the median notification time for servicemembers receiving the required letter was about 353 days after the servicemember filed the complaint; and that the letters that DOD had sent had, on average, significantly underestimated the date by which the investigation would be completed. Consequently, we recommended in our 2015 report that DOD develop an automated tool to help ensure compliance with the statutory 180-day notification requirement by providing servicemembers with accurate information regarding the status of their reprisal investigations within 180 days of receipt of an allegation of reprisal. DOD concurred with this recommendation and in April 2016 launched an automated tool within its case management system to help ensure compliance with the statutory 180-day notification requirement, instead of relying on its manual reconciliation process. Specifically, the case management system now has an alert that provides the age of the case and the date by which the notification letter must be transmitted to the required parties. This tool is to help provide assurance that servicemembers are being notified of the status of their reprisal investigations. In 2012, we found that although DODIG is required to keep Congress fully and currently informed through, among other things, its semiannual reports to Congress, DODIG was not including in these reports information on military whistleblower case processing time, including (1) statutorily required notifications of delays in the investigations or (2) cases exceeding DODIG's internal 180-day completion requirement. 
The semiannual report to Congress is required to include information on fraud, abuses, and deficiencies related to the administration of programs and operations managed or financed by DOD, but DOD interpreted this requirement as not applying to the military whistleblower reprisal program. Because Congress is the primary oversight body for DODIG, we recommended that DOD regularly report to Congress on the timeliness of military whistleblower reprisal investigations, including those exceeding the 180-day timeframe. DOD concurred with our recommendation. On August 31, 2016, the DOD Principal Deputy Inspector General performing the duties of the DOD Inspector General stated that the office will implement this recommendation by regularly reporting timeliness information to Congress on a biannual basis. We believe that if this action is taken, it will fully implement our recommendation, provide Congress with enhanced visibility over the status of military whistleblower reprisal investigations, and thereby improve decisionmakers’ ability to effectively oversee the military whistleblower reprisal program. In 2012 and 2015, we found that DODIG’s oversight of military whistleblower reprisal investigations conducted by the military services was hampered by insufficient processes, including performance metrics; guidance; and plans. DOD subsequently took steps to strengthen its oversight of military whistleblower reprisal investigations conducted by the military services by establishing processes and developing guidance for overseeing these investigations—along with a plan to expand its case management system to the services. In 2012, we found that DODIG lacked reliable data on the corrective actions taken in response to substantiated whistleblower reprisal cases, thus limiting the visibility and oversight DOD and Congress have of the final portion of the military whistleblower reprisal process. 
DOD Directive 7050.06 directs the Secretaries of the military departments and the heads of the other DOD components to take corrective action based on IG reports of investigations of military whistleblower reprisal allegations and to notify DODIG of the actions taken within 10 working days. Further, DODIG requires that the service IGs report back to DODIG on command actions taken against the individual alleged to have reprised against a whistleblower, according to officials from these organizations. However, in 2012 we found that DODIG had not been maintaining reliable information on command actions needed to oversee this process. Specifically, for 40 percent of all substantiated cases that DODIG closed from October 1, 2005, through March 31, 2011, the database that DODIG used during that period did not contain information on the command actions taken. As a result, we recommended in our 2012 report that DOD (1) establish standardized corrective action reporting requirements, and (2) consistently track and regularly reconcile data regarding corrective actions. DOD addressed these recommendations by issuing an update to its military whistleblower directive in April 2015 that required standardized corrective action reporting requirements by the services. DODIG also issued additional guidance in its March 2016 investigations manual requiring that investigators populate data fields for corrective actions and remedies. Finally, DODIG provided us with a report in April 2016 detailing its tracking of corrective actions taken in response to substantiated reprisal cases between October 2011 and January 2016. In 2012, we also found that DODIG had not yet fully established performance metrics for ensuring the timeliness and quality of whistleblower reprisal investigations but was taking steps to establish timeliness metrics that focused on investigation processing time. 
Federal internal control standards state that metrics are important for identifying and setting appropriate incentives for achieving goals while complying with laws, regulations, and ethical standards. Further, we found in our previous work that metrics on both timeliness and quality—such as the completeness of investigative reports and the adequacy of internal controls—can enhance the ability of organizations to provide assurance that they are exercising all of the appropriate safeguards for federal programs. During our 2012 review, DODIG officials stated that they recognized the importance of both timeliness and quality metrics and that they planned to develop quality metrics as part of their effort to improve case management and outcomes. They further noted that quality metrics could include measuring whether interviews are completed and documented and whether conclusions made about the case are fully supported by evidence. To assist DOD in improving oversight of the whistleblower reprisal program, we recommended in our 2012 report that DOD develop and implement performance metrics to ensure the quality and effectiveness of the investigative process, such as ensuring that case files contain evidence sufficient to support the conclusions. DOD concurred with our recommendation and in 2014 fully developed timeliness metrics, along with some performance metrics to assess the completeness of a sample of (1) DODIG-conducted whistleblower reprisal investigations and (2) DODIG oversight reviews of the military services' whistleblower reprisal investigations. For example, DODIG is now to complete internal control checklists for investigations it conducts and oversight worksheets for investigations conducted by the military services to determine whether case files are compliant with internal policy and best practices. 
On a quarterly basis, DODIG is to draw a sample of the checklists and oversight worksheets for cases closed by DODIG and the military service IGs and compare these checklists to the quality metrics that it developed. According to DODIG officials, these metrics were briefed to the DOD Inspector General in fiscal year 2014. DODIG officials stated in July 2016 that they continued to conduct quality assurance reviews and collect associated metrics in fiscal year 2015, but that they have not briefed these metrics to the DOD Inspector General since fiscal year 2014 and that changes to the metrics briefings are forthcoming per direction from the DOD Inspector General and Principal Deputy Inspector General. DODIG did not provide information on the nature of these changes. While we believe that DODIG’s actions should help oversee the quality of investigations, we will continue to work with the DODIG and monitor its progress in implementing and communicating these performance metrics during our ongoing review assessing whistleblower reprisal investigation processes for DOD civilian employees and contractors. Further, we also believe that until the military services follow standardized investigation stages, as discussed later in this statement, it will be difficult for the DODIG to consistently measure the quality of the services’ military whistleblower reprisal investigations. Separately, in 2015, we found that DODIG and the service IGs had processes for investigators to recuse themselves from investigations, but there was no process for investigators to document whether the investigation they conducted was independent and outside the chain of command. Council of the Inspectors General on Integrity and Efficiency standards state that in all matters relating to investigative work, the investigative organization must be free, both in fact and appearance, from impairments to independence. 
Further, guidance for documenting independence is included in generally accepted government auditing standards, which can provide guidance to service IGs as a best practice on how to document decisions regarding independence when conducting reprisal investigations. At the time of our 2015 review, DODIG officials stated that their recusal policies for investigators, their decentralized investigation structure, and their removal of the investigator from the chain of command adequately addressed independence issues and that no further documentation of independence was needed. However, during the case file review we conducted for our 2015 report, we identified oversight worksheets on which DODIG oversight investigators had noted potential impairments to investigator objectivity in the report of investigation. For example, one oversight worksheet stated that the report gave the appearance of service investigator bias, and another oversight worksheet stated that the investigator was not outside the chain of command, as is statutorily required. DODIG approved these cases without documenting how it had reconciled these case deficiencies. As a result, in our 2015 report we recommended that DOD develop and implement a process for military service investigators to document whether the investigation was independent and outside the chain of command and direct the service IGs to provide such documentation for review during the oversight process. DOD concurred with this recommendation and issued a memorandum in June 2015 that informed service IGs that DODIG would look for certification of an investigator’s independence during its oversight reviews. Concurrently, DODIG also directed the service IGs to provide such documentation. In 2012, we found that DODIG was updating its guidance related to the whistleblower program but that the updates had not yet been formalized and that the guidance that existed at that time was inconsistently followed. 
According to the Council of the Inspectors General on Integrity and Efficiency’s quality standards for investigations, organizations should establish appropriate written investigative policies and procedures through handbooks, manuals, directives, or similar mechanisms to facilitate due professional care in meeting program requirements. Further, guidance should be regularly evaluated to help ensure that it is still appropriate and working as intended. However, in 2012 we found, among other things, that DODIG’s primary investigative guide distributed to investigators conducting whistleblower reprisal investigations had not been updated since 1996 and did not reflect some investigative processes that were current in 2012. Additionally, because guidance related to key provisions of the investigative process was unclear, it was being interpreted and implemented differently by the service IGs. As a result, we recommended in our 2012 report that DODIG update its whistleblower reprisal investigative guidance and ensure that it is consistently followed, including clarifying reporting requirements, responsibilities, and terminology. DOD concurred with this recommendation and in October 2014 released a guide of best practices for conducting military reprisal investigations and in April 2015 updated Directive 7050.06 on military whistleblower protection, which established policies and assigned responsibilities for military whistleblower protection and defined key terminology. Separately, in 2015 we found that DODIG had provided limited guidance to users of its case management system on how to populate case information into the system. The case management system, in use since December 2012, was to serve as a real-time complaint tracking and investigative management tool for investigators. 
DOD’s fiscal year 2014 performance plan for oversight investigators notes that investigators should ensure that the case management system reflects current, real- time information on case activity. This intent aligns with Council of the Inspectors General on Integrity and Efficiency’s quality standards for investigations, which state that accurate processing of information is essential to the mission of an investigative organization and that this begins with the orderly, systematic, accurate, and secure maintenance of a management information system. However, based on our file review of a sample of 124 cases closed in fiscal year 2013, we found that DODIG investigators were not using the case management system for real-time case management. Specifically, we estimated that DODIG personnel uploaded key case documents to the system after DODIG had closed the case in 77 percent of cases in fiscal year 2013. Among other things, these documents included reports of investigation, oversight worksheets, and 180-day notification letters regarding delays in completing investigations. Additionally, we estimated that for 83 percent of cases closed in fiscal year 2013, DODIG staff had made changes to case variables in the case management system at least 3 months after case closure. DODIG officials stated in 2015 that they planned to further develop a manual for the case management system that was in draft form along with internal desk aides, but that they did not plan to issue additional internal guidance for DODIG staff on the case management system because they believed that the existing guidance was sufficient. However, DODIG’s draft manual did not instruct users on how to access the system, troubleshoot errors, or monitor caseloads. As a result, in our 2015 report we recommended that DOD issue additional guidance to investigators on how to use the case management system as a real-time management tool. 
DOD concurred with this recommendation and in March 2016 issued a case management system user guide and in July 2016, a data entry guide. Collectively, these guides provide users with key information on how to work with and maintain data in the case management system. In 2015, we found that each military service IG conducted and monitored the status of military whistleblower reprisal investigations in a different case management system and that DODIG did not have complete visibility over service investigations from complaint receipt to investigation determination. Further, we found that DODIG did not have knowledge of the real-time status of service-conducted investigations and was unable to anticipate when service IGs would send completed reports of investigation for DODIG review. DODIG is required to review all service IG determinations in military reprisal investigations in addition to its responsibility for conducting investigations of some military reprisal complaints, and DOD Directive 7050.06 requires that service IGs notify DODIG of reprisal complaints within 10 days of the receipt of a complaint. However, our analysis indicated that DODIG’s case management system did not have records of at least 22 percent of service investigations both open as of September 30, 2014, and closed in fiscal years 2013 and 2014. Further, based on our file review, we estimated that there was no evidence of the required service notification in 30 percent of the cases closed in fiscal year 2013. We concluded that without a common system to share data, DODIG’s oversight of the timeliness of service investigations and visibility of its own future workload was limited. At the time of our 2015 review, DOD was taking steps to improve its visibility into service investigations, including by expanding its case management system to the military services. 
DODIG officials stated that they had created a working group comprising representatives from each of the service IGs to facilitate the expansion and that they planned a complete rollout to the service IGs by the end of fiscal year 2016. However, DODIG did not have an implementation plan for the expansion and had not yet taken steps to develop one. Project management plans should include, among other things, a scope statement describing major deliverables, assumptions, and project constraints; project requirements; schedules; costs; and stakeholder roles, responsibilities, and communication techniques. Given DOD’s stated plans to expand the case management system to the service IGs by the end of fiscal year 2016, we recommended in our 2015 report that DOD develop an implementation plan that addresses the needs of DODIG and the service IGs and defines project goals, schedules, costs, stakeholder roles and responsibilities, and stakeholder communication techniques. DOD concurred with this recommendation and in April 2016 developed a plan, in coordination with the military services, that included the elements we recommended for expanding its case management system into an enterprise system. This plan states that the enterprise case management system will launch between February 2018 and May 2018 and notes that the project budget for fiscal years 2017 through 2021 is approximately $25.3 million. Although DODIG has taken several important actions, additional actions are still needed to further strengthen the capacity of DODIG and the Congress to oversee military whistleblower reprisal investigations. These actions include standardizing the investigation process and reporting corrective action information to Congress. 
In 2015, we found that the DODIG and the military service IGs use different terms in their guidance to refer to their investigations, thus hindering DODIG’s ability to consistently classify and assess the completeness of cases during its oversight reviews. For example, we estimated that, in the absence of standardized investigation stages, DODIG investigators had miscoded approximately 43 percent of the cases that DODIG had closed in fiscal year 2013 as full investigations when these investigations were instead preliminary inquiries, as indicated in the services’ reports of investigation. The Council of the Inspectors General on Integrity and Efficiency’s quality standards for investigations state that to facilitate due professional care, organizations should establish written investigative policies and procedures that are revised regularly according to evolving laws, regulations, and executive orders. DODIG took an important step to improve its guidance by issuing an updated reprisal investigation guide for military reprisal investigations for both DODIG and service IG investigators in October 2014. However, the guide states that it describes best practices for conducting military reprisal intakes and investigations, and DODIG officials told us that the guide does not explicitly direct the services to follow DODIG’s preferred investigation process and stages. These officials further stated that they have no role in the development of service IG regulations. To improve the military whistleblower reprisal investigation process and oversight of such investigations, in our 2015 report we recommended that the Secretary of Defense, in coordination with the DODIG, direct the military services to follow standardized investigation stages and issue guidance clarifying how the stages are defined. DOD concurred with this recommendation and subsequently updated its guide in June 2015. 
However, this guide is still characterized as describing best practices and does not direct the services to follow standardized investigation stages. We note that 10 U.S.C. § 1034 provides the authority for the Secretary of Defense to prescribe regulations to carry out the section. Also, DOD Directive 7050.06 assigns DODIG the responsibility to provide oversight of the military whistleblower reprisal program for the department. DODIG officials noted in August 2016 that they are currently working with the military services through an established working group to standardize the investigation stages as an interim measure. The DOD Principal Deputy Inspector General performing the duties of the DOD Inspector General also indicated in August 2016 that the office is willing to coordinate with the Secretary of Defense to issue authoritative direction to the services to standardize the investigation stages, but that this will take time. As previously mentioned, we found in 2012 that DOD lacked reliable data on the corrective actions taken in response to substantiated whistleblower reprisal cases, thus limiting the visibility and oversight that DOD and Congress have of the final portion of the military whistleblower reprisal process. We also noted in 2012 that a 2009 Department of Justice review recommended that the results of investigations that substantiate allegations of reprisal be publicized as a way to heighten awareness within the services of the Military Whistleblower Protection Act, to potentially deter future incidents of reprisal, and to possibly encourage other reprisal victims to come forward. While the DODIG cannot directly take corrective action in response to a substantiated case per DOD Directive 7050.06, it is the focal point for DOD’s military whistleblower reprisal program and is well positioned to collect and monitor data regarding program outcomes. 
Further, DODIG officials stated in 2012 that because DODIG is the focal point, it is important for it to have visibility into, and information on, all military whistleblower reprisal activities, not only to provide oversight but also to provide a central place within the department where internal and external stakeholders can obtain information. In addition to the recommendations we made regarding establishing corrective action reporting requirements and regularly tracking these data, we also recommended in our 2012 report that DOD regularly report to Congress on the frequency and type of corrective actions taken in response to substantiated reprisal claims. We noted that DOD could do so, for example, through its semiannual reports to Congress. DOD concurred with that recommendation and has since included in its semiannual reports to Congress examples of corrective actions taken by the military services for substantiated cases, but not a comprehensive list of all corrective actions taken. However, in following up in August 2016 on the actions that DODIG has taken regarding this recommendation, DODIG officials stated that the corrective actions listed in its semiannual reports to Congress included all corrective actions taken during the 6-month reporting period, but that the reports incorrectly identified these actions as examples. DODIG provided us corrective action information to compare with the corrective actions reported in DODIG’s December 2015 and March 2016 semiannual reports to Congress for those reporting periods. We identified some key differences. Specifically, we identified corrective actions in the information provided to us by DODIG that were not published in the December and March reports to Congress and identified discrepancies between the types of corrective action contained in the reports and those in the information that DODIG provided. 
As a result, we believe that DODIG’s two most recent semiannual reports to Congress did not include the frequency and type of all corrective actions reported during those reporting periods. Relatedly, we also noted in August 2016 that DODIG’s semiannual reports did not include other information needed to convey the frequency and type of corrective actions. Specifically, DODIG officials stated in August 2016 that their case management system would require additional capability in order to produce a list of substantiated allegations that do not have associated corrective actions, which would indicate which corrective action recommendations are outstanding. Further, these officials stated that publishing information showing the status of all DODIG corrective action recommendations—not just actions that were taken during a particular reporting period—could be misleading because the military services sometimes take actions that are different from those recommended by DODIG and that may not result from reprisal investigations. However, as noted in the 2009 Department of Justice review, publicizing the results of investigations that substantiate allegations of reprisal may help to deter future incidents of reprisal and encourage other whistleblowers to come forward. Without including information on (1) all corrective actions taken during a reporting period, (2) outstanding corrective action recommendations, and (3) actions taken by the services that are different from those recommended by DODIG, we believe that DODIG’s current method of reporting does not fully address our recommendation to report to Congress on the frequency and type of corrective action taken in response to substantiated claims. Moreover, it does not meet the requirement to keep Congress fully and currently informed on the progress of implementing corrective actions through, among other things, its semiannual reports to Congress. 
We therefore continue to believe that without such information, Congress will be hindered in its ability to provide oversight of the corrective action portion of the military whistleblower reprisal program. In summary, DOD has taken actions to implement 15 of the 18 recommendations that we made to address the military whistleblower reprisal timeliness and oversight challenges we identified in our 2012 and 2015 reports. These efforts constitute progress toward improving the DODIG’s ability to accurately track the timeliness of military whistleblower reprisal investigations and increase the DODIG’s ability to effectively oversee the department’s military whistleblower reprisal program. Fully implementing the remaining 3 recommendations would further strengthen DODIG’s capacity to assess the quality of military whistleblower reprisal investigations and enhance Congress’ visibility into the timeliness of investigations as well as into the corrective actions taken for substantiated allegations. We have ongoing work that will help to both monitor the actions taken by DODIG to improve its oversight of military reprisal investigations and provide additional insight on the DODIG’s ability to conduct timely and quality reprisal investigations for DOD’s civilian and contractor employees. Chairman DeSantis, Ranking Member Lynch, and Members of the Subcommittee, this concludes my prepared statement. I look forward to answering any questions that you might have. If you or your staff have any questions about this statement, please contact Brenda S. Farrell, Director, Defense Capabilities and Management, at (202) 512-3604 or [email protected], or Lori Atkinson, Assistant Director, Defense Capabilities and Management, at (404) 679-1852 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
GAO staff who made key contributions to this testimony are Tracy Barnes, Sara Cradic, Ryan D’Amore, Taylor Hadfield, and Mike Silver. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Whistleblowers play an important role in safeguarding the federal government against waste, fraud, and abuse, and their willingness to come forward can contribute to improvements in government operations. However, whistleblowers also risk reprisal, such as demotion, reassignment, and firing. This testimony discusses DODIG's progress in (1) taking actions to track and report on the timeliness of military whistleblower reprisal investigations, and (2) strengthening its oversight of the military services' whistleblower reprisal investigations. GAO's statement is based primarily on information from May 2015 and February 2012 GAO reports on military whistleblower reprisal investigations. For those reports, GAO examined laws, regulations, and DOD guidance; conducted detailed file reviews using representative samples of cases closed in fiscal year 2013 and between January 2009 and March 2011; analyzed DODIG and military service data for cases closed in fiscal years 2013 and 2014; and interviewed DOD officials. GAO also determined what actions DOD had taken through August 2016 in response to recommendations made in the 2015 and 2012 reports. The Department of Defense Office of Inspector General (DODIG) has taken actions to improve its tracking of the timeliness of military whistleblower reprisal investigations in response to recommendations that GAO made in 2012 and 2015. For example, in 2012 and 2015, GAO found that DOD was not meeting its internal requirement to complete whistleblower reprisal investigations within 180 days, with cases closed in fiscal years 2013 and 2014 averaging 526 days. In response, DODIG—which is responsible for both conducting investigations and overseeing investigations conducted by the military services—took steps to better track and analyze timeliness data by developing a guide to help ensure the accurate tracking of case processing time and by updating its case management system in April 2016 to include new investigation milestones. 
Because these actions were not taken until 2016, it is too early to determine whether timeliness has improved since GAO last reported on it. Similarly, in 2015, GAO found that DOD had not met the statutory requirement to notify servicemembers within 180 days about delays in their investigations for about half of the reprisal investigations closed in fiscal year 2013. In response, DODIG developed an automated tool in its case management system to flag cases approaching 180 days. However, DODIG still does not regularly report to Congress on the timeliness of military whistleblower reprisal investigations, as GAO recommended in 2012. On August 31, 2016, a senior DODIG official stated that DODIG will implement this recommendation by reporting timeliness information to Congress biannually. DODIG has strengthened its oversight of military service reprisal investigations in response to recommendations GAO made in 2012 and 2015 by establishing processes and developing guidance for overseeing investigations, among other things. For example, in 2015, GAO found that DODIG did not have a process for documenting whether investigations were independent and were conducted by someone outside the military service chain of command. In response, DODIG directed the service IGs to certify investigators' independence for oversight reviews. GAO also found in 2015 that DODIG had provided limited guidance to investigators using its case management system, limiting its utility as a real-time management system, as intended. In response, DODIG issued a system guide and a data entry guide, which provide key information on how to work with and maintain system data. However, in 2015 GAO also found that DODIG and the military service IGs used different terms in their guidance to investigators, hindering DODIG oversight of case completeness. GAO recommended that DOD direct the military service IGs to follow standardized investigation stages and issue related guidance. 
DODIG officials stated in August 2016 that they are working with the services to standardize investigation stages and that DODIG is willing to work with the Secretary of Defense to issue such direction. Separately, GAO found in 2012 that unreliable data on corrective actions taken in response to substantiated reprisal cases were hampering oversight and recommended that DOD regularly report to Congress on the frequency and type of corrective actions taken in response to substantiated reprisal claims. DODIG reports some corrective actions in its semiannual report to Congress, but does not include all relevant corrective actions or outstanding corrective action recommendations. DOD implemented 15 of the 18 recommendations GAO made to improve and track investigation timeliness and strengthen oversight of the military services' investigations, and is considering steps to implement the remaining three regarding standardized investigations and reporting to Congress.
The Congress established VETS in 1980 to carry out the national policy that veterans receive priority employment and training opportunities. Faced with the growing long-term challenges of new service delivery systems, an evolving labor market, and changing technology, VETS aims to find innovative ways to maximize the effectiveness of its efforts. VETS’ strategic plan states that it will seek new and effective means to help veterans compete successfully for better-paying career jobs, helping them get on a track that can provide improved income stability and growth potential. VETS provides states with grants for DVOP and LVER staff according to the formula outlined in the law. The grant agreements include assurances by states that the DVOP and LVER staff members serve eligible veterans exclusively. Under federal law, all employment service staff must give priority to serving veterans, and the assignment of DVOP and LVER staff to local offices does not relieve other employment and training program staff of this requirement. The law prescribes various duties for DVOP and LVER staff members that are intended to provide veterans with job search plans, referrals, and job training opportunities. While the state-employed DVOP and LVER staff are the front-line providers of services to veterans, VETS carries out its responsibilities, as outlined in the law, through a nationwide network that includes regional and state representation. The Office of the Assistant Secretary for Veterans’ Employment and Training administers the DVOP and LVER staffing grants through regional administrators and directors in each state, the District of Columbia, Puerto Rico, and the Virgin Islands. In larger states, an assistant director is appointed for every 250,000 veterans in the state. These federally paid VETS staff ensure that states carry out their obligations to provide service to veterans, including the services provided under the DVOP and LVER grants. 
To ensure priority service to veterans, VETS expects states to provide employment and training services to veterans at a rate exceeding the service provided to nonveterans. For example, VETS requires that veterans receive services at a rate 15 percent higher than nonveterans. Thus, if a state’s placement rate for nonveterans was 10 percent, the placement rate for veterans should be at least 11.5 percent, or 15 percent higher than the nonveteran placement rate. There are also greater expectations for serving Vietnam-era veterans and disabled veterans. As required by law, VETS must report to the Congress on states’ performance in five service categories. Historically, VETS has used these same performance categories to measure state performance in serving veterans at a higher rate than nonveterans. The performance categories are: (1) veterans placed in or obtaining employment; (2) Vietnam-era veterans and special disabled veterans placed in jobs on the Federal Contractor Job Listing; (3) veterans counseled; (4) veterans placed in training; and (5) veterans who received some reportable service. In our past reviews of VETS’ programs, we have recommended changes to VETS’ performance measures and plans. Recently, we have noted that VETS had proposed performance measures that were more in line with those established under WIA; the measures focused more on what VETS’ programs achieve and less on the number of services provided to veterans relative to nonveterans. Although the law still stipulates that VETS is to report to the Congress on the five service categories, VETS plans to eliminate the requirement that states compare services provided to veterans with those provided to nonveterans. However, we have reported that VETS still lacked measures to gauge the effectiveness of services or whether more staff-intensive services helped veterans obtain jobs. 
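The 15 percent service-rate expectation described above reduces to a simple comparison: the veteran rate must be at least 1.15 times the nonveteran rate. A minimal sketch of that arithmetic follows; the function name and sample figures are illustrative and are not drawn from VETS guidance.

```python
def meets_priority_expectation(veteran_rate, nonveteran_rate, margin=0.15):
    """Check whether the veteran service rate exceeds the nonveteran rate
    by the required margin (15 percent per the testimony's example).
    Returns the result and the minimum rate required.
    Illustrative sketch only, not an official VETS calculation."""
    required_rate = nonveteran_rate * (1 + margin)
    return veteran_rate >= required_rate, required_rate

# The testimony's example: a 10 percent nonveteran placement rate
# implies a minimum veteran placement rate of 11.5 percent.
met, required = meets_priority_expectation(0.115, 0.10)
```

Here the 11.5 percent veteran rate just meets the expectation, while anything below the computed minimum would not.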
Veterans receive priority employment services at one-stop centers as required under the law, but the effectiveness of these services cannot be determined. Based on state-gathered data reported to VETS and interviews with state officials, we found that veterans generally received employment service at a higher rate than nonveterans. However, the effectiveness of these services is unknown because VETS lacks adequate outcome data such as information on job retention and wages. The only outcome data collected—the percentage of veterans served entering employment—are often collected inconsistently from state to state. Priority service to veterans at one-stop centers is usually demonstrated by the higher rates of service for veterans as compared with those for nonveterans. Most one-stop centers provide priority services to veterans through the DVOP and LVER staff who can provide an elevated level of service to veterans. Because veterans have these dedicated staff to serve them, they also receive more intensive services, and receive these services more readily, than nonveterans. Other examples of priority service include identifying and contacting qualified veterans before the universal population has access to employers’ job openings that will be posted on the states’ job database. States may have other special services exclusively for veterans, such as designated computers or special information packets on available resources. While priority service can be provided in different ways depending on the one-stop center, most state officials and one-stop center managers we spoke with said that they primarily used DVOP and LVER staff to provide priority service to veterans since these staff must assist veterans exclusively. DVOP and LVER staff members have smaller caseloads than other employment services staff and, consequently, have more time to spend with individuals. 
Veterans also have better access to intensive services, such as counseling and case management, than nonveterans because DVOP and LVER staff are funded independently of WIA and are not subject to restrictions applicable to WIA-funded programs. According to many state officials as well as DVOP and LVER staff, the DVOP and LVER staff members relate better to veterans because they are generally veterans themselves. For example, because they are familiar with the processes at the Department of Veterans Affairs (VA), DVOP and LVER staff can more easily help veterans file disability claims with the VA or help them to receive the appropriate disability benefits. While veterans received priority employment services at one-stop centers, VETS does not currently collect the data needed to determine the effectiveness of these services; the agency lacks sufficient employment outcome data that would indicate whether services provided to veterans were effective. VETS has proposed changes to its performance measures, such as requiring states to report job retention, but will not implement these changes until July 1, 2002. In past reviews, we have pointed out that VETS’ use of relative standards comparing the percentage of veterans entering employment with that of nonveterans is not effective. This comparison results in states with poor levels of service to nonveterans being held to lower standards for service to veterans than states with better overall performance. The only outcome data that states currently report to VETS—the percentage of veterans entering employment after registering for employment services—are collected inconsistently from state to state. Some states compare their employment service registration records with unemployment insurance wage records, but others may simply call employers for employment verification or send postcards or letters to customers asking whether they have obtained employment. 
Some DVOP and LVER staff had more time than other employment and training staff for follow-ups by telephone or mail, resulting in more complete employment data for some veterans. In addition, states and local workforce investment areas choose to register customers at different stages of the job search process, so the percentage of “registered” veterans entering employment may differ based on when customers were required to register. In some areas, customers register to use any service, including self-service; in other areas, they are required to register only when using staff-assisted services. Those who find employment before being registered are not counted as having entered employment after using self-service resources available through the one-stop center. Consequently, the reported percentage of veterans served who entered employment is not comparable from state to state. Despite recently proposed improvements to its performance measures, VETS’ overall management of the DVOP and LVER grants is ineffective because the agency does not have a comprehensive system in place to manage state performance in serving veterans with these grants. VETS does not effectively communicate performance expectations to states because its goals and measures are unclear. In addition, the agency does not have meaningful incentives to encourage states to perform well. Furthermore, VETS is required by law to have federal staff in every state and to conduct annual on-site evaluations at every local office, but this monitoring is often unproductive. In order to oversee a program effectively, an agency must have a performance management system that establishes clear goals for those administering the program; however, VETS does not communicate a consistent message to states on expected performance. In fact, the agency does not have clear goals that it communicates to states or that it tracks with outcome data. 
For example, while one agency goal is to provide high-quality case management to veterans, the agency does not have state performance measures for assessing the quality of case management provided to veterans. Furthermore, VETS’ efforts to focus intensive services on those veterans most in need by “targeting” specific groups of veterans are unfocused. In its strategic plan, the agency targets for case management and intensive services disabled veterans, minority veterans, female veterans, recently separated veterans, veterans with significant barriers to employment, special disabled veterans, homeless veterans, veterans provided vocational rehabilitation under the VA, and veterans who served on active duty in the armed forces under certain circumstances. This targeting includes nearly all veterans, not necessarily those most in need of service; the numerous categories of targeted veterans could result in the vast majority of veterans being targeted for case management. A VETS official said that the focus for service should be on veterans with the greatest needs as determined by individual assessments, because groups targeted on a national level do not necessarily correlate to the needs of veterans in particular states or local areas. Unnecessary performance measures from VETS add to the DVOP and LVER workload without measuring the quality of service to veterans. For example, some state and VETS officials we spoke with expressed concern about having performance measures that specifically focus on service to Vietnam-era veterans. These veterans make up a small percentage of the workforce, due in part to the fact that many are at or near retirement age and may not be seeking employment, yet DVOP and LVER staff may spend much of their time trying to identify and serve this group in order to meet VETS’ performance goals. State officials also identified one of VETS’ performance measures that they believe should be eliminated. 
VETS requires that Vietnam-era veterans, special disabled veterans, and veterans who served on active duty under certain circumstances be placed in jobs on the Federal Contractor Job Listing. To do this, in addition to identifying qualified job candidates from this pool of particular veterans, DVOP and LVER staff must monitor local federal contractors to make sure that they are listing their job opportunities with the one-stop centers on the Federal Contractor Job Listing and hiring these veterans. Because the presence of federal contractors in a given state or local area is unpredictable and is determined by the federal agencies awarding contracts, state employment service officials said the federal contractor measure should be eliminated. It is the responsibility of contractors to list their job openings, and the Office of Federal Contract Compliance Programs is responsible for ensuring that these companies list their jobs with state employment service offices and take affirmative action to hire qualified veterans. Eliminating this performance measure would allow DVOP and LVER staff members more time to focus on the employment needs of individual veterans rather than on compliance issues under the purview of another federal agency. For effective oversight, in addition to having clear goals, an agency must provide incentives for meeting those goals, and VETS’ performance management system lacks meaningful incentives to encourage states to perform well. Presently, states are neither rewarded for meeting or exceeding their performance measures nor penalized for failing to meet them. If a state fails to meet its performance measures, VETS simply requires the state to develop a corrective action plan to address the deficiencies in that state; there are no financial repercussions. States will not lose funding for failing to adequately serve veterans, and an agency official noted that taking funds away from a state would ultimately deny services to veterans. 
On the other hand, VETS does not enforce fiscal compliance with the grants, and a state can overspend DVOP or LVER funds and submit a grant modification requesting additional funds. A VETS official suggested that if the grants were awarded through a competitive bid process within states, the grantees might have a greater incentive to improve services to veterans. To provide effective oversight, an agency must also gauge the quality of service offered by the program and monitor the program’s progress. As prescribed by the law, VETS has federal staff in every state to monitor, along with other duties, the DVOP and LVER grants. However, this federal monitoring effort, which includes on-site evaluations at every local office, is often unproductive, and state officials characterize the DVOP and LVER grants as being “micro-managed” by VETS. The agency’s annual on-site evaluations of employment services offices that we observed or whose reports we reviewed produced few substantive findings by VETS staff. Furthermore, according to some state officials, these evaluations have little or no effect on how DVOP and LVER staff members perform their duties. Finally, we found multiple problems with VETS’ monitoring efforts. For example, because states generally monitor performance at one-stop centers, including under the DVOP and LVER grants, VETS’ monitoring can be redundant. VETS’ requirement for annual on-site monitoring may also be unnecessary for those offices that exceed their performance expectations. In addition, VETS’ oversight may result in confusion about the lines of authority between the federal and state monitoring staff and the DVOP and LVER staff, who are state employees. Also, VETS’ monitoring is often inconsistent because operational manuals are outdated, training of monitoring staff is limited, and interpretations of the law differ among staff. 
According to the state and local officials we interviewed, the DVOP and LVER grant programs do not always operate well in one-stop centers. DVOP and LVER programs continue to operate under a law established prior to WIA, and states do not have the same flexibility granted under WIA to design their services for veterans in a way that best meets the needs of employers and veterans. Because of statutory requirements, states cannot, in all cases, assign DVOP and LVER staff to where the staff is most needed. For example, the law prescribes how to assign DVOP and LVER staff to local offices and does not give states the flexibility to move staff to locations where state and local officials believe veterans could best be served. This restriction may result in too many staff in some areas and too few in other areas. In addition, because DVOP and LVER grants are separate funding streams, states have little flexibility in staffing decisions. If a state does not spend all of its grant money, it returns the extra funding, and VETS redistributes it to states that request additional funding. A state that overspends in its DVOP program but spends less than its allocation in the LVER program would have to use other funds to cover the amount overspent in the DVOP program, and VETS would take back the additional LVER grant money. The state may request more money from VETS for its DVOP program, but there is no guarantee that it will get the additional funding. States are also constrained when it comes to deciding what DVOP and LVER staff members do and whom they serve. The law specifies the separate duties for DVOP and LVER staff, although we found that they generally performed similar duties. Furthermore, DVOP and LVER staff members may not serve certain individuals who may qualify for veteran services under other employment and training programs. 
The law governing the DVOP and LVER programs defines veterans eligible for employment assistance more narrowly than WIA or VETS for its other veterans’ activities. Because of this more restricted definition, DVOP and LVER staff are not allowed, for example, to serve veterans who were on active duty for 180 days or less, and they are not permitted to serve Reservists or National Guard members. Another sign that the DVOP and LVER grants are not well integrated into the one-stop environment is that the funding year for DVOP and LVER programs does not coincide with the funding year for other employment programs offered in the one-stop center system. The appropriation to fund the DVOP and LVER grants is made available on a federal fiscal year basis—October 1 through September 30—while other employment programs and states operate on a program year basis—July 1 through June 30. Having Labor programs’ funding streams on different schedules is burdensome for states and makes the budgeting process more complicated. VETS has taken a reactive rather than a proactive approach to the one-stop system and has not taken adequate steps to adapt the DVOP and LVER programs to the new environment. For example, instead of coordinating with other programs to determine how best to fit the DVOP and LVER programs into the one-stop system, VETS officials reported that they are waiting to see how states implement their programs and will then decide how to integrate the staff or adjust their programs. VETS has required states to sign an agreement to ensure that veterans will continue to receive priority services, but these agreements contained little meaningful information about how DVOP and LVER staff might serve veterans within this new one-stop center environment. VETS has not developed practices for operating within the one-stop system or adequately shared innovative ways to help veterans find and retain jobs. 
Because of outdated policies and procedures, DVOP and LVER staff in many states may continue to operate separately as if they were in the old employment services system and continue to assume duties very similar to those they had in the old employment services system. Consequently, they fail to adapt to the new workforce environment created by WIA. According to one-stop managers we interviewed, this failure to adapt may diminish the quality of services to veterans. While the Congress has clearly defined employment service to veterans as a national responsibility, the law has not been amended to reflect the recent changes in the employment and training service delivery system introduced by WIA. The prescriptive nature of the law also creates a one-size-fits-all approach for service delivery, mandating many of the DVOP and LVER program activities and requirements. This approach is ineffective because it does not account for the fact that each state and one-stop center may have a different approach to satisfying the needs of local employers as well as different types of veterans who may need employment assistance. Although the law stipulates separate roles and responsibilities for DVOP and LVER staff, they perform similar duties and may not need to be separately funded. The law that governs VETS also stipulates how grant funds and staff must be allocated as well as how the grants should be monitored. These requirements hamper VETS’ ability to consider alternative ways of administering or overseeing the grants. Furthermore, the law requires that VETS report annually on states’ performance for serving veterans relative to serving nonveterans, which may not be a good indicator if a state serves its nonveteran population poorly. The law also requires VETS to report on requirements pertaining to the Federal Contractor Job Listing, and this diverts DVOP and LVER staff members from serving veterans. 
While VETS’ vision is to find innovative ways to assist veterans with employment, it has not been proactive in helping DVOP and LVER staff become an integral part of the one-stop center environment. The new one-stop center system, while giving veterans priority for employment services, gives states flexibility in planning and implementing employment and training systems and holds them accountable for performance. However, VETS has not taken steps to adjust to this new environment. The agency has not updated its oversight guidelines or staff training procedures to ensure consistent and effective monitoring of the DVOP and LVER programs within the one-stop centers. VETS has not established clear performance goals for states, nor has it given states the flexibility to decide how best to serve their veteran population. VETS has proposed ways of improving performance measures, but these measures have not yet been implemented. VETS has not proposed any incentives to hold states accountable for meeting performance goals. Our report recommended that the Secretary of Labor direct VETS to establish more effective management and monitoring of the DVOP and LVER programs by allowing states flexibility in planning how to best serve veterans, while at the same time holding states accountable for meeting the agency’s goals and expectations. Specifically, our report recommended that the Secretary of Labor implement a more effective performance management system as soon as possible and take steps to ensure that the DVOP and LVER programs are more effectively monitored. In addition, because title 38 limits the amount of flexibility that VETS can grant to states, we recommended that Congress consider how the DVOP and LVER programs best fit in the current employment and training system and take steps to ensure that these programs become more fully integrated into this new environment. 
These steps may include updating the applicable law to provide more flexibility and taking other actions such as eliminating certain requirements and adjusting the DVOP and LVER grant funding cycle to correspond with that of other programs. Specifically, we suggested that the Congress consider revising title 38 to provide states and local offices more discretion to decide where to locate DVOP and LVER staff and provide states the discretion to have half-time DVOP positions; allow VETS and/or states the flexibility to better define the roles and responsibilities of staff serving veterans instead of including these duties in the law; combine the DVOP and LVER grant programs into one staffing grant to better meet states’ needs for serving veterans; provide VETS with the flexibility to consider alternative ways to improve administration and oversight of the staffing grants, for example, eliminating the prescriptive requirements for monitoring DVOP and LVER grants; eliminate the requirement that VETS report to the Congress a comparison of the job placement rate of veterans with that of nonveterans; and eliminate the requirement that VETS report on Federal Contractor Job Listings.
The Department of Labor's (DOL) Disabled Veterans' Outreach Program (DVOP) and Local Veterans' Employment Representative (LVER) program allow states to hire staff members to serve veterans exclusively. The two programs are mandatory partners in the new one-stop center system created in 1998 by the Workforce Investment Act, which requires that services provided by numerous employment and training programs be made available through one-stop centers. The act also gives states the flexibility to design services tailored to local workforce needs. Although the DVOP and LVER programs must operate within the one-stop system, the act does not govern the programs--and the law that governs them does not provide the same flexibility that the act does. Because Congress sees employment service for veterans as a national responsibility, it established the Veterans' Employment and Training Service (VETS) to ensure that veterans, particularly disabled veterans and Vietnam-era veterans, receive priority employment and training opportunities. To make better use of DVOP and LVER staff services, VETS needs the legislative authority to grant each state more flexibility to design how this staff will fit into the one-stop center system. VETS also needs to be able to hold states accountable for achieving agreed upon goals. Veterans receive priority employment service at one-stop centers as required under the law, but the effectiveness of the services, as indicated by the resulting employment, cannot be determined because VETS does not require states to collect sufficient data to measure outcomes. VETS does not adequately oversee the DVOP and LVER program grants because it does not have a comprehensive system in place to manage state performance in serving veterans. VETS has not adequately adapted the DVOP and LVER programs to the new one-stop environment and determined how best to fit them into the one-stop system.
For nearly 25 years, the United States has provided the Cuban people with alternative sources of news and information. In 1983, Congress passed the Radio Broadcasting to Cuba Act to provide the people of Cuba, through Radio Martí, with information they would not ordinarily receive due to the censorship practices of the Cuban government. Subsequently, in 1990, Congress authorized BBG to televise programs to Cuba. According to BBG, the objectives of Radio and TV Martí are to (1) support the right of the Cuban people to seek, receive, and impart information and ideas through any media and regardless of frontiers; (2) be effective in furthering the open communication of information and ideas through use of radio and television broadcasting to Cuba; (3) serve as a consistently reliable and authoritative source of accurate, objective, and comprehensive news; and (4) provide news, commentary, and other information about events in Cuba and elsewhere to promote the cause of freedom in Cuba. OCB employs several avenues to broadcast to Cuba, including shortwave, AM radio, and television through various satellite providers and airborne and ground-based transmitters (see fig. 1). IBB’s international broadcasters generally must comply with the provisions of the U.S. Information and Educational Exchange Act of 1948 (commonly known as the Smith-Mundt Act), as amended, which bars the domestic dissemination of official American information aimed at foreign audiences. In 1983, however, the Radio Broadcasting to Cuba Act authorized the leasing of time on commercial or noncommercial educational AM radio broadcasting stations if it was determined that Radio Martí’s broadcasts to Cuba were subject to a certain level of jamming or interference. Similarly, in 1990, the Television Broadcasting to Cuba Act authorized BBG to broadcast information to the Cuban people via television, including broadcasts that could be received domestically, if the receipt of such information was inadvertent. 
BBG has interpreted the act to allow OCB to use domestic television stations. In fiscal year 2007, OCB obligated over $35 million in support of its mission. As shown in figure 2, OCB obligated about 50 percent of this amount to salaries, benefits, and travel for OCB employees and 41 percent on mission-related contracting efforts. OCB obligated nearly $3 million to procure talent services. Federal statutes require, with certain limited exceptions, that contracting officers shall promote and provide for full and open competition in soliciting offers and awarding government contracts. The FAR states that full and open competition, when used with respect to a contract action, means that all responsible sources are permitted to compete. The process is intended to permit the government to rely on competitive market forces to obtain needed goods and services at fair and reasonable prices. When not providing for such competition, the contracting officer must, among other things, justify the reason for using other than full and open competition, solicit offers from as many potential sources as is practicable under the circumstances, and consider actions to facilitate competition for any subsequent acquisition of supplies or services. For contracts that do not exceed the simplified acquisition threshold—currently $100,000 with limited exceptions—contracting officers are to promote competition to the maximum extent practicable. In December 2006, IBB awarded contracts to two Miami-based radio and television broadcasting stations, Radio Mambi and TV Azteca, to broadcast Radio and TV Martí programming, respectively. IBB justified the use of other than full and open competition on the basis of two specific statutory authorities cited in the FAR—that there was only one responsible source capable of meeting the agency’s needs and that there was an unusual and compelling urgency to award the contract. Table 1 provides selected information on the two contracts. 
OCB’s talent services contracts typically fall below the simplified acquisition threshold and therefore are solicited and awarded directly by OCB. OCB generally awards each talent services contractor a blanket purchase agreement, which provides OCB a simplified method of obtaining specific services as needed during the course of the year. On a quarterly basis, OCB places orders against the agreements, specifying the anticipated amount of services required during that period. While certain competition requirements do not apply below the simplified acquisition threshold, contracting officers are to promote competition to the maximum extent practicable. IBB’s approach for awarding the Radio Mambi and TV Azteca contracts did not reflect sound business practices in certain key aspects. IBB’s approach was predicated on the confluence of several interrelated events—ongoing interagency deliberations, the issuance of a July 2006 report by the Commission for Assistance to a Free Cuba, and concerns about the health of Fidel Castro. According to BBG and IBB officials, these events required a course of action to obtain additional broadcasting services to Cuba quickly by using other than full and open competition. In certain respects, however, IBB did not document in its contract files key information or assumptions underlying its decisions to not seek competitive offers, limit the number of potential providers it considered, or the basis used to negotiate the final prices for the services provided. In addition, IBB did not actively involve its contracting office until just prior to contract award. Finally, while justifying the December 2006 award of the two contracts on the basis of urgent and compelling need and the determination that only one source would meet its minimum needs, IBB chose to exercise multiple options on the two contracts to extend their period of performance into 2008 and has only recently taken steps to identify additional providers. 
Our prior work has found that establishing a valid need and translating that into a well-defined requirement is essential for federal agencies to obtain the right outcome. Our review of IBB’s contract files and interviews with program and contracting officials identified several interrelated events that established the need to increase radio and television broadcasting to Cuba. BBG and IBB officials noted that beginning in the spring of 2006, agency officials were involved in interagency discussions with officials from the Department of State, the Department of Defense, the National Security Council, the U.S. Agency for International Development, and other agencies on the need to expand broadcasting options. These discussions coincided with the issuance of the July 2006 report by the Commission for Assistance to a Free Cuba, which recommended funding the transmission of TV Martí by satellite television. The report did not provide a time frame in which this was to be completed, nor did it address expanding radio broadcasting. Additionally, BBG and IBB officials noted that shortly after the release of the report there was widespread concern about the health of Fidel Castro and that his death could result in unrest in Cuba, adding to the urgency to find additional ways to broadcast news to Cuba. IBB officials told us that based on their internal deliberations and discussions with other agencies involved, it was clear that the expectation was for IBB to quickly identify additional broadcasters, including radio providers, and award the resulting contracts to address potential unrest in the event of a transfer of power. IBB officials were unable to provide documentation of certain classified aspects of the deliberations, or the specific time frame in which these activities were to be completed. Given this expectation, IBB subsequently decided against seeking competitive offers from radio and television broadcasters. 
IBB officials told us they believed they could not do so for several reasons: they were concerned that publicly seeking competitive offers would not yield responses from potential service providers that met IBB’s needs; advertising its plans would alert the Cuban government to IBB’s intentions, which might enable Cuba to jam the new broadcasts; and IBB had not discussed its plans with cognizant congressional committees and, in particular, its efforts to comply with the Smith-Mundt Act and other relevant legislation. IBB officials determined that they would limit the number of providers they would consider and quickly developed a set of basic requirements that broadcasters would need to meet. For radio, IBB wanted a Spanish-language station with the strongest AM signal to reach as much of Cuba as possible. To do so, officials with IBB’s Office of Marketing and Program Placement stated they reviewed a prior consulting study and broadcasting databases maintained by the Federal Communications Commission, and consulted with OCB on Cuban listening habits. For television, IBB wanted a station with a limited domestic audience and one that had a contract with DirecTV, since the DirecTV signal can be received in Cuba. At IBB’s request, OCB provided a list of Miami channels carried by DirecTV, highlighting three Spanish-language stations and one English-language station for consideration. BBG and IBB officials subsequently told us that their decision to limit television broadcasters to the Miami area was based on information that indicated that DirecTV receivers in Cuba likely came from the Miami area and therefore were programmed to receive only Miami television stations. A senior IBB official provided IBB’s Office of Engineering a list of four radio and three television stations and requested that the office assess the extent to which the television stations’ signals were viewable in the United States and the extent to which the radio stations’ signals would reach Cuba. 
Office of Marketing and Program Placement officials then made two trips to Miami to meet with these stations and determine their willingness to broadcast Radio and TV Martí programming. In making their recommendation for a radio broadcaster to a senior IBB official, Marketing and Program Placement officials concluded that Radio Mambi provided the most powerful signal among those stations surveyed that could reach most of Cuba. IBB officials acknowledged, however, that this station was likely jammed in Havana as there is a Cuban station that broadcasts on the same frequency as Radio Mambi. IBB officials believed that broadcasting on two frequencies that cover most of Cuba would be the most effective way to overcome the Cuban government’s jamming efforts. In recommending a television broadcaster, Marketing and Program Placement officials initially recommended a television station (other than TV Azteca) based on the station’s verbal offer to split its DirecTV signal, enabling it to broadcast TV Martí programming only to Cuba and not to domestic audiences. According to IBB officials, subsequent to their recommendation the television broadcaster withdrew its offer once it determined it could not split its DirecTV signal and was unwilling to sell time for which it already had programs. Consequently, IBB decided to contract with TV Azteca. While TV Azteca’s broadcasts could be viewed by domestic audiences, IBB officials believed that since its signal covered a small domestic area, it better met the intent of the Smith-Mundt Act to limit the extent to which broadcasts intended for foreign audiences could be received domestically. In a sole source environment, the government cannot rely on market forces to determine a fair and reasonable price and therefore must conduct market research to do so. 
As part of its market research, IBB officials in August and September 2006 asked a consultant to gather pricing information from the radio and television stations being considered without identifying IBB as the potential buyer. The consultant forwarded price quotes from seven radio and television stations; the quotes varied significantly in terms of the dates, time slots, and prices offered. For instance, the information quoted by the television stations was based on one-half hour “infomercials,” which IBB officials believed was useful to gauge the relative prices that might be offered by the stations, but was of only limited value to negotiate specific prices for the actual programming it sought to broadcast. While the contract files did not provide the basis by which IBB determined the final prices, IBB officials stated that after they had selected Radio Mambi and TV Azteca, the stations provided quotes for various broadcast times, which IBB officials used to reach a final price agreement. According to BBG’s acquisition regulations, programming offices should discuss a prospective request for other than full and open competition with the contracting office as early as possible during the acquisition planning stage. The regulations note that these discussions may resolve uncertainties, provide offices with names of other sources, and allow proper scheduling of the acquisition, among other benefits. Further, our prior work has found that to promote successful acquisition outcomes, stakeholders with the requisite knowledge and skills must be involved at the earliest point possible. This helps ensure that the acquisition is executable and tailored to the level of risk commensurate with the individual transaction. 
We found, however, that the contracting and legal offices were not actively involved in developing the acquisition strategies for the radio and television broadcasting services, nor were they involved in developing or reviewing the terms and conditions until very late in the acquisition process. For example, according to IBB officials, the substance of the agreements with the radio and television broadcasters was generally completed by mid-October 2006. However, the contracting officer who awarded the contracts indicated he was not made aware of the planned acquisition until Friday, December 1. At that time, IBB notified the contracting officer to prepare to award the contracts as early as the following Monday, based on the terms and conditions that Marketing and Program Placement officials had agreed to with the broadcasters. According to representatives from IBB’s contracting and legal offices, they had been unaware of the proposed contract actions until that time, precluding their ability to provide input into the acquisition strategy or to assess the potential for conducting a more robust competition. As a result, the contracting office’s role appeared to be limited to verifying the terms and conditions that the programming officials had reached with the broadcasters during their internal assessment. When agencies cite urgency as the basis for using other than full and open competition, the FAR requires them to describe the actions, if any, the agencies will take to remove the barriers for competition before any subsequent acquisition for the services is required. IBB, however, had taken few steps to determine how it might compete future broadcasting requirements. Rather, IBB extended the radio broadcasting contract by just over 8 months through February 2008, when it ended the contract due to budget constraints. Similarly, IBB exercised two options to extend the television broadcasting contract by a total of 12 months to June 2008. 
IBB’s contracting officer told us that, in his opinion, by December 2007 IBB had sufficient knowledge of its requirements, and sufficient time to plan for and conduct a full and open competition, if IBB continued to require these services. As an interim measure, on April 25, 2008, agency officials advertised in Federal Business Opportunities their intention to exercise the final option with TV Azteca to extend contract services into December 2008. In the notice, IBB officials identified the specific times during which TV Martí programming was being broadcast and the other services provided by TV Azteca and requested that interested firms submit adequate documentation of their capability to provide these services. While the notice did not constitute a solicitation and IBB was not seeking proposals, quotes, or bids, the notice indicated that if IBB received responses it might consider competing the contract rather than exercise the option. IBB received no responses to the notice within the 15-day time period allowed. OCB’s practices provide limited visibility into key steps in soliciting, evaluating, and selecting its talent services contractors. In that regard, OCB does not require that managers document instances in which resumes were received from sources outside the formal solicitation process, nor does it require that managers document their evaluation of the resumes received. IBB officials told us they would expect that pursuant to their guidance the contract files would contain such documentation. Additionally, OCB relies on the rates provided by IBB’s Contracting for Talent and Other Professional Services Handbook when justifying what it pays for talent services. The usefulness of the handbook’s pricing guidance, however, may be limited as the rates in the handbook are neither current nor based on the local Miami market and because OCB, at times, has reduced the rates it pays due to budget constraints. 
In general, OCB officials believe that they are paying their talent services contractors below market rates. Both the FAR and IBB guidance require that contracts be competitively solicited and awarded. To identify qualified contractors, OCB seeks resumes through three different means of solicitation: (1) quarterly Federal Business Opportunities notices, (2) annual advertisements in the Miami Herald newspaper, and (3) public building notices in OCB’s lobby. These solicitations generally identify the wide range of services OCB requires annually, but do not specify the amount of work required or when the work may be needed. According to OCB officials, these solicitations result in a continuous stream of resumes throughout the year that are directed to its contracting office. Overall, an OCB official estimated OCB received over 600 resumes in 2006. Contracting officials group the resumes into talent and production services categories and distribute them to the pertinent radio, television, and technical managers. OCB’s practices, however, provide limited visibility into the source of the resumes it receives, including those that may be received from outside the formal solicitation processes. In 31 of the 37 contract files we reviewed, OCB provided copies of all three formal solicitations to document compliance with competitive solicitation requirements. In at least three instances, however, the file contained this documentation even though the managers we interviewed stated that the resume was obtained through a recommendation from an OCB employee or contractor. In one case, for example, a manager noted that the broad nature of the solicitation did not provide suitable candidates for a specific requirement. Consequently, the manager solicited referrals from colleagues and through this means found a contractor who met the requirement. 
All of the seven managers we spoke with indicated that they have received, at one point or another, resumes from outside of the formal solicitation process. A senior OCB official stated that OCB does not, however, require program managers to document when resumes are received outside of the formal solicitation process. A senior IBB contracting official stated that any resume received informally should be sent to OCB’s contracting office, which in turn should distribute it to the relevant managers for consideration along with all of the other resumes received. Further, OCB managers do not document their evaluation of the resumes they review or their rationale for selecting one contractor over another. After receiving resumes from the contracting office, the managers are responsible for evaluating the resumes and, in turn, recommending contractors for award. Six managers said that they reviewed the resumes they received to different degrees. For example, some managers indicated that they always reviewed the resumes obtained through OCB’s formal solicitations when selecting a contractor, though none documented their assessments. On the other hand, three managers indicated that they have selected contractors based on the recommendation of an OCB employee without reviewing other resumes. While each of the 37 contract files we reviewed documented the rationale for selecting the contractor, there was no documentation to indicate that other potential contractors were considered. The Contracting for Talent and Other Professional Services Handbook requires, however, that contracting personnel maintain contract files, which must contain a justification for the contractor selected along with an evaluation of prospective contractors. OCB contracting officials told us that the guidance was not clear on how OCB was to meet this requirement. 
Senior IBB contracting officials with whom we spoke told us that pursuant to the guidance in the handbook, they would expect to see documentation in OCB’s contract files of all the contractors considered and their rationale for selecting one contractor over another. After selecting a talent services contractor for award, OCB managers generally rely on the price ranges established in IBB’s handbook to justify the price OCB will pay for the service. In that regard, we found that the rates for each of the 37 contracts we reviewed were within or below the rates established in IBB’s handbook. OCB managers explained that the rates actually paid may fall below IBB’s guidance because OCB’s budgetary resources limit what it can afford to pay for these services. For example, an OCB manager told us that in February 2008 the decision was made to reduce programming costs, including decreasing the rates paid to many contractors, to stay within OCB’s budget. The usefulness of the handbook’s pricing guidance may be further limited given that the rates reflected in the handbook are not based on the local Miami market and are not current. For example, the market research used to support the rates for on- and off-camera performers was dated between October 2000 and February 2001 and only referenced prices in the Washington, D.C., and Baltimore, Maryland, areas. IBB indicated that it is in the process of determining how to update its handbook, including its guidance on how managers and contracting officers are to use the rates when establishing prices for specific contracts. For their part, OCB officials believed that they were paying less than the local market rate for talent services. Competition is a fundamental principle underlying the federal acquisition process, as it allows federal agencies to identify contractors who can meet their needs while allowing the government to rely on market forces to obtain fair and reasonable prices. 
The competition laws and regulations provide agencies considerable flexibility to use noncompetitive procedures, if adequately justified, to meet their needs and permit agencies to use less rigorous procedures for lower dollar acquisitions. In certain respects, however, IBB’s and OCB’s practices to award the contracts we reviewed lacked the discipline required to ensure transparency and accountability for their decisions in these matters. IBB did not fully document information or assumptions underlying its decisions, involve its contracting office in a timely manner, or actively take steps to promote competition on future efforts. Similarly, OCB’s practices do not fully adhere to the requirements established by IBB’s handbook to document important steps in soliciting and awarding talent services contracts, in part due to questions about how to meet the handbook’s requirements. Furthermore, the pricing guidance in the handbook may be of limited use as a tool to justify prices paid to talent services contractors. Collectively, these weaknesses underscore the need for IBB and OCB to improve their practices to enhance competition, improve transparency, and ensure accountability. To better inform acquisition decisions, improve transparency, and ensure that competition is effectively utilized, we recommend that the Broadcasting Board of Governors direct IBB to take the following three actions: reinforce existing requirements to fully document information and assumptions supporting key decisions, such as when awarding contracts using other than full and open competition; reinforce existing policy for its programming staff to involve contracting personnel at the earliest possible time during the acquisition planning stage; and plan for full and open competition on any future contracts for radio and television broadcasting services that exceed the simplified acquisition threshold. 
With respect to improving IBB’s guidance governing contracts for talent services, we recommend that the Broadcasting Board of Governors direct IBB to take the following two actions: clarify requirements in IBB’s Contracting for Talent and Other Professional Services Handbook on the receipt and evaluation of resumes and ensure that OCB’s practices are consistent with IBB’s guidance, and determine how the pricing guidance in IBB’s handbook could better meet users’ needs as part of its planned revision to the handbook. In written comments on a draft of this report, BBG did not formally comment on our recommendations. BBG subsequently informed us that it did not take exception to our recommendations and has begun to take steps to implement them. In its written comments, BBG expressed concern that the draft report title may be misconstrued as an evaluation of the overall fitness of the agency’s contracting efforts. We modified the draft report title for additional clarity. BBG also noted that we did not have access to certain classified information, which BBG officials believed prevented them from fully illustrating the sense of urgency that surrounded their efforts to award the broadcasting contracts. We noted in the draft report that BBG was unable to provide documentation of certain classified aspects of the deliberations, but we did not question BBG’s determination that there was an urgent and compelling need to award the broadcasting contracts. Rather, we noted that the agency failed to follow sound practices in such areas as documentation, stakeholder involvement, and planning for future competition, practices that are required by federal or agency acquisition regulations, and were not related to or dependent on BBG’s disclosure of classified information. BBG also provided additional context for the actions it took in awarding the broadcasting contracts and OCB’s processes for awarding talent services. 
We believe the draft report reflected this information, but have, where appropriate, incorporated BBG’s comments. These comments are reprinted in appendix II. BBG also provided technical comments, which we incorporated where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 15 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Broadcasting Board of Governors; the Executive Director, Broadcasting Board of Governors; the Director, Office of Cuba Broadcasting; the Secretary of State; and the Director, Office of Management and Budget. This report will also be made available to others on request. This report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are listed in appendix III. Our objectives were to evaluate the processes used (1) by the International Broadcasting Bureau (IBB) to award the Radio Mambi and TV Azteca contracts, and (2) by the Office of Cuba Broadcasting (OCB) to award its talent services contracts. For the purposes of this review, talent services contracts refer to those contracts awarded by OCB for writers, performers, program hosts, reporters, and technical support required to produce and broadcast radio and TV news and entertainment programming. To determine the laws and regulations governing the award of these contracts, we reviewed the Competition in Contracting Act, the Federal Acquisition Regulation, the Broadcasting Board of Governors’ (BBG) Acquisition Regulations, and IBB’s Contracting for Talent and Other Professional Services Handbook. 
Collectively, these provide guidance applicable to IBB and OCB on soliciting, evaluating, and awarding contracts above and below the simplified acquisition threshold of $100,000. We also reviewed the Smith-Mundt Act, as amended, as well as the Radio Broadcasting to Cuba Act and the Television Broadcasting to Cuba Act to determine the authority by which OCB may broadcast radio and television programming to Cuba. We did not specifically assess whether the award and the terms and conditions of the broadcasting contracts were in compliance with these acts. To evaluate the processes used by IBB to award the Radio Mambi and TV Azteca contracts, we reviewed the contract files to determine the information and assumptions supporting IBB’s decisions leading to the award of the two contracts in December 2006. As both contracts were awarded using other than full and open competition, we reviewed IBB’s justification and approval documents and other unclassified documentation supporting the solicitation process and award decision, including the July 2006 report of the Commission for Assistance to a Free Cuba. We also interviewed officials in IBB’s offices of Marketing and Program Placement, Engineering, and Contracts, as well as officials from BBG’s offices of General Counsel and Congressional Relations, to determine their roles and responsibilities to identify potential service providers and to negotiate and award the two contracts. Additionally, we interviewed the Director, OCB, and other senior OCB officials, as well as officials from the Department of State, to obtain information on their involvement with the award of these contracts. To assess the processes used by OCB to award its talent services contracts, we compiled information from the Federal Procurement Data System-Next Generation on the contracts awarded by OCB from fiscal years 2005 through 2007. This analysis identified 723 contracts or contract actions valued at over $3,000 for various goods and services. 
We then selected a stratified random sample of 37 talent services contracts—examining at least 10 from each year—for a more in-depth review. Because of our sample size, the results of our analysis of these contracts cannot be generalized to describe the process used to award all of OCB’s contracts. Specifically, we reviewed the contract files to determine the extent to which the files contained documentation of the process used to solicit and evaluate resumes from potential talent services contractors. We also analyzed how the rates paid to the contractors compared against the rates recommended by IBB’s handbook. We also interviewed OCB program managers and senior contracting officials to obtain insight into how OCB determined its requirements and selected talent services contractors. We interviewed senior IBB officials to obtain information on how IBB’s handbook was developed and the procedures that OCB should follow when awarding talent services contracts. To provide context for OCB’s contracting activities, we analyzed budget and financial data provided by BBG’s Chief Financial Officer for fiscal year 2007 and verified our summary of the information using budget activities data provided by OCB’s Director of Administration. We conducted this performance audit from February 2008 through June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following are GAO’s comments on the Broadcasting Board of Governors’ letter dated July 2, 2008. 1. As noted by BBG, the draft report provided to BBG correctly characterized the scope of GAO’s review. 
In that regard, we reviewed two broadcasting contracts awarded by IBB on behalf of OCB. Similarly, the draft report noted that the results of our analysis of OCB’s talent contracts are not generalizable to the processes used by OCB to award all of its contracts. Our work, however, does provide a sound basis for discussing OCB’s processes for awarding its talent services contracts, which are essential for providing the on-air talent, writers, and technical support services needed to produce and broadcast its programming. We modified the draft report title for additional clarity. 2. We noted in the draft report that BBG was unable to provide documentation of certain classified aspects of deliberations involving several agencies, including the National Security Council and the Departments of Defense and State. BBG officials stated that they were not authorized to disclose this information. As a result, BBG officials expressed concern that they were unable to fully illustrate the sense of urgency that surrounded their contracting efforts. We do not see this as a limitation to our scope, however, as we did not question BBG’s determination that there was an urgent and compelling need to award the broadcasting contracts. Rather, we noted that the agency failed to follow sound practices in such areas as documentation, stakeholder involvement, and planning for future competition, which are required by federal or agency acquisition regulations, and are not related to or dependent on BBG’s disclosure of classified information. We also note that the July 2006 report by the Commission for Assistance to a Free Cuba is unclassified and publicly available on the Commission’s Web site. 3. We believe the draft report appropriately reflected the context and process used by IBB to identify and award the radio and television broadcasting contracts. 
We do note, however, that the contract files do not make reference to the 2002 engineering study; rather, IBB officials provided that information during the course of our review to supplement the information in the files. We also note that the 2002 engineering report discussed only radio stations, and not television stations. In that regard, we found that OCB identified four local Miami television stations carried on DirecTV for IBB’s consideration in August 2006. 4. We stated in the draft report that as an interim measure to conducting a full and open competition, on April 25, 2008, agency officials advertised in Federal Business Opportunities their intention to exercise the final option with TV Azteca to extend contract services into December 2008. We note, however, that agency officials had not taken any action in this regard until we brought it to their attention during the course of our review that they were not in compliance with the notice requirements prescribed by the Federal Acquisition Regulation. 5. We believe the draft report appropriately reflected the context and process used by IBB to identify and award the radio and television broadcasting contracts. As the draft report noted, however, IBB did not document in its contract files key information or assumptions underlying its decision not to seek competitive offers, to limit the number of potential providers it considered, and to limit the basis used to negotiate final prices for the services provided. In these cases, IBB officials supplemented the information contained in the contract files by providing information and e-mails from their personal files. We do note that BBG’s description of TV Azteca as the best overall value to the government (factoring the broadcast schedule times, surrounding programming, as well as cost) is somewhat inconsistent with the information contained in the contract files and subsequently provided by IBB. 
Our review found that TV Azteca was the only television station with a feasible offer after a preferred station withdrew its offer, and thus became the basis for IBB’s determination that only one responsible source could meet its needs. In addition to the contact named above, Timothy J. DiNapoli, Assistant Director; Katherine Trimble; Justin Jaynes; Leigh Ann Nally; Julia Kennon; and John Krump made key contributions to this report. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The United States has long provided the Cuban people with alternative sources of news and information. As part of this effort, in December 2006 the International Broadcasting Bureau (IBB) awarded sole-source contracts to two Miami radio and television stations—Radio Mambi and TV Azteca—to provide additional broadcasting options. Additionally, the Office of Cuba Broadcasting (OCB) annually awards millions of dollars in contracts for talent services—writers, reporters, and technical support—needed to produce and broadcast news and entertainment programming. GAO evaluated the processes used to award (1) the Radio Mambi and TV Azteca broadcasting contracts, and (2) talent services contracts. We reviewed contract files and other documentation and interviewed program managers and contracting officers to determine the process used to award the two broadcasting contracts and a nongeneralizable selection of 37 talent services contracts. IBB's approach for awarding the Radio Mambi and TV Azteca contracts did not reflect sound business practices. According to officials from IBB and the Broadcasting Board of Governors—IBB's and OCB's parent organization—the confluence of several interrelated events—ongoing interagency deliberations, the issuance of a July 2006 report by a Cabinet-level commission, and concerns about the health of Fidel Castro—required them to quickly obtain additional broadcasting services to Cuba. Competition laws and regulations provide agencies considerable flexibility to use noncompetitive procedures, if adequately justified, to meet their needs. In certain respects, however, IBB did not fully document in its contract files key information or assumptions underlying its decisions not to seek competitive offers, to limit the number of potential providers it considered, and to limit the basis used to negotiate the final prices for the services provided. 
Additionally, IBB did not actively involve its contracting office until just prior to contract award, though agency regulations and our prior work recognize that timely involvement by stakeholders helps promote successful acquisition outcomes. Finally, though it partly justified its awards based on urgency, IBB exercised multiple options on the two contracts to extend their period of performance into 2008. Only recently has it taken steps to identify additional providers. OCB's practices for soliciting, evaluating, and selecting its talent contractors provide limited visibility at key steps. OCB issues quarterly announcements in Federal Business Opportunities, advertises annually in a local newspaper, and posts announcements at OCB's headquarters. OCB does not require, however, that managers document instances in which resumes were received from sources outside these processes, such as when a contractor is recommended by an OCB employee. Further, OCB does not document why other potential providers were not selected as required by IBB's guidance, in part due to questions about how to meet this requirement. Lastly, OCB managers use an IBB handbook to justify how much OCB pays for talent services, but the usefulness of the handbook's pricing guidance may be limited as the recommended rates are not current or based on the local market.
In 2004, DOJ estimated that American Indians experience rates of violent crime that are far higher than those of most other racial and ethnic groups in the United States. For example, DOJ estimated that across the United States, the annual average violent crime rate among American Indians was twice as high as that of African Americans, 2½ times as high as that of whites, and 4½ times as high as that of Asians. Also, domestic and sexual violence against American Indian women is among the most critical public safety challenges in Indian country, where, in some tribal communities, according to a study commissioned by DOJ, American Indian women face murder rates that are more than 10 times the national average. Oftentimes, alcohol and drug use play a significant role in violent crimes in Indian country. According to DOJ, American Indian victims reported alcohol use by 62 percent of offenders compared to 42 percent for all races. Tribal or BIA law enforcement officers are often among the first responders to crimes on Indian reservations; however, law enforcement resources are scarce. BIA estimates that there are fewer than 3,000 tribal and BIA law enforcement officers to patrol more than 56 million acres of Indian country. According to a DOJ study, the ratio of law enforcement officers to residents in Indian country is far lower than in non-tribal areas. In the study, researchers estimated that there are fewer than 2 officers per 1,000 residents in Indian country compared to a range of 3.9 to 6.6 officers per 1,000 residents in non-tribal areas such as Detroit, Michigan, and Washington, D.C. The challenge of limited law enforcement resources is exacerbated by the geographic isolation or vast size of many reservations. In some instances officers may need to travel hundreds of miles to reach a crime scene. 
For example, the Pine Ridge Indian Reservation in South Dakota has about 88 sworn tribal officers to serve 47,000 residents across 3,466 square miles, which equates to a ratio of 1 officer per 39 square miles of land, according to BIA. In total there are 565 federally recognized tribes; each has unique public safety challenges based on different cultures, economic conditions, and geographic location, among other factors. These factors make it challenging to implement a uniform solution to address the public safety challenges confronting Indian country. Nonetheless, tribal justice systems are considered to be the most appropriate institutions for maintaining law and order in Indian country. Generally, tribal courts have adopted federal and state court models; however, tribal courts also strive to maintain traditional systems of adjudication such as peacemaking or sentencing circles. Law enforcement, courts, and detention/correction programs are key components of the tribal justice system that is intended to protect tribal communities; however, each part of the system faces varied challenges in Indian country. Shortcomings and successes in one area may exacerbate problems in another area. For example, a law enforcement initiative designed to increase police presence on a reservation could result in increased arrests, thereby overwhelming a tribal court’s caseload or an overcrowded detention facility. The exercise of criminal jurisdiction in Indian country depends on several factors, including the nature of the crime, the status of the alleged offender and victim—that is, whether they are Indian or not—and whether jurisdiction has been conferred on a particular entity by, for example, federal treaty or statute. 
As a general principle, the federal government recognizes Indian tribes as “distinct, independent political communities” that possess powers of self-government to regulate their “internal and social relations,” which includes enacting substantive law over internal matters and enforcing that law in their own forums. The federal government, however, has plenary and exclusive authority to regulate or modify the powers of self-government that tribes otherwise possess, and has exercised this authority to establish an intricate web of jurisdiction over crime in Indian country. The General Crimes Act, the Major Crimes Act, and Public Law 280, which are broadly summarized in table 1, are the three federal laws central to the exercise of criminal jurisdiction in Indian country. These laws as well as provisions of the Indian Civil Rights Act related to tribal prosecutions are discussed more fully in appendix II. The exercise of criminal jurisdiction by state governments in Indian country is generally limited to two instances, both predicated on the offense occurring within the borders of the state—where both the alleged offender and victim are non-Indian, or where a federal statute confers on a state, or authorizes a state to assume, criminal jurisdiction over Indians in Indian country. Otherwise, only the federal and tribal governments have jurisdiction. Where both parties to the crime are Indian, the tribe generally has exclusive jurisdiction for misdemeanor-level offenses, but its jurisdiction runs concurrent with the federal government for felony-level offenses. Where the alleged offender is Indian but the victim is non-Indian, tribal and federal jurisdiction is generally concurrent. Finally, federal jurisdiction is exclusive where the alleged offender is non-Indian and the victim is Indian. Table 2 summarizes aspects of federal, state, and tribal jurisdiction over crimes committed in Indian country. 
DOI is one of two key federal agencies that have a responsibility to provide public safety in Indian country. Within DOI, BIA is assigned responsibility to support tribes in their efforts to ensure public safety and administer justice within their reservations as well as to provide related services directly or through contracts, grants, or compacts to 565 federally recognized tribes with a service population of about 1.6 million Indians across the United States. To that end, BIA’s Office of Justice Services manages law enforcement, detention, and tribal court programs. Specifically, within BIA’s Office of Justice Services, the Division of Law Enforcement supports 191 tribal law enforcement agencies and the Division of Corrections supports 91 tribal detention programs. About 90 BIA special agents are responsible for investigating crimes committed in Indian country that involve violations of federal and tribal law, including murder, manslaughter, child sexual abuse, burglary, and the production, sale, or distribution of illegal drugs, among other criminal offenses. Following completion of an investigation, BIA special agents refer it to the USAO for prosecution. BIA reported that it distributed approximately $260 million of its fiscal year 2010 appropriation among tribal law enforcement and detention programs. Additionally, BIA reported that it funded maintenance and repair projects at four tribal detention centers totaling $6.5 million from amounts appropriated under the American Recovery and Reinvestment Act of 2009 (Recovery Act). Within BIA’s Office of Justice Services, the Division of Tribal Justice Support for Courts works with tribes to establish and maintain tribal judicial systems. This includes conducting assessments of tribal courts and providing training and technical assistance on a range of topics, including establishing or updating law and order codes and implementing strategies to collect and track caseload data. 
BIA reported that it distributed $24.5 million to support tribal court initiatives in fiscal year 2010. Figure 1 depicts the key DOI entities and their respective responsibilities related to supporting tribal justice systems. DOJ also plays a significant role in helping tribes maintain law and order in Indian country, and DOJ officials have stated that the department has a duty to help tribes confront the dire public safety challenges in tribal communities. Within DOJ, responsibility for supporting tribal justice systems falls to multiple components, including the FBI, which investigates crimes; the U.S. Attorneys’ Offices, which prosecute crimes in Indian country; and the Office of Justice Programs, which provides grant funding, training, and technical assistance to federally recognized tribes to enhance the capacity of tribal courts, among other tribal justice programs. Figure 2 depicts the key DOJ entities and their respective responsibilities related to supporting tribal justice systems. The FBI works with tribal and BIA police and BIA criminal investigators to investigate crime in Indian country. Currently, the FBI dedicates more than 100 special agents from approximately 16 field offices to investigate cases on over 200 reservations nationwide. According to the FBI, its role varies from reservation to reservation, but generally the agency investigates crimes such as murder, child sexual abuse, violent assaults, and drug trafficking, among other criminal offenses. FBI officials explained that approximately 75 percent of the cases the FBI investigates in Indian country involve death investigations, physical and sexual abuse of a child, and violent felony assaults such as domestic violence and rape. 
Similar to BIA criminal investigators, FBI special agents refer criminal investigations to the USAO for prosecution; however, FBI officials explained that FBI agents may elect not to refer investigations that, pursuant to supervisory review, lack sufficient evidence of a federal crime or sufficient evidence for prosecution. Under the direction of the Attorney General, the USAO may prosecute crimes in Indian country where federal jurisdiction exists. Of the 94 judicial districts located throughout the United States and its territories, 44 districts contain Indian country. According to DOJ, approximately 25 percent of all violent crime cases opened each year by district USAOs nationwide occur in Indian country. In 2010, DOJ named public safety in Indian country as a top priority for the department. To that end, in January 2010, each USAO with Indian country jurisdiction was directed to develop operational plans that outline the efforts the office will take to address public safety challenges facing tribes within its district—particularly violence against women. The Bureau of Justice Assistance (BJA) within OJP is one of several DOJ components that provide grant funding, training, and technical assistance designed to enhance and support tribal governments’ efforts to reduce crime and improve the function of criminal justice in Indian country. For example, BJA awards grant funding to tribes for the planning, construction, and renovation of correctional facilities. In fiscal year 2010, BJA awarded 25 grants to tribes totaling about $9 million to support tribal correctional facilities. Further, in fiscal year 2010, BJA awarded $220 million in grant funding provided through the Recovery Act for 20 construction and renovation projects at correctional facilities on tribal lands. 
Additionally, BJA administers the Tribal Courts Assistance Program—a grant program—which is intended to help federally recognized tribes develop and enhance the operation of tribal justice systems, which may include activities such as training tribal justice staff; planning new, or enhancing existing, programs such as peacemaking circles and wellness courts; and supporting alternative dispute resolution methods. In fiscal year 2010, BJA awarded 48 grants totaling $17 million to tribes to establish new or enhance existing tribal court functions. In its role as a policy and legal advisor regarding Indian country matters within DOJ, the Office of Tribal Justice facilitates coordination among DOJ components working on Indian issues. Additionally, the office functions as the primary point of contact for tribal governments. All 12 tribes we visited reported challenges that have made it difficult for them to adjudicate crime in Indian country, including: (1) limitations on criminal jurisdiction and sentencing authority, (2) delays in receiving timely notification about the status of investigations and prosecutions from federal entities, (3) lack of adequate detention space for offenders convicted in tribal court, (4) perceived encroachment upon judicial independence by other branches of the tribal government, and (5) limited resources for day-to-day court operations. Various ongoing and planned federal efforts exist to help tribes effectively adjudicate crimes within their jurisdiction. For example, TLOA, which was enacted in July 2010, attempts to clarify roles and responsibilities, increase coordination and communication, and empower tribes with the authorities necessary to reduce the prevalence of crime in Indian country. Tribal courts have jurisdiction only over crimes committed by Indian offenders in Indian country, and their ability to effectively promote public safety and justice is curtailed by their limited sentencing authority and jurisdiction. 
As a result, even where tribal jurisdiction exists, tribes will often rely on the federal government to investigate and prosecute more serious offenses, such as homicide and felony-level assault, because a successful federal prosecution could result in a lengthier sentence and better ensure justice for victims of crime in Indian country. First, federal law limits the general sentencing authority of tribal courts to a maximum term of imprisonment of 1 year per offense. Officials from 6 of the 12 tribes we visited told us that the 1-year limit on prison sentences did not serve as an effective deterrent against criminal activity and may have contributed to the high levels of crime and repeat offenders in Indian country. Second, tribes do not have any jurisdiction to prosecute non-Indian criminal offenders in Indian country, including those who commit crimes of domestic violence, assault, and murder. Therefore, tribes must rely on the USAO to prosecute non-Indian offenders. For example, in instances where a non-Indian abuses an Indian spouse, the tribe does not have the jurisdiction to prosecute the offender, and unless the USAO prosecutes the case, the non-Indian offender will not be prosecuted for the domestic violence offense. The rate at which non-Indians commit crime on the reservations we visited is unclear, as the tribes were not able to provide related crime data. Officials from 6 of the tribes we visited noted that non-Indians may be more likely to commit crimes in Indian country because they are aware that tribes lack criminal jurisdiction over non-Indians and that their criminal activity may not draw the attention of federal prosecutors. 
For example, an official from a South Dakota tribe that we visited told us that the tribe has experienced problems with MS-13 and Mexican Mafia gangs that commit illegal activities, such as the distribution or sale of illegal drugs, on the reservation because, as the official explained, they presume that federal prosecutors may be more inclined to focus their resources on higher-volume drug cases. Further, in 2006, the U.S. Attorney for the District of Wyoming testified about a specific instance in which a Mexican drug trafficker devised a business plan to sell methamphetamine at several Indian reservations in Nebraska, Wyoming, and South Dakota; the plan began with developing relationships with American Indian women on these reservations, who would then help recruit customers. According to a special agent involved in the case, the drug trafficker established drug trafficking operations to exploit jurisdictional loopholes, believing that he could operate with impunity. According to a tribal justice official from a New Mexico pueblo, small-scale drug trafficking operations in Indian country can have an effect on tribes as devastating as that of large-scale operations in large cities; therefore, if the federal government does not respond to small-scale operations in Indian country, the success of such operations may contribute to the sense of lawlessness in Indian country. When we asked tribes that we visited how they decide whether to prosecute serious crimes over which they do have jurisdiction, 9 of the 12 tribes we visited noted that they may exercise concurrent jurisdiction and prosecute those crimes in tribal court. Some officials, however, reported that they would rather preserve their tribe's limited resources, recognizing that sentences more commensurate with the crime may result only from federal prosecution. 
Nonetheless, 5 of the 12 tribes we visited in Arizona, New Mexico, North Dakota, and South Dakota perceive that the district USAOs decline to prosecute the majority of Indian country matters that are referred to them. Officials from the tribes we visited expressed concerns about the rate at which USAOs decline to prosecute Indian country crimes and noted that a high number of declinations sends a signal to crime victims and criminals that there is no justice or accountability. In December 2010, we reported that approximately 10,000 Indian country criminal matters were referred to USAOs from fiscal year 2005 through 2009. During that period, USAOs declined to prosecute 50 percent of the approximately 9,000 matters that they resolved, while they had not yet decided whether to prosecute or decline the remaining 1,000 matters. For criminal matters referred to USAOs, “weak or insufficient admissible evidence” followed by “no federal offense evident” were among the most frequently cited reasons associated with declinations, based on available data in DOJ’s case management system, the Legal Information Office Network System. Officials from 8 of the 12 tribes we visited stated that they rely on the federal government to investigate and prosecute serious crimes; however, officials from the tribes we visited reported that their tribes had experienced difficulties in obtaining information from federal entities about the status of criminal investigations. For example: Officials from 5 of the 12 tribes we visited told us that oftentimes they did not know whether criminal investigators—most commonly, BIA or FBI—had referred the criminal investigation to the USAO for prosecution. Officials from the tribes we visited expressed concern about the lack of timely notification from local USAOs about decisions to prosecute a criminal investigation. 
Tribal justice officials from 4 of the 12 tribes we visited noted that they have to initiate contact with their district USAOs to get information about criminal matters being considered for prosecution and that only upon request will the USAO provide verbal or written notification of the matters it declines to prosecute; even then, little detail is provided about the reasons for the declination. We examined a declination letter that was sent to one of the tribes we visited and found that the letter stated the matter was being referred back to the tribe for prosecution in tribal court but provided no additional information about the reason for the declination decision. The Chief Prosecutor from one of the pueblos we visited noted that it can be difficult for the USAO to share details about a criminal matter for fear that doing so may violate confidentiality agreements or impair prosecutors’ ability to successfully prosecute should the investigation be reopened at a later date. However, according to tribal officials, it is helpful to understand the reason for declining to prosecute a criminal matter so that tribal prosecutors can better determine whether to expend their resources to prosecute the matter in tribal court. Officials from 6 of the 12 tribes we visited told us that when criminal matters are declined, federal entities generally do not share evidence and other pertinent information that would allow the tribe to build its case for prosecution in tribal court. This can be especially challenging for prosecuting offenses such as sexual assault, where DNA evidence collected cannot be replicated should the tribe conduct its own investigation following notification of a declination, according to officials. When the federal government decides not to pursue a prosecution, a tribe may decide to prosecute such a case provided that any tribal statute of limitations has not expired. 
Officials from 6 of the 12 tribes that we visited noted that it is not uncommon for the tribe to receive notification of USAO declination letters after the tribe’s statute of limitations, which ranges from 1 to 3 years, has expired. In addition to affecting the tribe’s ability to administer justice in a timely manner—that is, before the statute of limitations expires—officials also noted that the absence of investigation or declination information makes it difficult for tribal justice officials to successfully prosecute a criminal matter in tribal court and assure crime victims that every effort is being made to prosecute the offender. Officials from 6 of the 12 tribes we visited reported that they do not have adequate detention space to house offenders convicted in tribal courts and may face overcrowding at tribal detention facilities. Similarly, BIA and DOJ have acknowledged that detention space in Indian country is inadequate. Officials at one of the New Mexico pueblos we visited noted that its detention facility has a maximum capacity of 43 inmates; however, as of October 2010, more than 90 inmates were imprisoned at the facility. In some instances, tribal courts are forced to make difficult decisions such as (1) foregoing sentencing a convicted offender to prison, (2) releasing inmates to make room for another offender who is considered to be a greater danger to the community, and (3) contracting with state or tribal detention facilities to house convicted offenders, which can be costly. According to an official from one of the New Mexico pueblos we visited, at times, when the pueblo has reached its detention capacity—up to three inmates—the pueblo has had to forego sentencing convicted juvenile or adult offenders to prison because using a nearby tribal facility to house its inmates would pose an economic hardship for the pueblo. 
Also, of the 12 tribes we visited, 5 noted that using detention facilities at another location is not always a viable option for housing offenders. Housing offenders in another entity’s detention facility can be costly for the tribe, which has to pay to transport inmates between the tribal court of jurisdiction and the detention facility for arraignments, trial, and other appearances. Generally, the tribes we visited have incorporated practices that help to foster and maintain judicial independence—that is, the ability of the tribal courts to function without any undue political or ideological influence from the tribal government. Various factors, such as a tribe’s approach to removing judges and intervening on behalf of tribal members during an ongoing criminal matter, could affect internal and external perceptions of a tribal court’s independence. The manner in which some tribes remove judges serves as an example of tribes’ efforts to foster and maintain judicial independence. For example, at 11 of the 12 tribes we visited, a tribal judge can only be removed from office for cause following a majority vote by the Tribal Council. In another instance, the Chief Judge at one of the tribes we visited explained that tribal members will often approach the Tribal Council to intervene when members are not satisfied with the tribal court’s decision. The Tribal Council subsequently issued several reminders to tribal members that unsatisfied parties to a criminal matter can appeal the trial court’s decisions in the tribe’s appellate court. Decisions of this tribe’s appellate court, however, are final and not subject to review by the Tribal Council, thereby upholding and preserving the decisions and independence of the tribal court. The constitutions of 4 of the 12 tribes we visited state that, upon appointment, judges’ salaries cannot be reduced while they serve in office, thereby helping to protect the independence of the judiciary. 
Additionally, officials from the tribes we visited reported that certain activities may undermine a tribal court’s independence. For example, officials from 5 of the 12 tribes we visited noted that the tribal court is viewed by tribal members as a tribal program rather than as a separate and autonomous branch of government. At one of the tribes we visited, according to officials, the constitution was amended in 2008 to articulate the independence of the tribal court from the legislative and executive branches of the tribal government. However, according to the officials from this tribe, Tribal Council members continue to approach criminal court judges to inquire about the status of ongoing cases, and Tribal Council members have intervened on behalf of tribal members to discuss reversing the court’s decisions on certain criminal matters. Such actions potentially add to the perception that the court is not autonomous and is subject to the rule of the executive or legislative branch, which, in turn, can threaten the integrity of the tribal judiciary and create the perception of unfairness. Figure 3 shows a sign at a tribal court designed to serve as a measure to prevent people from engaging in ex parte communications. Additionally, the manner in which tribal governments distribute federal funding to tribal courts may limit courts’ control of their budgets. According to a BIA official and judges from one of the tribes we visited, the placement of the tribal court within the tribe’s overall budget structure—that is, not separate from other tribal programs that BIA funds—could contribute to the perception that the tribal court has little to no autonomy and separation from other tribal programs. Officials at the 12 tribes we visited told us they face various resource limitations, resulting in reliance on federal funding, staffing shortages, and limited capacity to conduct jury trials. 
Tribes We Visited Reported They Rely on Federal Funding to Operate Tribal Courts Regardless of Their Size or Economic Condition.

We found that all 12 of the tribes we visited rely fully or partially on federal funding to operate their court systems regardless of the size of the population the tribal court serves, its geographic location, or economic conditions. For example, one of the tribes we visited relies on federal funding for aspects of its court system even though federal funding generally accounts for less than 10 percent of the court system’s total budget, according to a senior tribal court official. This official explained that federal funding is barely sufficient to pay salaries for positions such as court clerks. Generally, of the 12 tribes we visited, the tribal government provided partial funding to 10 of the tribal courts; the remaining 2 were funded solely by federal funds. For further information about the funding levels for each of the 12 tribes we visited, see appendix III. Further, officials at 11 of the 12 tribes we visited noted that their tribal courts’ budgets are inadequate to properly carry out the duties of the court; therefore, the tribes often have to make tradeoffs, which may include not hiring key staff such as probation officers or not providing key services such as alcohol treatment programs. According to BIA, historically, federal funding for tribal courts has been less than what tribes deemed necessary to meet the needs of their judicial systems. While the tribal courts we visited collect a range of fees and fines, which can be an additional source of operating revenue, 6 of the 12 tribes noted that the fees and fines the court collects are to be returned to the tribal government’s general fund rather than retained for use by the tribal court. 
Where possible, to help fill the courts’ budget shortfalls, officials at 3 of the 12 tribes we visited told us that they have sought funding from other sources, such as state grants, or partnered with other tribal programs to provide treatment services for parties appearing before their courts.

According to Tribes We Visited, Lack of Funding Affects Tribal Courts’ Ability to Maintain Adequate Staffing Levels and Provide Training to Court Personnel.

Officials at 7 of the 12 tribes we visited told us that their tribal courts are understaffed and that funding is often insufficient to employ personnel in key positions such as public defenders, prosecutors, and probation officers, among other positions. Additionally, officials at 3 of the New Mexico pueblos we visited told us that law enforcement officers also served as prosecutors despite not being trained in the practice of law and not having sufficient training to serve as prosecutors. The Chief Judges at 2 of the New Mexico pueblos told us that the pueblos do not have any other alternatives due to the lack of funding. For further information about the staffing levels at each of the 12 tribes we visited, see appendix III. Tribal justice officials also stated that their tribal courts face various challenges in recruiting and retaining qualified judicial personnel, including: (1) inability to pay competitive salaries, (2) housing shortages on the reservation, and (3) the rural and remote geographic location of the reservation, among other things. For example, a tribal justice official from one of the South Dakota tribes we visited noted that the tribe is often forced to go outside its member population to hire judges and attorneys because tribal members often lack education beyond the eighth grade; however, the tribe often faces difficulties in paying competitive salaries to hire legally trained non-Indians, who often command salaries that are higher than the tribe can afford. 
Additionally, tribal justice officials noted that while some tribal members do pursue higher education, they often do not return to work in tribal communities, thereby creating a shortage in available talent to draw from within the tribe’s community. Further, officials from 2 of the tribes we visited noted that they may not be able to attract qualified applicants because of the rural location. Even if tribes overcome recruitment challenges, tribal justice officials noted that they may also face difficulties in retaining personnel—particularly non-Indians—because these candidates’ marketability often increases after gaining experience in Indian country and they are able to pursue opportunities that meet their compensation and quality-of-life needs, such as higher salaries and improved housing. Officials at 4 of the 12 tribes we visited noted that the courts often use DOJ grant funds to pay salaries for various positions without the benefit of a sustainable funding source once the grant funds expire. For example, one of the South Dakota tribes we visited used grant funds to hire a compliance officer, probation officer, and process server to focus exclusively on domestic violence cases, which were occurring at a high rate on the reservation. Officials explained that they saw a decrease in reported cases of domestic violence during this time; however, once the grant funds expired, they were no longer able to maintain these positions and perceived an increase in domestic violence cases. Additionally, lack of funding hinders tribes’ abilities to provide personnel with training opportunities to obtain new skills or enhance existing ones. For example, at one of the North Dakota tribes we visited, court personnel explained that court clerks needed training to enhance their knowledge of scheduling court proceedings, developing case and records management systems, and familiarizing themselves with criminal procedures, among other things. 
Additionally, because of the increases in the number of cases involving illegal drugs, one of the judges we met with also expressed a need for training to effectively manage criminal proceedings that involve the use of methamphetamines. In particular, 8 of the 12 tribes we visited noted that they face difficulties in acquiring funds to register personnel for training as well as to pay for related expenses such as mileage reimbursement or other transportation costs, lodging, and per diem. The Chief Judge from one of the tribes we visited noted that the tribe has been able to acquire scholarships from various training providers to help absorb full or partial costs for certain training. Further, training providers such as the National Judicial College have begun to provide web-based training, which, according to officials, is more cost-effective.

Tribes We Visited Reported Having Limited Capacity to Conduct Jury Trials.

Upon request, any defendant in tribal court accused of an offense punishable by imprisonment is entitled to a trial by jury of not less than six persons. However, officials from 7 of the 12 tribes we visited reported that their tribal courts have limited capacity to conduct jury trials due to limited courtroom space, funding, and transportation. For example, the courtroom for one of the New Mexico pueblos that we visited does not have adequate space to seat a six-person jury and, according to officials, there is not another facility that can be used to set up a jury box. Additionally, tribal officials at 2 of the 12 tribes we visited stated that their courts lack funding to pay tribal members a per diem for jury duty. Moreover, potential jurors’ lack of access to personal or public transportation can hinder the courts’ ability to seat a jury. 
For example, officials from two of the Arizona tribes we visited explained that there is no public transportation on the reservations, and consequently it is difficult for tribal members without access to personal transportation to travel to court. Various federal efforts exist that could help to address some of the challenges that tribes face in effectively adjudicating crime in Indian country. For example, TLOA: (1) authorizes tribal courts to impose a term of imprisonment on certain convicted defendants in excess of 1 year; (2) authorizes and encourages USAOs to appoint Special Assistant U.S. Attorneys (SAUSA), including the appointment of tribal prosecutors, to assist in prosecuting federal offenses committed in Indian country; (3) requires that federal entities coordinate with appropriate tribal law enforcement and justice officials on the status of criminal investigations terminated without referral or declined for prosecution; and (4) requires BOP to establish a pilot program to house, in federal prison, Indian offenders convicted of a violent crime in tribal court and sentenced to 2 or more years of imprisonment. Additionally, to help address issues regarding judicial independence, BIA has ongoing and planned training to help increase tribes’ awareness about the significance of judicial independence. Many of these initiatives directly resulted from the enactment of TLOA in July 2010 and are in the early stages of implementation. As a result, it is too early to tell the extent to which these initiatives are helping to address the challenges that tribes face in effectively adjudicating crime in Indian country. 
Various federal efforts are underway that provide additional resources to assist tribes in the investigation and prosecution of crime in Indian country, including (1) adding federal prosecutors, (2) authorizing tribal courts to impose longer prison sentences on certain convicted defendants, (3) mandating changes to the program that authorizes BIA to enter into agreements to aid in law enforcement in Indian country, and (4) affording tribal prosecutors opportunities to become Special Assistant U.S. Attorneys to assist in prosecuting federal offenses committed in Indian country. First, to help address the high levels of violent crime in Indian country, in May 2010, DOJ announced the addition of 30 Assistant U.S. Attorneys (AUSA) to serve as tribal liaisons in 21 USAO district offices that contain Indian country, including the four states that we visited as part of our work—Arizona, New Mexico, North Dakota, and South Dakota. According to DOJ, these additional resources will help the department work with its tribal law enforcement partners to improve public safety in Indian country. DOJ also allocated 3 additional AUSAs to help support its Community Prosecution Pilot Project, which it launched at two of the tribes we visited—the portion of Navajo Nation within New Mexico and the Oglala Sioux Tribe in South Dakota. Under this pilot project, the AUSAs will be assigned to work at their designated reservation on a regular basis and will work in collaboration with the tribe to develop strategies that are tailored to meet the public safety challenges facing the tribe. Second, TLOA authorizes tribal courts to imprison convicted offenders for up to 3 years if the defendant has been previously convicted of the same or a comparable crime in any jurisdiction (including tribal) within the United States or is being prosecuted for an offense comparable to one that would be punishable by more than 1 year if prosecuted in state or federal court. 
To impose an enhanced sentence, the defendant must be afforded the right to effective assistance of counsel and, if indigent, the assistance of a licensed attorney at the tribe’s expense; a licensed judge with sufficient legal training must preside over the proceeding; the tribal government’s criminal laws and rules of evidence and criminal procedure must be made publicly available prior to charging the defendant; and the tribal court must maintain a record of the criminal proceedings. Generally, tribal justice officials from 9 of the 12 tribes we visited stated that they welcome the new sentencing authority, but officials from 2 of the tribes noted that they would likely use the new authority on a case-by-case basis because they lacked the infrastructure to fully meet the requisite conditions. For example, the Chief Judge from one of the New Mexico pueblos we visited noted that rather than hiring a full-time public defender, the pueblo is considering hiring an attorney on contract to be used on a case-by-case basis when the enhanced sentencing authority may be exercised. Third, TLOA mandates changes to the Special Law Enforcement Commission (SLEC) program, which authorizes BIA to enter into agreements for the use of personnel or facilities of federal, tribal, state, or other government agencies to aid in the enforcement of federal or, with the tribe’s consent, tribal law in Indian country. Specifically, within 180 days of enactment, the Secretary of the Interior shall develop a plan to enhance the certification and provision of special law enforcement commissions to tribal law enforcement officials, among others, that includes regional training sessions, held at least biannually in Indian country, to educate and certify candidates for the SLEC. The Secretary of the Interior, in consultation with tribes and tribal law enforcement agencies, must also develop minimum requirements to be included in SLEC agreements. 
Under the SLEC program, administered by BIA, tribal police may be deputized as federal law enforcement officers, which affords them the authorities and protections available to federal law enforcement officers. According to BIA, given the potential difficulties arresting officers face in determining whether a victim or offender is an Indian, or whether the alleged crime occurred in Indian country (for purposes of determining jurisdiction at the time of arrest), a tribal officer deputized to enforce federal law is not charged with determining the appropriate jurisdiction for filing charges; rather, this is to be determined by the prosecutor or court to which the arresting officer delivers the offender. Lastly, among other provisions, TLOA explicitly authorizes and encourages the appointment of qualified attorneys, including tribal prosecutors, as Special Assistant U.S. Attorneys (SAUSA) to assist in the prosecution of federal offenses and administration of justice in Indian country. If appointed as a SAUSA, a tribal prosecutor may pursue in federal court an Indian country criminal matter with federal jurisdiction that, if successful, could result in the convicted defendant receiving a sentence greater than if the matter had been prosecuted in tribal court. According to the Associate Attorney General, many tribal prosecutors have valuable experience and expertise that DOJ can draw on to prosecute crime and enforce federal criminal law in Indian country. Further, tribal prosecutors at 4 of the 12 tribes we visited are in varying stages of obtaining SAUSA credentials. 
The Chief Prosecutor at a New Mexico pueblo who is in the process of obtaining a SAUSA credential cited various benefits arising from a SAUSA appointment, including increased: (1) prosecution of criminal cases that involve domestic violence and child sexual abuse; (2) prosecution of misdemeanor-level offenses committed by non-Indians against Indians that occur in Indian country; (3) ability to directly present criminal investigations to the district USAO rather than relying solely on BIA criminal investigators to do so; and (4) cooperation from tribal crime victims and witnesses, who may be more forthcoming with someone closely affiliated with the pueblo than with federal investigators or prosecutors, thereby helping to facilitate a more successful investigation and prosecution of a federal crime. TLOA provides that federal investigators and prosecutors must coordinate with tribes to communicate the status of investigations and prosecutions relating to alleged criminal offenses in Indian country. More specifically, if a federal entity terminates an investigation, or if a USAO declines to prosecute or terminates a prosecution of an alleged violation of federal criminal law in Indian country, the entity must coordinate with the appropriate tribal officials regarding the status of the investigation and the use of evidence relevant to the case in a tribal court with authority over the crime alleged. Individually and collectively, these requirements could better enable tribes to prosecute criminal matters in tribal court within their statutes of limitations. Although TLOA does not prescribe how coordination is to occur between federal entities—such as FBI and BIA criminal investigators—and tribes, DOJ directed relevant USAOs to work with tribes to establish protocols for coordinating with tribes. For example, the USAO for the District of Arizona, in consultation with Arizona tribes, has established protocols to guide its coordination with tribes. 
Specifically, within 30 days of a referral of a criminal investigation for prosecution, the Arizona district USAO plans to notify the relevant tribe in writing if the office is declining to prosecute the matter. Officials from one of the New Mexico pueblos we visited explained that they would like to have an entrance conference with the USAO for the District of New Mexico on each criminal investigation that is referred to the USAO for which the tribe has concurrent jurisdiction and an exit conference to discuss the USAO’s reasons for declining to prosecute the crime. Tribal officials explained that the exit conference could serve to educate the tribe about what it can do to better prepare an investigation for referral to the USAO. According to DOJ, each USAO and FBI field office will make efforts to reach agreements with tribes in their jurisdiction about communicating the status of investigations and prosecutions based on the unique needs of the tribe. Pursuant to TLOA, on November 26, 2010, the Bureau of Prisons (BOP) launched a 4-year pilot program to house, at the federal government’s expense, up to 100 Indian offenders convicted of violent crimes in tribal courts and sentenced to terms of imprisonment of 2 or more years. DOJ considers the pilot program to be an important step in addressing violent offenders and underresourced correctional facilities in Indian country. BOP’s goal is to reduce future criminal activity of Indian offenders by providing them with access to a range of programs, such as vocational training and substance abuse treatment programs, that are designed to help offenders successfully reenter their communities following release from prison. It is unlikely that 5 of the 12 tribes we visited will immediately begin participating in the pilot because they are not yet positioned to fully meet the conditions that are required to imprison Indian offenders convicted in tribal court for 2 or more years. 
Additionally, tribal officials expressed concern about placing convicted Indian offenders in federal prison because tribal members would likely oppose having tribal members sent to locations that are not in close proximity to the reservation, making it difficult for family members to visit and for the convicted Indian offender to maintain a connection with the tribal community—a key aspect of tribes’ culture and values. While tribes expressed concern about the placement of tribal members in federal prison, officials from 2 of the tribes we visited stated that access to federal programs, such as substance abuse and mental health treatment programs and job training, would be a major benefit that offenders would likely not have while imprisoned in tribal detention facilities. More broadly, TLOA requires that BIA, in coordination with DOJ and in consultation with tribal leaders, law enforcement, and correctional officers, submit to Congress a long-term plan to address incarceration in Indian country by July 29, 2011. The long-term plan should also describe proposed activities for constructing, operating, and maintaining juvenile and adult detention facilities in Indian country; constructing federal detention facilities in Indian country; contracting with state and local detention centers upon the tribe’s approval; and developing alternatives to incarceration in cooperation with tribal court systems. BIA and DOJ officials noted that they have begun to conduct consultations with tribal entities to address incarceration in Indian country. BIA has taken steps to help increase awareness about the importance and significance of judicial independence in tribal communities. 
For example, officials from one of the tribes we visited told us that, at the request of the tribal court, the BIA Superintendent is to conduct a workshop for tribal leaders and community members to, among other things, provide instruction on how interference with the tribal court’s decisions can threaten the judiciary’s ability to provide equitable adjudication of crimes. Further, BIA’s Division of Tribal Justice Support for Courts has conducted similar workshops in the past and expects to do so again in fiscal year 2011. According to BIA and DOJ officials, the two agencies have begun to establish interagency coordinating bodies intended to facilitate the agencies’ efforts to coordinate on tribal court and detention initiatives. Officials noted that because Indian country issues are a top priority across the federal government, federal departments and agencies are focused on ensuring that, where appropriate, they work together to address the needs of Indian tribes. For example, when DOI and DOJ developed tribal consultation plans for their respective agencies in 2010, the two agencies cited interagency coordination as a key element to meeting the tribes’ needs. According to DOJ, interagency coordination is essential to holding stakeholders accountable and achieving success. Similarly, DOI acknowledged the importance of collaborating and coordinating with its federal partners regarding issues that affect tribes. BIA and DOJ officials told us that communication between the two agencies has increased and their staff now know whom to call about various tribal justice issues, which they commented is a significant improvement over prior years when there was little to no communication. For example, DOJ has begun to consult BIA about its future plans to fund the construction of tribal correctional facilities, which has helped to resolve past inefficiencies. 
BIA officials told us that they need to know at least 2 years in advance which tribes DOJ plans to award grants to construct correctional facilities so that they can adjust their budget and operational plans accordingly in order to fulfill their obligation to staff, operate, and maintain detention facilities. According to BIA, there have been instances where it was unaware of DOJ's plans to award grant funds to tribes to construct tribal detention facilities, which could result in new facilities remaining vacant until BIA is able to secure funding to operate them. DOJ has implemented a process whereby, when tribes apply for DOJ grants to construct correctional facilities, DOJ consults BIA about each applicant's needs, as BIA typically has firsthand knowledge about a tribe's need for a correctional facility and whether the tribe has the infrastructure to support one, among other things. BIA then prioritizes the list of applicants based on its knowledge of the tribes' detention needs. DOJ officials noted that the decision about which tribes to award grants to rests solely with DOJ; however, they do weigh BIA's input about the tribes' need for and capacity to utilize a correctional facility when making grant award decisions. To help BIA anticipate future operations and maintenance costs for new tribal correctional facilities, each year DOJ's Bureau of Justice Assistance (BJA) provides BIA with a list of planned correctional facilities that includes the site location, size, and completion date. BIA officials noted that this level of coordination with DOJ is an improvement over past years, as it helps to facilitate planning and ensure that BIA is prepared to assume responsibility for staffing, operating, and maintaining tribal detention facilities. 
BIA and BJA also serve on a governmentwide coordinating body, the Planning Alternatives and Correctional Institutions for Indian Country Advisory Committee, which brings together federal stakeholders who play a role in planning detention and correctional programs and facilities in Indian country. The advisory committee is responsible for developing strategic approaches to plan the training and technical assistance that BJA provides to tribes that receive grant funding to construct or renovate juvenile and adult correctional facilities. Specifically, among other things, the agencies work together to plan the training and technical assistance to be delivered to tribes on issues such as alternatives to help control and prevent jail overcrowding, controlling the costs of developing and operating detention facilities, developing alternatives to incarceration, and implementing substance abuse and mental health treatment programs at correctional facilities. According to DOJ officials, the advisory committee helps to provide a coordinated federal response that leverages the full scope of agency resources needed to deliver services that meet the tribes' needs. BIA and DOJ officials have committed to working together to help meet the two agencies' shared goal of addressing the criminal justice crisis in Indian country. To that end, in 2009, DOI, through BIA, and DOJ established both department-level and program-level coordinating bodies to increase communication and information exchange between the two agencies. At the department level, the Deputy Attorney General and the Deputy Secretary of the Interior jointly chair a working group that meets quarterly to facilitate governmentwide policymaking on tribal justice issues and coordinate agency activities on a range of tribal justice issues that are designed to help BIA and DOJ achieve their individual and shared goal of improving public safety in Indian country. 
For example, the working group is to oversee BIA's and DOJ's efforts to assess the needs of tribal correctional and tribal court systems and to develop strategies such as prisoner reentry programs in Indian country. In addition, the working group will oversee the implementation of various provisions included in TLOA, such as assessing the effectiveness of the enhanced sentencing authority that tribal courts may exercise. At the program level, in 2009, BIA and DOJ established task forces to address key issues including tribal judicial systems and tribal detention, among other issues. The task forces, which report to the department-level working group, are chaired by senior officials from BIA and DOJ and serve as a forum for the two agencies to, where appropriate, jointly address a range of public safety and justice issues in Indian country. For example, as part of the detention task force, BIA and DOJ officials are now working together, in consultation with tribes, to identify alternatives to incarceration in Indian country. According to BIA and DOJ officials, the task forces' activities are to, among other things, support the activities of the department-level working group. For example, the work conducted by the task forces is intended to help facilitate the two agencies' efforts to develop a long-term plan for submission to Congress in July 2011 that includes proposals on how to address juvenile and adult detention facilities. Although BIA and DOJ have taken action to coordinate their activities, officials told us that the agencies' coordination efforts are in the early stages of development, and it is therefore too early to gauge how effective these efforts will be against six of the eight practices that we have identified for ensuring that collaborating agencies conduct their work in a coordinated manner. 
We found that the two agencies have defined a common outcome—improving public safety and justice in Indian country—which is one of the eight practices that we have identified for enhancing and maintaining effective collaboration among federal agencies. In previous work, we have reported that it is a good practice for agencies to have a clearly defined outcome, as doing so can help align specific goals across agencies and help overcome differences in agency missions, cultures, and established ways of doing business. Officials told us that as they work toward defining approaches to achieve their common goal, there could be a need to take a more strategic approach that incorporates the key collaboration practices we have identified to help achieve sustainable interagency coordination. To that end, BIA officials told us that in January 2011 they expect to deploy a liaison to DOJ's Office of Tribal Justice to help foster ongoing, sustainable collaboration between the two agencies. The BIA liaison is to work with staff from various DOJ components as the two agencies develop and execute coordinated plans to implement various provisions in TLOA regarding tribal detention and tribal courts, among other tribal justice initiatives. To meet their respective responsibilities to support tribal courts, BIA and DOJ provide funding, training, and technical assistance to tribal courts; however, the two agencies do not leverage each other's resources—one of the eight collaboration practices that we have identified—by sharing certain relevant information that could benefit each agency's efforts to enhance the capacity of tribal courts to effectively administer justice in Indian country. 
In October 2009, DOJ told the leadership of the Senate Indian Affairs Committee that it was taking action to better coordinate with DOI to ensure that the two agencies' tribal courts initiatives are coordinated to develop and support tribal courts and help them build the capacity needed to exercise the enhanced sentencing authority proposed for tribes under TLOA. However, when we met with OJP and BIA program officials in October 2010 and November 2010, respectively, they noted that the information sharing and coordination mechanisms that are in place to support tribal detention initiatives have not extended to tribal courts initiatives. For example: Since 2005, BIA has commissioned reviews of about 90 tribal court systems that include the collection of data such as court funding and operating budgets, training needs for court clerks and judges, and technical assistance needs such as developing and maintaining a complete tribal criminal code. DOJ officials told us that they were vaguely aware of these court reviews but stated that they had never seen the reviews or the accompanying corrective action plans. BIA officials told us that DOJ had never requested the court reviews or corrective action plans and that they had never shared this information with DOJ. BIA officials stated that they were aware that DOJ awards competitive grants to tribal courts; however, DOJ does not share information with BIA about which tribal courts have applied for DOJ grants to establish new or enhance existing tribal court systems. BIA officials noted that DOJ could benefit from BIA's insights and firsthand knowledge about the needs of tribal courts, including those tribal courts that BIA has identified as having the greatest need for additional funding. 
Further, BIA officials noted that they were unaware of the training and technical assistance that DOJ provides to tribal courts and that there could be unnecessary duplication in the training and technical assistance that both agencies provide, as well as inefficient use of scarce resources. For example, according to BIA, there was an instance where DOJ and BIA provided funding to a tribe to purchase the hardware and software for a case management system, but neither agency consulted the other about the purchase. Ultimately, the tribe did not have any funds to purchase software training and, as a result, never used the system. Sharing information about training and technical assistance could help ensure that BIA and DOJ avoid such situations. DOJ officials stated that they frequently hear concerns from tribes that tribal courts lack the funds needed to operate effectively; however, DOJ does not have direct access to information about the funding that BIA provides to tribal courts. According to DOJ officials, gaining access to BIA's annual funding data could be useful in DOJ's efforts to implement a more strategic approach to meeting the needs of tribal courts. Specifically, officials told us that data on annual funding to tribal courts could help DOJ first establish a baseline, then conduct a needs assessment to identify overall needs, and then use that information to identify what additional funding, if any, is needed to close the gap between the baseline and the overall resource need. We have previously reported that collaborating agencies are most effective when they look for opportunities to leverage each other's resources, thereby obtaining benefits that may not otherwise be available if the agencies work separately. 
Further, Standards for Internal Control in the Federal Government calls for agencies to enhance their effectiveness by obtaining information from external stakeholders that may have a significant impact on the agency's achieving its goals. Developing mechanisms for identifying and sharing information and resources related to tribal courts could yield potential benefits in terms of leveraging efforts already underway and minimizing the potential for unnecessary duplication in federal agencies' efforts to support tribal courts. Moreover, by sharing information and resources, BIA and DOJ could achieve additional benefits that result from the different levels of expertise and capacity that each agency brings. BIA and DOJ officials acknowledged that the two agencies could benefit from working together to share information and leverage resources to address the needs of tribal courts and stated that they would begin taking steps to do so. Because responsibility for enhancing the capacity of tribal courts is shared between two key federal agencies—DOI and DOJ—effective collaboration is important to operating efficiently and effectively and to producing a greater public benefit than if the agencies acted alone. Although the two agencies each have information regarding tribal courts that could benefit the other, they have not fully shared that information with each other. As a result, they have missed opportunities to share information that could be used to better inform decisions about funding and the development of training and technical assistance that meets the tribes' needs. Developing mechanisms for better sharing information about tribal courts could help the agencies ensure they are targeting limited federal funds to effectively and efficiently meet the needs of federally recognized tribes. 
To maximize the efficiency and effectiveness of each agency's efforts to support tribal courts by increasing interagency coordination and improving information sharing, we recommend that the Attorney General and the Secretary of the Interior direct DOJ's Office of Justice Programs and BIA's Office of Justice Services, respectively, to work together to develop mechanisms, using GAO collaboration practices as a guide, to identify and share information and resources related to tribal courts. We provided a draft of this report to DOI and DOJ for review and comment. The DOI audit liaison stated in an e-mail response received on January 25, 2011, that DOI agreed with the report's findings and concurred with our recommendation; however, DOI did not provide written comments to include in our report. DOJ provided written comments, which are reproduced in appendix IV. DOJ concurred with our recommendation and noted that OJP's Bureau of Justice Assistance has begun discussions with BIA's Office of Justice Services about plans to, among other things, coordinate training activities and share funding information regarding tribal courts. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Attorney General of the United States, the Secretary of the Interior, and appropriate congressional committees. This report will also be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. 
We were asked to review the challenges facing selected tribal justice systems as well as federal agencies’ efforts to coordinate their activities to support tribal justice systems. Specifically, we prepared this report to answer the following questions: 1. What challenges do tribes face in adjudicating Indian country crimes and what federal efforts exist to help address those challenges? 2. To what extent have the Department of the Interior (DOI) and Department of Justice (DOJ) components collaborated with each other to support tribal justice systems? To identify the challenges facing tribes in adjudicating criminal matters in Indian country and what federal efforts exist to help address those challenges, we met with tribal justice officials such as judges, prosecutors, law enforcement officers, and court administrators from a nonprobability sample of 12 federally recognized tribes in Arizona, New Mexico, North Dakota, and South Dakota. We selected the tribes based on several considerations. First, we identified the U.S. Attorney district offices that received the largest volume of Indian country criminal matters from fiscal years 2004 through 2008, the five most recent years of available data at the time we conducted our selection. We interviewed DOJ officials about the data-entry process, performed electronic testing for obvious errors in accuracy and completeness of the data, and reviewed database documentation to determine that the data were sufficiently reliable for the purpose of our review. Next, we considered a variety of factors including (1) reservation land size, (2) population, (3) types of tribal court structures, (4) number and type of courts, and (5) number of full-time judicial personnel such as judges and prosecutors. The selected tribes have a range of land and population size, court size, and tribal court structures such as traditional and modern court systems. 
We also obtained documentation on the tribal courts' operations, caseloads, and funding. Because we are providing the caseload and funding data for informational purposes only, we did not assess the reliability of the data we obtained from the tribes. Additionally, we obtained the tribes' perspectives on the federal process to communicate declination decisions. In light of the public safety and justice issues underlying the requests for this work and the focus in the Tribal Law and Order Act of 2010 (TLOA) on criminal matters, we focused on criminal rather than civil law matters during the course of this review. While the results of these interviews cannot be generalized to reflect the views of all federally recognized tribes across the United States, the interviews provided us with useful information on the perspectives of various tribes about the challenges they face in adjudicating criminal matters. Additionally, we identified federal efforts to help support tribal efforts to adjudicate criminal matters in Indian country based on new or amended statutory provisions enacted through TLOA. We also interviewed cognizant officials from the Bureau of Indian Affairs and various DOJ components, such as the Federal Bureau of Investigation, the Executive Office of U.S. Attorneys, and select U.S. Attorneys' Offices, to obtain information about their efforts to implement TLOA provisions to help address the challenges facing tribes in administering justice in Indian country. To determine the extent to which DOI and DOJ collaborate with each other to support public safety and justice in tribal communities, we first compared the agencies' efforts against criteria in Standards for Internal Control in the Federal Government, which holds that agencies are to share information with external stakeholders that can affect the organization's ability to achieve its goals. 
Next, we identified practices that our previous work indicated can enhance and sustain collaboration among federal agencies and assessed whether DOI's and DOJ's interagency coordination efforts reflected consideration of those practices. For purposes of this report, we define collaboration as any joint activity by two or more organizations that is intended to produce more public value than could be produced when the organizations act alone. We use the term "collaboration" broadly to include interagency activities that others have defined as cooperation, coordination, integration, or networking. The eight practices we identified to enhance and sustain collaboration are as follows: (1) define and articulate a common goal; (2) establish mutually reinforcing or joint strategies to achieve that goal; (3) identify and address needs by leveraging resources; (4) agree on roles and responsibilities; (5) establish compatible policies, procedures, and other means to operate; (6) develop mechanisms to monitor, evaluate, and report on results; (7) reinforce agency accountability for collaborative efforts through agency plans and reports; and (8) reinforce individual accountability for collaborative efforts through performance management systems. In this report, we focused on two of the eight practices—defining and articulating a common goal and identifying and addressing needs by leveraging resources—that we previously identified for enhancing and maintaining effective collaboration among federal agencies. We did not address the remaining six practices because we found that DOI and DOJ were still in the early stages of implementing these two practices, which serve as the foundation for the remaining practices. 
For example, because collaboration activities are in the early stages of development and the agencies have not yet established joint strategies to achieve the goal of enhancing the capacity of tribal courts, we did not expect the agencies to have developed mechanisms to monitor and report on the results of their collaboration, reinforce accountability by preparing reports, or establish performance management systems. We selected examples that, in our best judgment, clearly illustrated and strongly supported the need for improvement in specific areas where the key practices could be implemented. We met with officials from DOI and various DOJ components, such as the Office of Tribal Justice and the Office of Justice Programs, to discuss the mechanisms they have put in place to enhance and sustain collaboration between the two agencies. We conducted this performance audit from September 2009 through February 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The exercise of criminal jurisdiction in Indian country depends on several factors, including the nature of the crime, the status of the alleged offender and victim (that is, whether they are Indian or not), and whether jurisdiction has been conferred on a particular entity by, for example, federal treaty or statute. As a general principle, the federal government recognizes Indian tribes as "distinct, independent political communities" with inherent powers of self-government to regulate their "internal and social relations," which includes enacting substantive law over internal matters and enforcing that law in their own forums. 
The federal government, however, has plenary and exclusive authority to regulate or modify the powers of self-government the tribes otherwise possess, and it has exercised this authority to establish an intricate web of jurisdiction over crime in Indian country. Enacted in 1817, the General Crimes Act (also referred to as the Federal Enclaves Act or the Indian Country Crimes Act), as amended, established federal criminal jurisdiction in Indian country over cases where either the alleged offender or the victim is Indian. It did not, for example, establish federal jurisdiction over cases where both parties are Indian and, in effect, left jurisdiction over cases where both parties are non-Indian to the state. Enacted in 1885, the Major Crimes Act extended federal criminal jurisdiction in Indian country to Indians who committed so-called "major crimes," regardless of the victim's status. As amended, the Major Crimes Act provides the federal government with criminal jurisdiction over Indians charged with the felony-level offenses enumerated in the statute. The tribes retained exclusive jurisdiction over other criminal offenses (generally, misdemeanor-level offenses) where both parties are Indian. State governments, however, may not exercise criminal jurisdiction over Indians or their property in Indian country absent a "clear and unequivocal grant of that authority" by federal treaty or statute. Enacted in 1953, Public Law 280 represents one example of a "clear and unequivocal" grant of state criminal jurisdiction. As amended, Public Law 280 confers exclusive criminal jurisdiction over offenses committed in Indian country to the governments of six states—Alaska, California, Minnesota, Nebraska, Oregon, and Wisconsin—except as specified by statute, thereby waiving federal jurisdiction under the General and Major Crimes acts in these states and subjecting Indians to prosecution in state court. 
Subsequent amendments to Public Law 280 and other laws further define state criminal jurisdiction in Indian country. To summarize the foregoing discussion, the exercise of criminal jurisdiction by state governments in Indian country is generally limited to two instances, both predicated on the offense occurring within the borders of the state—where both the alleged offender and victim are non-Indian, or where a federal treaty or statute confers, or authorizes a state to assume, criminal jurisdiction over Indians in Indian country. Otherwise, jurisdiction is distributed between federal and tribal governments. Where both parties to the crime are Indian, the tribe generally has exclusive jurisdiction for misdemeanor-level offenses, but its jurisdiction runs concurrent with the federal government for felony-level offenses. Where the alleged offender is Indian but the victim is non-Indian, tribal and federal jurisdiction is generally concurrent. Finally, federal jurisdiction is exclusive where the alleged offender is non-Indian and the victim is Indian. When a tribal government exercises its jurisdiction to prosecute an Indian offender, it must do so in accordance with the Indian Civil Rights Act (ICRA). Enacted in 1968, ICRA limited the extent to which tribes may exercise their powers of self-government by imposing conditions on tribal governments similar to those found in the Bill of Rights to the U.S. Constitution. For example, the act extended the protections of free speech, free exercise of religion, and due process and equal protection under tribal laws. With respect to alleged criminal conduct, tribes are prohibited from trying a person twice for the same offense (double jeopardy), compelling an accused to testify against himself or herself in a criminal case, and imposing excessive fines or inflicting cruel and unusual punishment. 
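As an illustrative aid, the general allocation of criminal jurisdiction summarized above can be sketched as a simple decision function. This is our own simplification, not an agency tool or statutory text: it ignores Public Law 280 states, treaty-specific grants, and other exceptions noted in the discussion, and the function and parameter names are hypothetical.

```python
# Illustrative sketch only (our own names and structure): encodes the general
# allocation of criminal jurisdiction in Indian country summarized above,
# ignoring Public Law 280 states and treaty-specific exceptions.

def criminal_jurisdiction(offender_is_indian: bool,
                          victim_is_indian: bool,
                          is_major_crime: bool) -> set:
    """Return the set of sovereigns that generally have jurisdiction."""
    if not offender_is_indian and not victim_is_indian:
        return {"state"}                      # both parties non-Indian
    if offender_is_indian and victim_is_indian:
        if is_major_crime:
            return {"tribal", "federal"}      # concurrent for felony-level offenses
        return {"tribal"}                     # tribe exclusive for misdemeanor-level offenses
    if offender_is_indian:                    # Indian offender, non-Indian victim
        return {"tribal", "federal"}          # generally concurrent
    return {"federal"}                        # non-Indian offender, Indian victim
```

For instance, under this simplified rule, a misdemeanor-level offense between two Indian parties falls to the tribe alone, while a felony-level offense between the same parties is shared with the federal government.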
Tribes must also afford a defendant the rights to a speedy and public trial, to be informed of the nature and cause of the accusation, to be confronted by the witnesses of the prosecution, to have compulsory process for witnesses in his favor, and to be represented by counsel at his own expense, among other things. ICRA also governs the sentencing authority tribes exercise over convicted Indian offenders. First, any person accused of an offense punishable by imprisonment has the right, upon request, to a trial by a jury of not less than six persons. Second, the act limits the maximum sentence a tribe may impose. Prior to amendments made by the Tribal Law and Order Act in July 2010, ICRA limited the maximum sentence for any one offense to a term of 1 year of imprisonment, a $5,000 fine, or both, regardless of the severity of the alleged offense. The July 2010 amendments, however, authorize tribal courts to impose sentences in excess of 1 year of imprisonment or a $5,000 fine if the tribe affords the defendant certain additional protections specified in the statute. Specifically, a tribal court may subject a defendant to a maximum term of imprisonment of 3 years (or a fine not to exceed $15,000, or both) for any one offense if the defendant had been previously convicted of the same or a comparable offense by any jurisdiction in the United States, or the defendant was prosecuted for an offense comparable to one punishable by more than 1 year of imprisonment if prosecuted by the United States or any of the states. 
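The per-offense sentencing caps just described can be sketched as a small rule. This is our own illustrative sketch, not statutory text: the function and parameter names are hypothetical, and the `protections_afforded` flag stands in for the additional protections that the statute requires the tribe to provide.

```python
# Illustrative sketch only (our own names and structure): per-offense ICRA
# sentencing caps, before and after the July 2010 TLOA amendments. The
# protections_afforded flag summarizes the statute's additional requirements.

def icra_sentencing_cap(prior_conviction: bool,
                        comparable_felony: bool,
                        protections_afforded: bool) -> tuple:
    """Return (max_years, max_fine_dollars) a tribal court may impose per offense."""
    eligible = prior_conviction or comparable_felony
    if eligible and protections_afforded:
        return (3, 15_000)   # enhanced cap under the 2010 amendments
    return (1, 5_000)        # baseline ICRA cap
```

Note that under this sketch the enhanced cap applies only when one of the two statutory conditions is met and the tribe affords the additional protections; otherwise the baseline 1-year/$5,000 limit governs.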
To exercise this enhanced sentencing authority, the tribe must afford a criminal defendant the following additional protections: provide the defendant the effective assistance of counsel; if the defendant is indigent, provide the assistance of a licensed defense attorney appointed at the tribe's expense; provide a presiding judge with sufficient legal training and a license to practice law; prior to charging the defendant, make publicly available the tribal government's criminal laws and rules of evidence and criminal procedure; and maintain a record (audio or otherwise) of the criminal proceeding. Finally, although ICRA protects alleged offenders from double jeopardy in tribal courts, neither the federal government nor the tribal government is precluded from pursuing a prosecution if the other sovereign elects to prosecute the case. Thus, for example, a criminal defendant prosecuted in tribal court may still face prosecution, and a potentially more severe sentence if convicted, in federal court. This appendix summarizes information regarding the court systems of the 12 tribes we visited in Arizona, New Mexico, North Dakota, and South Dakota. Specifically, in Arizona, we visited the Gila River Indian Community, the Navajo Nation, and the Tohono O'odham Nation. In New Mexico, we visited the Pueblos of Isleta, Laguna, Pojoaque, and Taos. In North Dakota, we visited the Standing Rock Sioux and Three Affiliated Tribes. Lastly, in South Dakota, we visited the Cheyenne River Sioux, Oglala Sioux, and Rosebud Sioux tribes. The 12 tribes that we visited ranged in enrollment from 417 to nearly 300,000 members. Tribal enrollment data showed that for 9 of the 12 tribes we visited, more than 50 percent of the enrolled members live on the reservation. 
This appendix includes individual summaries for each tribe that describe: (1) land area and population data, (2) establishment of the court system, (3) availability of the tribal code and court rules and procedures, (4) structure of the court system, (5) selection and removal of judges as well as requisite qualifications, (6) judicial personnel and court staff, (7) caseload levels, and (8) funding information. The Cheyenne River Indian Reservation of the Cheyenne River Sioux Tribe covers 4,410 square miles in north-central South Dakota, as shown in figure 4, and is between Delaware and Connecticut in size. Of the estimated 16,622 enrolled members of the tribe, an estimated 8,000 live on the reservation. The Cheyenne River Sioux Tribe's constitution, which was adopted in 1935, assigned the duty of establishing a court to the Tribal Council. The court system was established in the late 1930s. Tribal officials stated that the tribe's judiciary is a separate branch of government. Further, a 1992 amendment to the constitution stated that decisions of tribal courts shall not be subject to review by the Tribal Council. Officials noted that the Judiciary and Codification Committee of the Tribal Council and the Chief Judge, among others, oversee the operations of the tribal court. The Cheyenne River Sioux Tribe's Law and Order Code, established in 1978, has been amended a number of times and is available in electronic format, according to officials. The Chief Judge reported that the Law and Order Code is modeled after South Dakota laws. The Tribal Council's Judiciary and Codification Committee is responsible for updating the criminal code. Additionally, members of the tribal court and the tribe's legal department assist the Committee in updating the code. 
According to officials, the tribe follows the federal rules of evidence and has adopted rules of criminal and civil procedure, as well as a Code of Judicial Conduct, that are modeled after those of federal and state courts. The Cheyenne River Sioux Tribe's court system is composed of a tribal court, a juvenile court, a mediation court, and an appellate court. Tribal officials consider the court system to be modern, though the mediation court incorporates some traditional practices that promote tribal traditions and values to resolve disputes. In 1992, according to tribal officials, the tribe's constitution was amended to include a provision stating that decisions of the tribal court may be appealed to the tribe's appellate court but shall not be subject to review by the Tribal Council. Tribal judges are elected by voting members of the tribe and must (1) be a member of the Cheyenne River Sioux Tribe, (2) have resided on the reservation for 1 year preceding the election, and (3) be over 25 years of age. We were not able to obtain complete information about the required qualifications for judges and the tribe's process to select and remove judges. Information about judicial personnel and court staff is not reported, as we were not able to obtain complete information from the tribe. Data about the court's caseload for fiscal years 2008 through 2010 are not included, as we were not able to obtain complete information from the tribe. BIA reported that for fiscal years 2008 and 2009, it did not distribute any funding to the Cheyenne River Sioux Tribe specifically for tribal court programs. In fiscal year 2010, BIA distributed $190,503 to the tribe, but we were not able to obtain information from the tribe on how much of that funding was allocated to tribal court programs. Further, DOJ did not award any grant funding to the Cheyenne River Sioux Tribe as part of its Tribal Court Assistance Program (TCAP) for fiscal years 2008 through 2010. 
The Gila River Indian Reservation covers 584 square miles in Arizona, and is between the District of Columbia and Rhode Island in size. Of the estimated 20,590 enrolled members of the tribe, approximately 82 percent, or 16,783, live on the reservation. The Gila River Indian Community’s constitution, adopted in 1960, authorized but did not establish a court system or articulate its jurisdiction or powers, leaving this to the Tribal Council. Although the council exercised its authority to establish a court system, there is no formal document marking when this occurred. The tribe has efforts underway to adopt a revised constitution, which seeks to establish a separate judicial branch that is autonomous and independent of other branches of the tribal government. The draft constitution calls for a court system that is composed of a tribal court known as the Community Court, a Supreme Court, and other lower courts, including forums for traditional dispute resolution, as deemed necessary by the legislature. Gila River Indian Community has civil, criminal, traffic, and children’s codes. Officials noted that the current criminal code may not be adequate to address new uses of technology to commit crime. The children’s code was most recently revised in 2010 and now addresses gang-related offenses, according to officials. Some procedural guidance is provided by legislation, but the tribal court does not have formal rules of criminal procedure since the court has not been granted authority to promulgate such rules. However, officials explained that the tribal court has developed an administrative order and understanding between parties for some rules. The court has not established rules of evidence, although it will occasionally incorporate state or federal rules of evidence as permitted by the criminal code. Officials described the court as modern because it is modeled after the state of Arizona’s judicial system. 
The court system is composed of a tribal court, children’s court, and appellate court. The children’s court was officially established by statute in 1983. Gila River has two courthouses: a main court located in Sacaton, Arizona, and another located in Laveen, Arizona. The Chief Judge and five Associate Judges are elected by tribal members to the general jurisdiction court for 3-year terms. Additionally, two judges are appointed to the children’s court by the Tribal Council for 4-year terms. The general jurisdiction court consists of six elected judicial positions with all judges up for election at the same time. Judges must be members of the tribe and be at least 25 years old, among other requirements. Certain residency requirements must also be met. The Tribal Council can remove a judge from office for any reason it deems cause for removal. One of the eight judges in the tribal court is law-trained; however, there are no requirements that judges are to be law-trained or licensed by a state or tribal bar association. Public defenders and prosecutors are required to be law-trained and licensed by a state bar association. The tribe has six public defenders and nine prosecutors. Criminal cases account for the majority of the tribal court’s caseload. For fiscal years 2008 through 2010, the tribal government funded at least 90 percent of the Gila River Indian Community Court, and the court did not receive any funding from BIA. According to tribal court officials, the court was awarded $13,000 in fiscal years 2008 and 2009 through the Juvenile Accountability Block Grant (JABG)—a grant program that is administered by the Office of Juvenile Justice and Delinquency Prevention within DOJ. In fiscal year 2009, the tribal court was awarded $49,977 in grant funding under DOJ’s Justice and Mental Health Collaboration Program. Further, in fiscal year 2010, the Gila River court system was awarded $499,586 in grant funding as part of DOJ’s Coordinated Tribal Assistance Solicitation. 
The Pueblo of Isleta covers 331 square miles in New Mexico and is between the District of Columbia and Rhode Island in size. Of the estimated 3,496 enrolled members of the pueblo, 58 percent, or 2,013, live on the pueblo’s lands. The most recent revision to the constitution of the Pueblo of Isleta was adopted in 1991; however, according to tribal officials, Isleta has efforts underway to amend its constitution. In an effort to help address concerns about the court’s perceived lack of autonomy, according to Isleta officials, the Tribal Council established the Judicial Law and Order Committee to conduct a review of the constitution that includes examining the authorities of each branch of tribal government. The Pueblo of Isleta’s Law and Order Code was first adopted in 1965 and revised in 2008. The Tribal Council established a committee to recommend amendments regarding the code to the Council. The Pueblo of Isleta’s court system is composed of a tribal court and an appellate court. The tribal court is presided over by one or more judges and has jurisdiction over all criminal and civil matters articulated in the Law and Order Code. The majority of the court’s cases are adjudicated by applying federal or state law; however, the court seeks first to apply traditional law in cases where it may be applicable. The Tribal Council serves as the appellate court, and appeals are granted as a matter of right. However, the council may delegate its appellate authority to an appeal committee, appellate judge, or other appellate body established by the council. The constitution holds that all appeals decisions are final. Judges are appointed by the tribal governor with the concurrence of a two-thirds majority of the council. According to the constitution, the Tribal Council is to prescribe the qualifications and terms of office for judges. The constitution states that judges’ salaries may not be modified during the judges’ term in office. 
The council is currently drafting an ordinance establishing qualifications and salaries for judges. Those convicted of felonies are not eligible to serve as judges. Judges can be removed from office after a hearing and a two-thirds vote of the full council. Because of funding limitations, according to officials, criminal investigators also serve as tribal prosecutors. Data about the court’s caseload for 2008 through 2010 are not reported here as we were not able to obtain this information from the tribe. BIA told us that it distributed $76,923, $128,279, and $99,071 to the pueblo in fiscal years 2008, 2009, and 2010, respectively. We were not able to obtain information from the tribe on how much of the funding was provided to the tribal court. Our review of DOJ grants awarded under the Tribal Court Assistance Program showed that the Pueblo of Isleta did not receive any grant funding for tribal courts initiatives for fiscal years 2008 through 2010. The Pueblo of Laguna reservation covers 779 square miles in New Mexico and is between the District of Columbia and Rhode Island in size. Of the estimated 8,413 enrolled members in the pueblo, 4,315 live on or near the pueblo’s lands; Laguna’s total population, including nonpueblo members, is estimated at 5,352. The Pueblo of Laguna’s constitution, adopted in 1908, empowered the pueblo’s Governor and certain members of the Tribal Council to function as the pueblo’s court. A subsequent version of the constitution, adopted in 1949, maintained this judicial structure. In 1958, the pueblo amended its constitution and thereby vested the Pueblo’s judicial power in the Pueblo’s tribal court, and in 1984, another constitutional amendment vested the pueblo’s judicial power in the pueblo’s tribal court and in an appellate court. Currently, the pueblo’s Governor and certain members of the Tribal Council serve as the pueblo’s appellate court, according to tribal officials. 
The pueblo has a written criminal code that was enacted in 1999, according to officials. The Tribal Secretary is responsible for keeping ordinances enacted by the Tribal Council. Revisions to the criminal code were pending adoption by the Tribal Council as of October 2010. The pueblo is in the process of adopting rules of judicial conduct and criminal procedure. The Pueblo of Laguna’s court system combines aspects of modern and traditional courts. The court relies on the written codes and laws of the pueblo, but may also defer to the pueblo’s traditions, when possible. The pueblo’s court system includes a tribal court that adjudicates both civil and criminal matters, a juvenile court, and an appellate court that reviews cases from the lower courts. The appellate court is composed of the Governor and certain members of the Pueblo Council, though this composition of the appellate court is not provided for by constitution or code; rather, it is to be established by ordinances passed by the Pueblo Council. Judges must be law-trained, have a state bar license, and must have at least 1 year of judicial experience or related law practice, among other things. Judges are appointed by the Tribal Council for a term that does not exceed 3 years, and may be removed from office if convicted of a felony or if found to have grossly neglected the duties of the office. The Pueblo of Laguna’s court system employs one full-time contract judge and three part-time contract judges. In addition, the tribe employs two prosecutors and a public defender, among other staff. Traffic offenses, which are not reported in table 7 below, account for a large portion of the court’s activity and are considered criminal offenses. For example, there were 2,685 traffic cases opened in 2009. The Pueblo of Laguna court system’s main funding sources are the tribal government and funding from the BIA. 
Additionally, in fiscal year 2010 the Pueblo of Laguna was awarded $350,000 for tribal courts initiatives under DOJ’s Coordinated Tribal Assistance Solicitation grant program. The Navajo Nation’s land area totals 24,097 square miles and is mostly situated in Arizona, though its boundaries extend into parts of New Mexico and Utah. The reservation is between Maryland and West Virginia in size. Of the estimated 292,023 enrolled members of the Navajo Nation, approximately 234,124, or about 80 percent, live on the reservation. The Navajo Nation does not have a written constitution. However, the duties of the court system are documented in the Navajo Nation Codes. The tribal court was established in 1959. The Navajo Nation criminal code was created in 1959 and has been amended as necessary. The Legislative Council, within the legislative branch, is responsible for updating the code. The court system has rules of judicial conduct, criminal procedure, as well as rules of evidence. Officials described the Navajo Nation court system as a modern system that continues to embody Navajo customs and traditions. The Chief Justice is the administrator of the judicial branch, which consists of 10 District Courts, the Supreme Court of the Navajo Nation, and other courts that may be created by the Navajo Nation Council. The Navajo Nation Supreme Court comprises one Chief Justice and two Associate Justices. The President of the Navajo Nation appoints Judges and Justices, who serve a 2-year probationary period. The appointees are selected from a panel recommended by the Judicial Committee of the Navajo Nation Council. After 2 years, the Judicial Committee can recommend a permanent appointment. If the Judge or Justice is recommended, the President submits the name to the Navajo Nation Council for confirmation. There are no set term lengths; however, judges can be removed for cause. 
All judicial appointees must meet certain qualifications, including holding a higher education degree, preferably a law degree, and having work experience in law-related fields and a working knowledge of Navajo, state, and federal laws. Judges must be members of the Navajo Nation Bar Association. Only members in good standing with the Navajo Nation Bar Association, including public defenders and prosecutors, can provide legal representation in the court system. The data provided in table 9 below comprise caseload information from the 10 District Courts, Family Courts, Probation, Peacemaking, and Supreme Court. As shown in the table below, criminal offenses account for much of the court’s activity. The Navajo Nation judicial branch is funded primarily by the tribal government. It is important to note that the funding supports the operations of the 10 district courts, among other courts within the judicial branch of the Navajo Nation. The Pine Ridge Indian Reservation of the Oglala Sioux Tribe covers 3,466 square miles in southwest South Dakota, and is between Delaware and Connecticut in size. Of the estimated 47,000 enrolled members of the tribe, an estimated 29,000 Indian people live on the reservation. The Oglala Sioux Tribe’s court system was established by the tribe’s constitution in 1936. A 2008 amendment to the tribe’s constitution vests the tribe’s judicial power in one Supreme Court and in other inferior tribal courts established by the Tribal Council. As amended, the constitution provides that the tribe’s judiciary is independent from the legislative and executive branches of government. The Judiciary Committee of the Tribal Council oversees the administrative function of the court. In September 2002, the Oglala Sioux Tribal Council passed an ordinance to adopt its Criminal Offenses Code. In addition, the Oglala Sioux Tribe has adopted criminal procedures and court rules, which include a judicial code of ethics. 
According to court officials, the tribal court generally applies federal rules of evidence. Further, the Tribal Council, through its Judiciary Committee, is responsible for maintaining and updating the Criminal Offenses Code. The Oglala Sioux Tribe’s court system combines aspects of modern and traditional approaches to administer justice, and is composed of the Supreme Court, a tribal court, and a juvenile court. The Supreme Court has appellate jurisdiction, and is composed of a Chief Justice, two Associate Justices, and one Alternate Justice. Given the vast size of the reservation, the tribe operates two courthouses, which are located in Pine Ridge, South Dakota, and Kyle, South Dakota. The Oglala Sioux Tribe’s court system comprises a Chief Judge, associate judges, and Supreme Court justices. The Chief Judge, who oversees the inferior courts, must be law-trained and bar-licensed in any state or federal jurisdiction, and is elected by members of the tribe for a 4-year term. Justices of the Supreme Court must be law-trained and bar-licensed in any state or federal jurisdiction. They are appointed by the Tribal Council for 6-year terms. Any judge may be removed by a two-thirds vote of the Tribal Council for unethical judicial conduct, persistent failure to perform judicial duties, or gross misconduct that is clearly prejudicial to the administration of justice, among other things. The Oglala Sioux Tribe’s court system employed a Chief Judge, three associate judges, and two Supreme Court justices. The Oglala Sioux Attorney General’s Office employed four tribal prosecutors, one of whom is law-trained and bar-licensed. Officials estimated that in 2009, there were approximately 1,245 civil cases and 7,470 criminal cases. Additional data about the court’s caseload for fiscal years 2008 through 2010 are not reported as we were not able to obtain this information from the tribe. 
Based on data provided by the tribe, the Oglala Sioux court system did not receive any funding from the tribal government for fiscal years 2008 through 2010. Rather, the main source of funding was from BIA. The Pueblo of Pojoaque covers 21 square miles in New Mexico, and is smaller in size than the District of Columbia. Of the estimated 417 enrolled members of the pueblo, an estimated 325 enrolled members live on the pueblo’s lands. The Pueblo of Pojoaque has not adopted a constitution, and, according to a court official, the tribal government operates in a traditional manner. From 1932 to 1978, the Pueblo of Pojoaque’s Tribal Court operated according to tradition. For example, the pueblo’s Governor or the Tribal Council served as the tribal court. In 1978, the tribal code formally established a court system. There are no distinct branches of government within the Pueblo of Pojoaque and a court official stated that the Tribal Council does not intervene in individual cases before the court. When the tribal court has concerns about the direction of the Tribal Council regarding court matters, such concerns are discussed openly at Tribal Council meetings and resolutions are passed and incorporated in the Tribal Law and Order Code, as needed. According to a court official, the Pueblo of Pojoaque’s Tribal Law and Order Code was adopted in 1978. One of the court officials explained that the court’s judges are responsible for suggesting code revisions to the Tribal Council, and that the Tribal Council amends the code by resolutions. Further, complete copies of the Tribal Law and Order Code are made available through the court. The Tribal Law and Order Code includes a criminal code as well as basic rules of procedure and evidence as many of the parties appearing before the court typically advocate on their own behalf rather than being represented by an attorney. 
The court system has adopted rules of judicial conduct, and, pursuant to the law and order code, judges are permitted to defer to either state or federal rules of procedure or evidence, and, according to the Chief Judge, this option is often exercised when both parties appearing before the court have legal representation. The Pueblo of Pojoaque’s court system combines aspects of modern and traditional courts, and includes a tribal court, a juvenile court, and traditional methods of dispute resolution. The Tribal Council serves as the pueblo’s appellate court. The Pueblo of Pojoaque’s court system includes two types of judges—a Chief Judge and judges pro tempore—and the qualifications for these positions are identical. Judges are appointed by the Tribal Council and serve at the pleasure of the Pueblo Council and the Tribal Governor. Though there are no set educational requirements for judges, prospective judges who do not have a law degree must complete a specific training course in judicial proceedings within 6 months after being appointed as a judge. Age requirements and a background interview also apply. Given the small population of the pueblo, the Tribal Council prohibits judges, who are enrolled members of the pueblo, from hearing cases of other enrolled members, according to a court official. The Pueblo of Pojoaque court system employed one full-time Chief Judge, one part-time judge pro tempore; two contract judges pro tempore, as needed; one part-time court clerk; and one full-time court and traffic court clerk. Tribal police, who are not law-trained, serve as prosecutors. The caseload data reported below in table 11 does not reflect the number of civil and criminal matters that are resolved through traditional means and mediation. Traffic violations, which are not included in the table below, account for much of the court’s activity. For example, in 2009, there were 7,316 traffic citations docketed, of which 825 resulted in a court hearing. 
The Pueblo of Pojoaque court system’s main funding sources are the tribal government and BIA funding. Generally, for fiscal years 2009 and 2010, the BIA funding accounted for about 30 percent of the court’s total funding. The Rosebud Indian Reservation of the Rosebud Sioux Tribe covers 1,971 square miles in south-central South Dakota, as shown in figure 11 below, and is between Rhode Island and Delaware in size. Of the estimated 29,710 enrolled members of the tribe, approximately 85 percent, or 25,254, live on the reservation. The Rosebud Sioux Tribe’s court was established in 1975, according to officials, replacing the Court of Indian Offenses administered by BIA. A 2007 amendment to the tribe’s constitution, which was originally adopted in 1935, established the tribal court as separate and distinct from the legislative and executive branches of the tribal government and established the Rosebud Sioux Tribe Supreme Court as the tribe’s appellate court. The Tribal Council’s Judiciary Committee helps to oversee the administration of court. The Rosebud Sioux Tribe’s Law and Order Code was adopted in 1986 and is available by request from the Tribal Secretary’s office, although tribal court officials indicated that the status of the code has been an ongoing concern. The Law and Order Code contains a criminal code and rules of criminal procedure. Additionally, officials noted that the code adopts by reference federal rules of evidence and requires tribal judges to conform their conduct to the Code of Judicial Conduct as adopted by the American Bar Association. The Rosebud Sioux Tribe’s court system is composed of a tribal court, a juvenile court, a limited mediation court, and an appellate court. While the court applies traditional methods of dispute resolution, officials described the court system as mostly modern in that it is modeled on federal and state court systems and applies federal rules of evidence and judicial conduct. 
It is traditional in that the Law and Order Code, which the courts apply, contains references to tribal customs. Further, in some cases, tribal courts include interested community members in the court proceedings. For example, in some family disputes, members of the community such as family members or concerned citizens may participate in the court process even though they are not parties appearing before the court. Decisions of the tribal court and juvenile court are subject to appellate review by the Rosebud Sioux’s Supreme Court. The Supreme Court is composed of six justices, three of whom sit as a panel to hear a case. The Rosebud Sioux Tribe’s court system includes a Chief Judge, associate judges, and Supreme Court justices. The Chief Judge must be law-trained, bar-licensed, and admitted to practice before the U.S. District Court for South Dakota. The Chief Judge is appointed by the Tribal Council for a 4-year term. Associate judges are appointed by the Tribal Council for 2-year terms, and must have a high-school education or equivalent. Further, at least one associate judge must be bilingual in English and Lakota—the tribe’s traditional language. Of the three justices in an appellate panel, two must be law-trained, bar-licensed, and admitted to practice in the U.S. District Courts of South Dakota. One may be a lay judge who must have a high-school education or equivalent. Supreme Court justices are appointed by the Tribal Council for 5-year terms. Removal of any judge or justice must be for cause after a public hearing by the Tribal Council and by a two-thirds vote of Tribal Council members present at the hearing. As of October 2010, the Rosebud Sioux Tribe’s court system employed a Chief Judge, two associate judges—one law-trained but not bar-licensed, and the other a lay judge—and four Supreme Court justices. 
There is one law-trained, bar-licensed tribal prosecutor, an assistant prosecutor who works mainly in juvenile court, a public defender, and an assistant public defender who works mainly in juvenile court. Additionally, in fiscal year 2010, the tribe received a DOJ grant to fund three additional attorney positions, though tribal officials stated that these positions may be difficult to fill because of recruitment and retention challenges. Tribal officials stated that the number of prosecutors and public defenders is inadequate for the tribe’s caseload and affects the tribe’s ability to effectively administer justice. Criminal offenses account for much of the court’s caseload. Traffic violations are considered criminal offenses; however, they are not included in the data in the table below. Based on data provided by officials for fiscal years 2008 through 2010, the Rosebud Sioux Tribe court system is primarily funded by BIA, although the court received funding from other sources. The Standing Rock Reservation covers 3,654 square miles in south-central North Dakota and north-central South Dakota, and is between Connecticut and Delaware in size. Of the estimated 14,914 enrolled members of the tribe, 8,656 live on the reservation. The Standing Rock Sioux Tribe Constitution, adopted in 1959, empowers the Tribal Council to establish courts on the reservation and define those courts’ duties and powers. Exercising this constitutional authority, the Standing Rock Sioux Tribal Council established the tribal court system. Further, the constitution vests the tribe’s judicial authority in a Supreme Court and in a Tribal Court and specifies the process by which judges for these courts would be selected and removed, as described below. Subsequent amendments to the tribe’s constitution did not alter these provisions. The Standing Rock Sioux Tribe’s Code of Justice addresses criminal offenses, criminal procedure, and civil procedure, among other things. 
In addition, the Tribe’s Rules of Court include provisions regarding civil procedure, criminal procedure, and rules of evidence, among other things. However, court officials reported challenges in keeping the code current and stated that they do not have access to the entire code. The court system is composed of a tribal court, a children’s court, and a Supreme Court that has appellate jurisdiction over the tribe’s other courts. The Supreme Court is composed of a chief justice and two associate justices. The Code of Justice articulates the composition of the court as well as the qualifications, selection, and removal of judges. Specifically, the Supreme Court is to include a Chief Justice and Associate Justices. Additionally, the tribal court is to include a Chief Judge, Associate Chief Judge, and Associate Judges. The Chief Justice, Chief Judge, and Associate Chief Judge must be law-trained and bar-licensed. Associate justices and judges must have at least a high-school diploma or its equivalent. All justices and judges are appointed by the Tribal Council and face a retention election at the tribe’s next election. Justices and judges retained then serve 4-year terms and may be removed from office for cause by a two-thirds vote of the Tribal Council. The Standing Rock Sioux Tribe’s court system employed three appellate judges, four tribal court judges, six court clerks, two prosecutors, and one public defender, among other staff. Of the four tribal court judges, three are bar-licensed and one is law-trained but not bar-licensed. Of the three appellate judges, two are bar-licensed and one is a lay judge. Criminal offenses account for much of the court’s caseload. Traffic violations are considered criminal offenses; however, they are not included in the data in the table below. 
For fiscal years 2008 through 2010, the Standing Rock Sioux Tribal Court did not receive any funding from the tribal government and federal funding is the primary source of funding for the court, based on data provided by officials. The BIA funding has remained unchanged during this time. Additionally, officials told us that they received grant funding from the South Dakota Department of Corrections totaling $15,000 and $25,000 in fiscal years 2009 and 2010, respectively. The Pueblo of Taos covers 156 square miles north of Santa Fe, New Mexico, and is between the District of Columbia and Rhode Island in size. Of the estimated 2,500 enrolled members of the pueblo, approximately 1,800 members live on the pueblo’s lands. The Pueblo of Taos does not have a written constitution and has not established a separate judicial branch within its tribal government. Rather, according to officials, the pueblo has an unwritten social order that dates back to the pueblo’s origins and continues to be practiced and adhered to. Officials noted that they are exploring the possibility of establishing three distinct branches within the tribal government that would include a judicial branch. The Pueblo is governed by a Tribal Governor and a War Chief, both of whom are appointed by the Tribal Council for a 1-year term and operate the pueblo’s traditional courts. In 1986, the Tribal Council adopted the pueblo’s law and order code. Tribal officials explained that the tribal court is responsible for updating the criminal code and the Tribal Council approves amendments or revisions. The Pueblo has not fully revised the code since its adoption but has efforts underway to update and revise the criminal code. The tribal court does not have rules of judicial conduct or rules of evidence. However, the tribal court applies federal rules of evidence and New Mexico state rules regarding judicial conduct. 
Officials noted that rules of judicial conduct and rules of evidence are to be developed as part of the law and order code update. The code is available in hard copy only, and is generally made available to parties appearing before the court. Officials expect that the law and order code will be available in electronic format once revisions are completed. The Pueblo of Taos has two traditional courts and one tribal court. The Lieutenant Governor of the tribe serves as a Traditional Court Judge to hear both civil matters, such as contract violations, and family disputes. The War Chief also serves as a Traditional Court Judge and generally hears civil cases that involve disputes over land, natural resources, and fish and wildlife. The tribal court was established in the late 1980s to provide tribal members an alternative dispute resolution forum and to address the changes in the types of crimes being committed on the pueblo’s lands. Further, according to officials, the tribal court is intended to supplement rather than replace the traditional courts. Officials explained that tribal members may choose to have their case heard before the traditional or tribal court; however, once the case is filed with either court, the parties cannot then request a transfer to the other court. The Pueblo of Taos does not have an appellate court. However, appeals can be made to the Traditional Court Judge, usually the Lieutenant Governor, to challenge tribal court decisions. In the future, the Pueblo of Taos may use the Southwest Intertribal Court of Appeals. The Chief Judge is retained under contract, and the contract can be issued for up to 12 months. The Pueblo of Taos has not yet established requirements regarding selection, removal, and qualifications of judges, but expects to do so in the future. The pueblo employs one tribal court judge for the modern court, who is not bar-licensed. 
Additionally, the pueblo does not have public defenders or prosecutors; rather, the police, who are not law-trained, serve as prosecutors in addition to their patrol duties. Criminal cases account for much of the court’s activity for fiscal years 2008 through 2010. Based on data provided by officials for fiscal years 2008 through 2010, with the exception of fiscal year 2009, BIA funding accounted for much of the court system’s entire budget. The Fort Berthold Reservation of the Three Affiliated Tribes covers 1,578 square miles in northwest North Dakota, and is between Rhode Island and Delaware in size. Of the 11,993 enrolled members of the tribe, about half live on the reservation. According to officials, the Three Affiliated Tribes’ court system was established by the Tribal Business Council in the 1930s. Further, officials estimated that in the 1990s, an amendment to the constitution established the court’s authority. The Tribal Business Council has a Judicial Committee, composed of tribal council members, that regularly reviews court operations such as funding, staffing, and evaluation, among other things. The Three Affiliated Tribes have a tribal code that, according to a court official, was developed in 1935. The tribal code contains a criminal code, although officials stated that the court does not have rules of criminal procedure. The code also has a section that addresses federal rules of evidence. According to court officials, it is not always clear what the current law is because the tribal code is not kept up-to-date. The Three Affiliated Tribes’ court system combines aspects of modern and traditional courts. The court is modern in that it applies the tribal code; the court is traditional in that tribal members and court staff are personally acquainted, tribal members who appear before the court readily accept tribal laws that regulate conduct on the reservation, and Indian language is sometimes used in court. 
The court system includes a tribal court and a juvenile court. Appeals from either of these courts are addressed by an intertribal appeals court, the Northern Plains Intertribal Court. The Three Affiliated Tribes’ court system includes a Chief Judge and associate judges, also called magistrate judges. Court officials reported that all judges must be law-trained, bar-licensed members of the tribes. However, at their discretion, the Tribal Council may overrule the requirement that judges must be members of the tribe. The Chief Judge is elected by tribal members for a 4-year term. Associate Judges are appointed by the Tribal Council for 1-year terms. All judges may be removed by the Tribal Council for cause. As of November 2010, the Three Affiliated Tribes’ court system employed a law-trained Chief Judge, two law-trained associate judges, a prosecutor, and a public defender, among other staff. Prosecutors are not required to be law-trained or bar-licensed, according to officials. Criminal offenses account for the majority of the court’s caseload. Traffic violations are considered civil matters; however, they are not included in the data in the table below. Based on data provided by the tribe, the Three Affiliated Tribes’ court system’s main funding sources are the tribal government and BIA. The Tohono O’odham Nation covers 4,456 square miles within Arizona, although it encompasses land on both sides of the U.S.-Mexico border. Tohono O’odham Nation is between Delaware and Connecticut in size. Of the 29,974 members of Tohono O’odham Nation, approximately 13,035, or 43 percent, live on the reservation. The Tohono O’odham Nation adopted its most recent constitution in 1986, which replaced an earlier constitution from 1937. The constitution established a judicial branch and articulates the powers and duties of the court. The judicial branch is an independent branch within the tribal government, according to officials. 
The Tohono O’odham Nation’s criminal code was adopted in 1985 and subsequently has been updated by the legislative branch with input from the Tohono O’odham Prosecutor’s Office and Attorney General’s Office. The most recent version of the code is available on the tribe’s website. The judicial branch has adopted Arizona rules of criminal procedure, with modifications, and has also adopted Arizona rules of evidence. The Tohono O’odham Nation’s court system is composed of a tribal court, an appeals court, a children’s court, a family court, a traffic court, and a criminal court. The chief judge is the constitutionally-mandated administrative head of the judicial branch and oversees the operations and decisions of the court. Appellate cases are heard by a three-judge panel, designated by the chief judge. In order to hear the appeal, the appellate judges must not have presided over the original case. Appeals panel decisions are final. The legislative branch of Tohono O’odham Nation is responsible for the selection of tribal court judges. The judges of Tohono O’odham Nation select a chief judge from among themselves, who serves as the chief administrative officer for the judiciary and serves in that capacity for 2 years. Potential judges pro tempore are referred by the chief judge to the Judiciary Committee of the Tribal Council. All judges are appointed by the legislative branch. The six full-time judges mandated by the constitution are appointed for 6-year terms that are staggered. However, judges may be reappointed to the bench upon application. Judges pro tempore are typically appointed to a term of no more than 6 years. Judicial qualifications, which changed in 2008, include preferences for members of federally-recognized Indian tribes, with first preference given to qualified, enrolled members of the Tohono O’odham Nation. Further, persons with felony or recent misdemeanor convictions are not eligible. 
Finally, the candidate must be either a bar-admitted attorney with Indian law experience or possess a bachelor’s degree and have work experience and training in judicial or law-related fields. Judges may be removed by vote of the Legislative Council upon the petition of a tribal member for felony convictions and malfeasance in office, among other things. Tohono O’odham Nation has six full-time judges, six prosecutors, six full-time public defenders, and approximately 100 support staff, among other staff. Criminal cases accounted for more than 85 percent of the court’s docket as shown in table 20 below. Tohono O’odham Nation’s court was funded, for the most part, by the tribal government during fiscal years 2008 through 2010, though the tribe also received BIA funding. Additionally, a court official explained that in fiscal year 2006, DOJ awarded an Indian Alcohol and Substance Abuse grant totaling $500,000 that permitted the tribe to implement the grant over a 3-year period through fiscal year 2009. In addition to the contact named above, William Crocker and Glenn Davis, Assistant Directors and Candice Wright, analyst-in-charge, managed this review. Ami Ballenger and Christoph Hoashi-Erhardt made significant contributions to the work. Christine Davis and Thomas Lombardi provided significant legal support and analysis. David Alexander provided significant assistance with design and methodology. Katherine Davis provided assistance in report preparation. Melissa Bogar and Rebecca Rygg made contributions to the work during the final phase of the review.
The Department of Justice (DOJ) reports from the latest available data that from 1992 to 2001 American Indians experienced violent crimes at more than twice the national rate. The Department of the Interior (DOI) and DOJ provide support to federally recognized tribes to address tribal justice issues. Upon request, GAO analyzed (1) the challenges facing tribes in adjudicating Indian country crimes and what federal efforts exist to help address these challenges and (2) the extent to which DOI and DOJ have collaborated with each other to support tribal justice systems. To do so, GAO interviewed tribal justice officials at 12 tribes in four states and reviewed laws, including the Tribal Law and Order Act of 2010, to identify federal efforts to assist tribes. GAO selected these tribes based on court structure, among other factors. Although the results cannot be generalized, they provided useful perspectives about the challenges various tribes face in adjudicating crime in Indian country. GAO also compared DOI and DOJ's efforts against practices that can help enhance and sustain collaboration among federal agencies and standards for internal control in the federal government. The 12 tribes GAO visited reported several challenges in adjudicating crimes in Indian country, but multiple federal efforts exist to help address some of these challenges. For example, tribes only have jurisdiction to prosecute crimes committed by Indian offenders in Indian country. Also, until the Tribal Law and Order Act of 2010 (the Act) was passed in July 2010, tribes could only sentence those found guilty to up to 1 year in jail per offense. Lacking further jurisdiction and sentencing authority, tribes rely on the U.S. Attorneys' Offices (USAO) to prosecute crime in Indian country. Generally, the tribes GAO visited reported challenges in obtaining information on prosecutions from USAOs in a timely manner. 
For example, tribes reported they experienced delays in obtaining information when a USAO declines to prosecute a case; these delays may affect tribes' ability to pursue prosecution in tribal court before their statute of limitations expires. USAOs are working with tribes to improve timely notification about declinations. DOI and the tribes GAO visited also reported overcrowding at tribal detention facilities. In some instances, tribes may have to contract with other detention facilities, which can be costly. Multiple federal efforts exist to help address these challenges. For example, the Act authorizes tribes to sentence convicted offenders for up to 3 years imprisonment under certain circumstances, and encourages DOJ to appoint tribal prosecutors to assist in prosecuting Indian country criminal matters in federal court. Federal efforts also include developing a pilot program to house, in federal prison, up to 100 Indian offenders convicted in tribal courts, given the shortage of tribal detention space. DOI, through its Bureau of Indian Affairs (BIA), and DOJ components have taken action to coordinate their efforts to support tribal court and tribal detention programs; however, the two agencies could enhance their coordination on tribal courts by strengthening their information sharing efforts. BIA and DOJ have begun to establish task forces designed to facilitate coordination on tribal court and tribal detention initiatives, but more focus has been given to coordination on tribal detention programs. For example, at the program level, BIA and DOJ have established procedures to share information when DOJ plans to construct tribal detention facilities. This helps ensure that BIA is prepared to assume responsibility to staff and operate tribal detention facilities that DOJ constructs and in turn minimizes potential waste. 
In contrast, BIA and DOJ have not implemented similar information sharing and coordination mechanisms for their shared activities to enhance the capacity of tribal courts to administer justice. For example, BIA has not shared information with DOJ about its assessments of tribal courts. Further, both agencies provide training and technical assistance to tribal courts; however, neither agency knows whether this assistance is unnecessarily duplicative. Developing mechanisms to identify and share information related to tribal courts could yield potential benefits in terms of minimizing unnecessary duplication and leveraging the expertise and capacities that each agency brings. GAO recommends that the Secretary of the Interior and the Attorney General direct the relevant DOI and DOJ programs to develop mechanisms to identify and share information related to tribal courts. DOI and DOJ concurred with the recommendation.
Information technology should enable government to better serve the American people. However, according to OMB, despite spending more than $600 billion on IT over the past decade, the federal government has achieved little of the productivity improvements that private industry has realized from IT. Too often, federal IT projects run over budget, behind schedule, or fail to deliver promised functionality. Proper oversight is critical to combating this problem. Both OMB and federal agencies have key roles and responsibilities for overseeing IT investment management, and OMB is responsible for working with agencies to ensure investments are appropriately planned and justified. However, as we have described in numerous reports, although a variety of best practice documentation exists to guide their successful acquisition, federal IT projects too frequently incur cost overruns and schedule slippages while contributing little to mission-related outcomes. IT acquisition best practices have been developed by both industry and the federal government. For example, the Software Engineering Institute has developed highly regarded and widely used guidance on best practices, such as requirements development and management, risk management, configuration management, validation and verification, and project monitoring and control. This guidance also describes disciplined project management practices that call for the development of project details, such as objectives, scope of work, schedules, costs, and requirements against which projects can be managed and executed. In the federal government, GAO’s own research in IT management best practices led to the development of the Information Technology Investment Management Framework, which describes essential and complementary IT investment management disciplines, such as oversight of system development and acquisition management, and organizes them into a set of critical processes for successful investments. 
This guidance further describes five progressive stages of maturity that an agency can achieve in its investment management capabilities, and was developed on the basis of our research into the IT investment management practices of leading private- and public-sector organizations. GAO has also identified opportunities to improve the role played by Chief Information Officers (CIO) in IT management. In noting that federal law provides CIOs with adequate authority to manage IT for their agencies, GAO also reported on limitations that impeded their ability to exercise this authority. Specifically, CIOs have not always had sufficient control over IT investments; more consistent implementation of CIOs’ authority could enhance their effectiveness. Congress has also enacted legislation that reflects IT management best practices. For example, the Clinger-Cohen Act of 1996, which was informed by GAO best practice recommendations, requires federal agencies to focus more on the results they have achieved through IT investments, while concurrently improving their IT acquisition processes. Specifically, the act requires agency heads to implement a process to maximize the value of the agency’s IT investments and assess, manage, and evaluate the risks of its IT acquisitions. Further, the act establishes CIOs to advise and assist agency heads in carrying out these responsibilities. The act also requires OMB to encourage agencies to develop and use best practices in IT acquisition. Additionally, the E-Government Act of 2002 established a CIO Council, which is led by the Federal CIO, to be the principal interagency forum for improving agency practices related to the development, acquisition, and management of information resources, including sharing best practices. Although these best practices and legislation can have a positive impact on major IT programs, we have previously testified that the federal government continues to invest in numerous failed and troubled projects. 
We stated that while OMB’s and agencies’ recent efforts had resulted in greater transparency and oversight of federal spending, continued leadership and attention was necessary to build on the progress that had been made. In an effort to end the recurring cycle of failed IT projects, this committee has introduced legislation to improve IT acquisition management. Among other things, this legislation would eliminate duplication and waste in IT acquisition, increase the authority of agency CIOs, and strengthen and streamline IT acquisition management practices. We have previously testified in support of this legislation. OMB plays a key role in helping federal agencies manage their investments by working with them to better plan, justify, and determine how much they need to spend on projects and how to manage approved projects. In June 2009, OMB established the IT Dashboard to improve the transparency into and oversight of agencies’ IT investments. According to OMB officials, agency CIOs are required to update each major investment in the IT Dashboard with a rating based on the CIO’s evaluation of certain aspects of the investment, such as risk management, requirements management, contractor oversight, and human capital. According to OMB, these data are intended to provide a near real-time perspective of the performance of these investments, as well as a historical perspective. Further, the public display of these data is intended to allow OMB, congressional and other oversight bodies, and the general public to hold government agencies accountable for results and progress. In January 2010, the Federal CIO began leading TechStat sessions—reviews of selected IT investments between OMB and agency leadership—to increase accountability and transparency and improve performance. 
OMB has identified factors that may result in an investment being selected for a TechStat session, such as—but not limited to—evidence of (1) poor performance; (2) duplication with other systems or projects; (3) unmitigated risks; and (4) misalignment with policies and best practices. OMB reported that as of April 2013, 79 TechStat sessions had been held with federal agencies. According to OMB, these sessions enabled the government to improve or terminate IT investments that were experiencing performance problems. For example, in June 2010 the Federal CIO led a TechStat on the National Archives and Records Administration’s (NARA) Electronic Records Archives investment that resulted in six corrective actions, including halting fiscal year 2012 development funding pending the completion of a strategic plan. Similarly, in January 2011, we reported that NARA had not been positioned to identify potential cost and schedule problems early, and had not been able to take timely actions to correct problems, delays, and cost increases on this system acquisition program. Moreover, we estimated that the program would likely overrun costs by between $205 and $405 million if the agency completed the program as originally designed. We made multiple recommendations to the Archivist of the United States, including establishing a comprehensive plan for all remaining work, improving the accuracy of key performance reports, and engaging executive leadership in correcting negative performance trends. Drawing on the visibility into federal IT investments provided by the IT Dashboard and TechStat sessions, in December 2010, OMB issued a plan to reform IT management throughout the federal government over an 18-month time frame. Among other things, the plan noted the goal of turning around or terminating at least one-third of underperforming projects by June 2012. 
The plan contained two high-level objectives: achieving operational efficiency, and effectively managing large-scale IT programs. To achieve operational efficiencies, the plan outlined actions required to adopt cloud solutions and leverage shared services. To effectively manage IT acquisitions, the plan identified key actions, such as improving accountability and governance and aligning acquisition processes with the technology cycle. Our April 2012 report on the federal government’s progress on implementing the plan found that not all action items had been completed. These findings are discussed in greater detail in the next section. We have previously reported that OMB has taken significant steps to enhance the oversight, transparency, and accountability of federal IT investments by creating its IT Dashboard, improving the accuracy of investment ratings, and issuing a plan to reform federal IT. However, we also found issues with the accuracy and data reliability of cost and schedule data, and recommended steps that OMB should take to improve these data. In July 2010, we reported that the cost and schedule ratings on OMB’s Dashboard were not always accurate for the investments we reviewed, because these ratings did not take into consideration current performance. As a result, the ratings were based on outdated information. We recommended that OMB report on its planned changes to the Dashboard to improve the accuracy of performance information and provide guidance to agencies to standardize milestone reporting. OMB agreed with our recommendations and, as a result, updated the Dashboard’s cost and schedule calculations to include both ongoing and completed activities. Similarly, our report in March 2011 noted that OMB had initiated several efforts to increase the Dashboard’s value as an oversight tool and had used its data to improve federal IT management. 
However, we also reported that agency practices and the Dashboard’s calculations contributed to inaccuracies in the reported investment performance data. For instance, we found missing data submissions or erroneous data at each of the five agencies we reviewed, along with instances of inconsistent program baselines and unreliable source data. As a result, we recommended that the agencies take steps to improve the accuracy and reliability of their Dashboard information, and that OMB improve how it rates investments relative to current performance and schedule variance. Most agencies generally concurred with our recommendations and three have taken steps to address them. OMB agreed with our recommendation for improving ratings for schedule variance. It disagreed with our recommendation to improve how it reflects current performance in cost and schedule ratings, but more recently made changes to Dashboard calculations to address this while also noting challenges in comprehensively evaluating cost and schedule data for these investments. Subsequently, in November 2011, we noted that the accuracy of investment cost and schedule ratings had improved since our July 2010 report because OMB refined the Dashboard’s cost and schedule calculations. Most of the ratings for the eight investments we reviewed as part of our November 2011 report were accurate, although we noted that more could be done to inform oversight and decision making by emphasizing recent performance in the ratings. We recommended that the General Services Administration comply with OMB’s guidance for updating its ratings when new information becomes available (including when investments are rebaselined). The agency concurred and has since taken actions to address this recommendation. Since we previously recommended that OMB improve how it rates investments, we did not make any further recommendations. 
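The variance-based cost and schedule ratings discussed above can be illustrated with a minimal sketch. The function names, thresholds, and red/yellow/green mapping below are illustrative assumptions chosen for exposition; they are not OMB’s actual Dashboard formula, which is more involved and weighs additional factors such as CIO evaluations.

```python
def variance_pct(planned, actual):
    """Percent variance of actual against planned; positive means overrun or slippage."""
    return (actual - planned) / planned * 100.0

def rate_investment(planned_cost, actual_cost, planned_days, actual_days,
                    warn=10.0, alert=30.0):
    """Map the worse of cost and schedule variance to a simple color rating.
    The 10%/30% thresholds are illustrative assumptions, not OMB's cutoffs."""
    worst = max(variance_pct(planned_cost, actual_cost),
                variance_pct(planned_days, actual_days))
    if worst < warn:
        return "green"
    if worst < alert:
        return "yellow"
    return "red"

# A hypothetical investment with a 20 percent cost overrun but on schedule
print(rate_investment(planned_cost=100.0, actual_cost=120.0,
                      planned_days=365, actual_days=365))  # yellow
```

One design point worth noting from the reports above: a rating computed only over completed activities misses ongoing slippage, which is why OMB's refinement to include both ongoing and completed activities improved accuracy.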
Further, in April 2012, we reported that OMB and key federal agencies had made progress on implementing action items from its plan to reform IT management, but found that there were several areas where more remained to be done. Specifically, we reviewed 10 actions and found that 3 were complete, while 7 were incomplete. For example, we found that OMB had reformed and strengthened investment review boards, but had only partially issued guidance on modular development. Accordingly, we recommended, among other things, that OMB ensure that the action items called for in the plan be completed by the responsible parties prior to the completion of the plan’s 18-month deadline of June 2012, or if the June 2012 deadline could not be met, by another clearly defined deadline. OMB agreed to complete the key action items. Finally, we reviewed OMB’s efforts to help agencies address IT projects with cost overruns, schedule delays, and performance shortfalls in June 2013. In particular, we reported that OMB used CIO ratings from the Dashboard, among other sources, to select at-risk investments for reviews known as TechStats. OMB initiated these reviews in January 2010 to further improve investment performance, and subsequently incorporated the TechStat model into its plan for reforming IT management. We reported that OMB and selected agencies had held multiple TechStat sessions but additional OMB oversight was needed to ensure that these meetings were having the appropriate impact on underperforming projects and that resulting cost savings were valid. Among other things, we recommended that OMB require agencies to address their highest-risk investments and to report on how they validated the outcomes. OMB generally agreed with our recommendations, and stated that it and the agencies were taking appropriate steps to address them. 
Subsequent to the launch of the Dashboard and the TechStat reviews, and to help the federal agencies address the well-documented acquisition challenges they face, in 2011, we reported on nine common factors critical to the success of IT investment acquisitions. Specifically, department officials from seven agencies each identified an investment acquisition they considered successful, in that it best achieved its respective cost, schedule, scope, and performance goals. To identify these investments, we interviewed officials from the 10 departments with the largest planned IT budgets in order for each department to identify one mission-critical, major IT investment that best achieved its cost, schedule, scope, and performance goals. Of the 10 departments, 7 identified successful IT investments. Officials from these 7 investments cited a number of factors that contributed to their success. According to federal department officials, the following seven investments (shown in table 1) best achieved their respective cost, schedule, scope, and performance goals. The estimated total life-cycle cost of the seven investments is about $5 billion. Among these seven IT investments, officials identified nine factors as critical to the success of three or more of the seven. The factors most commonly identified include active engagement of stakeholders, program staff with the necessary knowledge and skills, and senior department and agency executive support for the program. These nine critical success factors are consistent with leading industry practices for IT acquisitions. Table 2 shows how many of the investments reported the nine factors and selected examples of how agencies implemented them are discussed below. A more detailed discussion of the investments’ identification of success factors can be found in our 2011 report. 
Officials from all seven selected investments cited active engagement with program stakeholders—individuals or groups (including, in some cases, end users) with an interest in the success of the acquisition—as a critical factor to the success of those investments. Agency officials stated that stakeholders, among other things, reviewed contractor proposals during the procurement process, regularly attended program management office-sponsored meetings, were working members of integrated project teams, and were notified of problems and concerns as soon as possible. In addition, officials from the two investments at National Nuclear Security Administration and U.S. Customs and Border Protection noted that actively engaging with stakeholders created transparency and trust, and increased the support from the stakeholders. Additionally, officials from six of the seven selected investments indicated that the knowledge and skills of the program staff were critical to the success of the program. This included knowledge of acquisitions and procurement processes, monitoring of contracts, large-scale organizational transformation, Agile software development concepts, and areas of program management such as earned value management and technical monitoring. Finally, officials from five of the seven selected investments identified having the end users test and validate the system components prior to formal end user acceptance testing for deployment as critical to the success of their program. Consistent with this factor, leading guidance recommends testing selected products and product components throughout the program life cycle. Testing of functionality by end users prior to acceptance demonstrates, earlier rather than later in the program life cycle, that the functionality will fulfill its intended use. If problems are found during this testing, programs are typically positioned to make changes that are less costly and disruptive than changes made later in the life cycle. 
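Among the program-management skills cited above is earned value management (EVM). Its core metrics are standard and can be computed directly from planned value, earned value, and actual cost; the sample figures in the usage example below are hypothetical.

```python
def evm_metrics(pv, ev, ac):
    """Standard earned value management metrics.
    pv: planned value (budgeted cost of work scheduled)
    ev: earned value (budgeted cost of work performed)
    ac: actual cost of work performed
    All three in the same currency units."""
    return {
        "cost_variance": ev - ac,      # > 0 means under budget
        "schedule_variance": ev - pv,  # > 0 means ahead of schedule
        "cpi": ev / ac,                # cost performance index; < 1 is unfavorable
        "spi": ev / pv,                # schedule performance index; < 1 is unfavorable
    }

# A hypothetical program that planned $10M of work to date,
# completed $8M worth, and spent $9M doing it:
m = evm_metrics(pv=10.0, ev=8.0, ac=9.0)
print(round(m["cpi"], 2), m["spi"])  # both below 1: over cost and behind schedule
```

Indexes like CPI and SPI are what allow a reviewer to spot the negative performance trends, of the kind cited in the NARA example earlier, before they compound.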
In summary, the expanded use of these critical IT acquisition success factors, in conjunction with industry and government best practices, should result in the more effective delivery of mission-critical systems. Further, these factors support OMB’s objective of improving the management of large-scale IT acquisitions across the federal government, and wide dissemination of these factors could complement OMB’s efforts. While OMB’s and agencies’ recent efforts have resulted in greater transparency and oversight of federal spending, continued leadership and attention are necessary to build on the progress that has been made. By improving the accuracy of information on the IT Dashboard, and holding additional TechStat reviews, management attention can be better focused on troubled projects and establishing clear action items to turn these projects around or terminate them. Further, legislation such as that proposed by this committee can play an important role in increasing the authority of agency CIOs and improving federal IT acquisition management practices. Overall, the implementation of our numerous recommendations regarding key aspects of IT acquisition management can help OMB and federal agencies continue to improve the efficiency and transparency with which IT investments are managed, in order to ensure that the federal government’s substantial investment in IT is being wisely spent. Chairman Issa, Ranking Member Cummings, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staffs have any questions about this testimony, please contact me at (202) 512-9286 or at [email protected]. Individuals who made key contributions to this testimony are Dave Hinchman (Assistant Director), Deborah Davis, Rebecca Eyler, Kaelin Kuhn, Thomas Murphy, Jamelyn Payan, and Jessica Waselkow. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government reportedly plans to spend at least $82 billion on IT in fiscal year 2014. Given the scale of such planned outlays and the criticality of many of these systems to the health, economy, and security of the nation, it is important that federal agencies successfully acquire these systems—that is, ensure that the systems are acquired on time and within budget and that they deliver the expected benefits and functionality. However, GAO has previously reported and testified that federal IT projects too frequently incur cost overruns and schedule slippages while contributing little to mission-related outcomes. To help improve these efforts, OMB has launched several initiatives intended to improve the oversight and management of IT acquisitions. In addition, during the past several years GAO has issued multiple reports and testimonies on federal initiatives to acquire and improve the management of IT investments. As discussed with committee staff, GAO is testifying today on IT best practices, with a focus on the results of its report issued on the critical success factors of major IT acquisitions. To prepare this statement, GAO drew on previously published work. Information technology (IT) acquisition best practices have been developed by both industry and the federal government. For example, the Software Engineering Institute has developed highly regarded and widely used guidance on best practices, such as requirements development and management, risk management, validation and verification, and project monitoring and control. GAO's own research in IT management best practices led to the development of the Information Technology Investment Management Framework, which describes essential and complementary IT investment management disciplines, such as oversight of system development and acquisition management, and organizes them into a set of critical processes for successful investments. 
GAO also recently reported on the critical factors underlying successful IT acquisitions. Officials from federal agencies identified seven investments that were deemed successfully acquired in that they best achieved their respective cost, schedule, scope, and performance goals. Agency officials identified nine common factors that were critical to the success of three or more of the seven investments. Officials from all seven investments cited active engagement with program stakeholders as a critical factor to the success of those investments. Agency officials stated that stakeholders regularly attended program management office-sponsored meetings; were working members of integrated project teams; and were notified of problems and concerns as soon as possible. Additionally, officials from six investments indicated that the knowledge and skills of the program staff and support from senior department and agency executives were critical to the success of their programs. Further, officials from five of the seven selected investments identified having the end users test and validate the system components prior to formal acceptance testing for deployment as critical to the success of their program. These critical factors support the Office of Management and Budget's (OMB) objective of improving the management of large-scale IT acquisitions across the federal government; wide dissemination of these factors could complement OMB's efforts. GAO has made numerous recommendations to OMB and agencies on key aspects of IT acquisition management, as well as the oversight and management of those investments.
In the context of electronic government, collaboration can be defined as a mutually beneficial and well-defined relationship entered into by two or more organizations to achieve common goals. It is an in-depth, managed relationship that brings together separate and distinct organizations into a new structure. Recent management reform efforts within the federal government have focused on collaboration as a way to reduce duplication and integrate federal provision of services to the public. Collaboration is a key theme of the President’s management agenda, published in 2002, which aims at making the federal government more focused on citizens and results. One of the key provisions of the management agenda is the expansion of electronic government. To implement this provision, OMB identified and is working on projects that address the issue of multiple federal agencies performing similar tasks that could be consolidated through e-government processes and technology. Specifically, OMB established a team, known as the E-Government Task Force, that analyzed the federal bureaucracy and identified areas of significant overlap and redundancy in how federal agencies provide services to the public. The task force found that multiple agencies were conducting redundant operations within 30 major functions and business lines in the executive branch. Further, each line of business was being performed by an average of 19 agencies, and each agency was involved in an average of 17 business lines. To address these redundancies, the task force evaluated potential projects, focusing on collaborative opportunities to integrate IT operations and simplify processes within lines of business across agencies and around citizen needs. As a result of this assessment, the task force identified a set of 25 high-profile initiatives to lead the federal government’s drive toward e-government transformation and enhanced service delivery through collaboration. 
As the lead agency overseeing the management of these initiatives, OMB developed a strategy for expanding electronic government, which it published in February 2002. In its strategy, OMB established a portfolio management structure to help oversee and guide the selected initiatives and facilitate a collaborative working environment for each of them. This structure includes five portfolios, each with a designated portfolio manager reporting directly to OMB’s Associate Director for IT and E-Government. The five portfolios are “government to citizen,” “government to business,” “government to government,” “internal efficiency and effectiveness,” and “cross-cutting.” Each of the 25 initiatives is assigned to one of these portfolios, according to the type of results the initiative is intended to provide. Further, for each initiative, OMB designated a specific agency to be the initiative’s “managing partner,” responsible for leading the initiative, and assigned other federal agencies as “partners” in carrying out the initiative. Figure 1 provides an overview of the e-government management structure established by OMB. Successful implementation of the 25 cross-agency e-government initiatives—resulting in reductions in redundancies and overlap of federal programs and services—requires effective collaboration. Recognizing that collaboration is challenging, the President’s budget for fiscal year 2004 highlighted the continuing need to establish a collaborative framework for cross-agency e-government initiatives. In November 2002, we reported that despite the importance placed on collaboration in OMB’s e-government strategy, less than half of the initial business cases for the OMB-sponsored initiatives addressed a strategy for successfully collaborating with other government and nongovernment entities. 
Based on these results, we recommended that the OMB Director ensure that managing partners of the 25 initiatives work with partner agencies to develop and document their collaborative strategies. All four of the e-government initiatives that we reviewed have met milestones for the early phases of their planned activities. For example, Web portals were established for two of the initiatives—www.geodata.gov for the Geospatial One-Stop initiative and www.BusinessLaw.gov for the Business Gateway. In addition, the Integrated Acquisition Environment initiative established an online capability that federal customers can use to access a variety of available interagency contracts. However, while the projects are continuing to make progress, some of the tasks they face are increasingly challenging, such as e-Payroll’s objective of establishing governmentwide payroll processing standards or Geospatial One-Stop’s goal of compiling a comprehensive inventory of geospatial data holdings. In July 2003, OMB refocused one initiative, the Business Gateway, which had been making slow progress on its previous objectives. OMB tied the project’s objectives and milestones more closely to the Small Business Paperwork Relief Act’s goal of reducing the burden of federal paperwork on small businesses. The goal of the e-Payroll initiative is to substantially improve federal payroll operations by standardizing them across all agencies, integrating them with other human resource functions, and making them easy to use and cost-effective. To achieve this goal, plans are to consolidate the operations of 22 existing federal payroll system providers; simplify and standardize federal payroll policies and procedures; and better integrate payroll, human resources, and finance functions across federal agencies. OPM, the managing partner for e-Payroll, chose four agencies to be providers of payroll services to all 116 executive branch agencies. 
The four selected providers are GSA and the Departments of Defense, Interior, and Agriculture. The initiative is divided into two major phases: (1) migrating each of the 18 nonselected payroll system providers to one of the four selected providers by September 2004 and (2) defining an enterprise architecture consistent with the Federal Enterprise Architecture model and identifying technology solutions to replace legacy systems. Figure 2 shows the partners and affected parties for the e-Payroll initiative. Of the 22 executive branch agencies that currently operate payroll systems, 6 also provide payroll services to other agencies. The four providers selected by OPM—GSA, Defense’s Defense Finance and Accounting Service, Interior’s National Business Center, and Agriculture’s National Finance Center—handle more than 70 percent of all federal civilian payroll processing and accommodate more than 190 different pay plans. According to OPM, many of the 22 current providers use custom-built systems that have been in operation for many years and need to be replaced. Two of the largest providers needing system replacement estimated the costs of implementing new systems at $46 million to $600 million per system. Conversely, OPM estimates that consolidating current federal payroll systems would yield savings of approximately $1.1 billion over the next 10 years. These savings would result from reducing operating costs, eliminating duplicative systems investments, and simplifying payroll processing. According to OPM project management documents, major phase one objectives of the initiative include (1) defining governance for the initiative, (2) standardizing payroll policies, (3) establishing an e-Payroll enterprise architecture, and (4) overseeing consolidation of agency payroll operations. The first major project deliverable—establishing governance—was completed in June 2002, as scheduled. 
The providers have been selected and a migration schedule established for nonselected agencies. However, the other actions have been delayed. Standardization of policies, originally scheduled for completion in June 2002, is currently ongoing. The enterprise architecture planning task and the initial phase of agency consolidations were both scheduled to begin in October 2002 but were not initiated until January 2003. According to the project manager, these schedule deviations have not led to a significant delay in the overall progress of the initiative toward the goal of consolidating the 22 payroll providers to 4 by September 2004. However, migrating the operations of the 18 nonselected providers to the selected providers, which began in February 2003, could pose new challenges, because previously unidentified discrepancies among agency policies may come to light. Geospatial One-Stop is intended to promote coordination of geospatial data collection and maintenance across all levels of government. Geospatial data—data associated with a geographic location—can be analyzed and displayed through geographic information systems (GIS) to aid decision makers at all levels of government. For example, the Department of Health and Human Services uses GIS technology to analyze data on population and topography (including roads, streams, and land elevation) in order to track the spread of environmental contamination through a community. Using the power of GIS to coordinate and integrate disparate kinds of geospatial data can lead to better-informed decisions about public investments in infrastructure and services—including national security, law enforcement, health care, and the environment—as well as a more effective and timely response in emergency situations. 
The specific objectives of the Geospatial One-Stop initiative include (1) deploying an Internet portal for one-stop access to geospatial data; (2) developing a set of data standards for seven types of geospatial data; (3) creating an inventory of federal data holdings; and (4) encouraging greater coordination among federal, state, and local agencies about existing and planned geospatial data collection projects. The Department of the Interior is the managing partner agency for the initiative. Other federal partners include the Departments of Agriculture, Commerce, Defense, Homeland Security, and Transportation; the Environmental Protection Agency; and the National Aeronautics and Space Administration. Stakeholders include nonpartner federal agencies, the International City/County Management Association, the Intertribal GIS Council, the National Association of State Chief Information Officers, the National States Geographic Information Council, the National Association of Counties, the National League of Cities, and the Western Governors Association. Figure 3 shows the partners and affected parties for the initiative. The Geospatial One-Stop initiative has made progress toward achieving its four objectives. In June 2003, the first publicly available version of the Internet portal was made available online at www.geodata.gov. The portal is intended to serve as a single access point for users seeking links to geospatial data that were previously online but not as easily accessible. The portal was originally scheduled to go online in 2004, based on work being performed by the Open GIS Consortium. However, OMB accelerated this schedule by requiring that the portal be operational by May 2003. In order to have a portal operational within this time frame, the board agreed to turn near-term work over to ESRI, Inc., which developed the portal based on modifications to an existing portal it had built for Interior’s Bureau of Land Management. 
Project officials now plan to make use of the Open GIS Consortium’s development work to enhance www.geodata.gov in 2004. Regarding the second objective—data standards development—project officials developed draft versions of each of the planned standards on schedule in 2003. In most cases the drafts are simplified versions of older standards developed by and for federal agency use. The draft standards were provided for informal public review and comment on the Geospatial One-Stop Web site. By the end of September 2003, project officials had submitted these drafts to the American National Standards Institute, where formal public review will be conducted and the standards will be finalized. Project officials expect the standards to be approved in 2004. Progress in developing an inventory of federal geospatial data holdings—Geospatial One-Stop’s third objective—has been limited. OMB Circular A-11 required that, by the end of February 2003, agencies make metadata about all data sets with a replacement cost exceeding $1 million accessible and searchable for posting on the Internet. Potential users of geospatial data sets need metadata to determine whether the data are useful for their purposes and to be aware of any special stipulations about processing and interpreting the data. An initial inventory of 256,000 existing federal data sets was assembled and made available through the Geospatial One-Stop portal when it was implemented in June 2003, and the Geospatial One-Stop Web site provides an online tool to assist agencies in documenting their geospatial metadata. However, the extent to which agencies have met requirements for submitting metadata is unknown. According to the project’s metadata coordinator, agencies may not be aware of their responsibilities for posting metadata about their geospatial submissions. 
To address this issue, the project team is planning to take steps to improve communication with federal agencies to help ensure that they understand their responsibilities for making geospatial data publicly accessible. To encourage greater coordination among federal, state, and local agencies about existing and planned geospatial data collection projects (the initiative’s fourth objective), an intergovernmental board of directors was established. The purpose of the board is to help ensure collaboration among potential stakeholders from all government sectors. In addition, a Geospatial One-Stop Web site (www.geo-one-stop.gov) was created to provide information about the project, its progress, and its benefits; the project’s management staff and executive director provide briefings across the country to facilitate coordination with states and localities; and an outreach coordinator was appointed to further communication and coordination among partners and stakeholders. The overall goal of the Integrated Acquisition Environment initiative is to create a secure suite of electronic tools to facilitate cost-effective acquisition of goods and services by federal agencies, while eliminating inefficiencies in the current acquisition process. To meet this goal, plans are to (1) consolidate common acquisition functions through a shared services environment; (2) leverage existing acquisition capabilities within agencies to create a simpler, common, integrated business process for buyers and sellers that promotes competition, transparency, and integrity; and (3) develop cross-agency standards to eliminate duplication of effort and redundancy of data. GSA is the managing partner agency. In addition, 31 other federal agencies are considered participating partners in the initiative. Figure 4 shows the partners and affected parties for this initiative. 
Regarding its first objective of consolidating common acquisition functions through a shared services environment, the project was generally on schedule at the time of our review, although several interim milestones were completed later than scheduled. An example of one of the tasks within this objective is the development of “eMarketplace,” an online capability intended to provide federal customers a single access point to interagency contracts and electronic catalogs for goods and services. In July 2002 an initial operational directory structure for interagency contracts was completed, and in May 2003 the directory was made available online for agencies to populate with their contract data. Making the directory available online had been scheduled for December 2002, but this step was delayed because the approval process required to make changes to federal acquisition regulations was lengthier than had been anticipated. Overall there have been no significant deviations from the planned schedule for tasks within the second objective, leveraging existing agency acquisition capabilities to create a common, integrated business process for buyers and sellers. For example, the Integrated Acquisition Environment’s Business Partner Network, based on the Department of Defense’s Central Contractor Registration system, is intended to provide a single point of registration, validation, and access for grantees, federal entities, and companies seeking to do business with the federal government. Since March 2002, the project team has been working to develop this network to serve as a single source for vendor data for the government, to integrate data with other vendor-based systems in the government, and to establish a process for verifying vendor information with third parties, such as verifying vendors’ Taxpayer Identification Numbers with the Internal Revenue Service. 
In February 2003, as scheduled, the Business Partner Network completed the development of an online system that allows contractors to enter their representations and certification information once for use on all government contracts. Previously, vendors were required to submit representations and certification individually for each large purchase contract award. Initial work addressing the project’s third objective—developing cross-agency standards to eliminate duplication of effort and redundancy of data—was also on schedule at the time of our review. The standards to be developed under this objective include data elements, business definitions, interfaces, and agency roles and responsibilities regarding government acquisition data. These standards are expected to serve as a foundation for redesigning the current inefficient process of government-to-government transactions by streamlining ordering, billing, and collection and improving reconciliation of intragovernmental transactions. Since March 2002, the project team has been working on the first task of the standards development process—developing a map of current acquisition practices and defining future acquisition processes. According to project managers, the project team completed this task by the end of September 2003. The implementation phase for the Integrated Acquisition Environment project is scheduled for completion by December 2004. While GSA had successfully completed several scheduled milestones at the time of our review, other major tasks lie ahead. 
These tasks include (1) ensuring that the online directory of contracts is populated and kept up to date, which will require all federal agencies to submit their data into the directory in a standardized format; (2) deploying commercial standards to facilitate interaction among shared acquisition systems, between shared systems and agency systems, and between shared systems and vendor systems; and (3) redesigning and deploying government-to-government transactions, which calls for standard procedures and common data elements to integrate disparate systems and processes across the federal government. The Business Gateway is a cross-agency, intergovernmental effort to create a Web services portal that reduces the burden on small businesses by making it easier for them to find, understand, and comply with governmental laws and regulations. It is intended to provide small businesses with one-stop access to information about federal, state, and local laws and regulations and how to comply with them. More specifically, the Business Gateway is intended to help businesses find information on laws and regulatory requirements, provide assistance through automated tools designed to help businesses understand their regulatory obligations, and transact business by supporting online permit applications and licensing tools. The initiative is focused on four functional areas— environmental protection, workplace health and safety, employment, and taxes—as well as several specific industries, including trucking and mining. SBA is the managing partner agency. Other federal partners include the Environmental Protection Agency; the Department of Labor and its component agency, the Occupational Safety and Health Administration; GSA; the Internal Revenue Service; and the Departments of Transportation, Energy, Interior, and Homeland Security. 
Nonfederal partners include trade associations and state chief information officers from Washington, Illinois, Georgia, Missouri, Iowa, and New Jersey. Figure 5 shows the partners and affected parties for this initiative. The initiative was originally planned to be implemented in two separate phases. Phase one was to consist of implementing www.BusinessLaw.gov, a Web portal intended to serve as a single place for finding “plain English” legal guides and legal and regulatory information links from all 50 states and compliance assistance in 17 areas, such as workplace safety or environmental protection. This phase was completed when the portal became operational in December 2001. The second phase was to make the portal more interactive and broader in focus. More specifically, phase two objectives included (1) developing a navigation tool known as a “portal maximizer,” intended to enhance access to laws and regulations by helping users to quickly find relevant information from large amounts of data; (2) offering a range of automated compliance assistance tools for specific kinds of regulations, as well as a “profiler” to identify applicable tools; and (3) prototyping a transaction engine for integrated business registration, online licensing, and permitting. Before the project was refocused in July 2003, SBA had made only limited progress toward achieving phase two objectives and was not on track to meet its planned 2003 milestones. A pilot version of the planned portal maximizer had been implemented, but only four of the planned automated compliance assistance tools had been developed. Project plans called for up to 30 additional compliance assistance expert tools to be developed during the second phase of the project. The profiler was also behind schedule, with only mockups of the planned user interface developed. Work on three specialized portals for the trucking, food, and chemical industries was also behind schedule. 
The project manager attributed the incomplete progress to a funding shortfall within SBA for fiscal year 2003. On July 1, 2003, OMB announced that it was refocusing the project to reduce the paperwork burden on small businesses. The decision was based on the findings of an interagency task force created by OMB in response to requirements of the Small Business Paperwork Relief Act of 2002. In its final report, the task force stated that it believed the initiative showed promise as a means for achieving the purpose of the Small Business Paperwork Relief Act, since it was intended to ultimately provide small businesses a single point of entry for regulatory compliance information. The refocused project is now aimed at creating a gateway for compliance assistance and online transactions that would reduce the paperwork burden through integrated electronic forms. One of the stated goals of the planned gateway is to increase federal agencies’ compliance with the Government Paperwork Elimination Act to at least 75 percent by September 2004. This was to be achieved by creating, with the help of GSA, a central online repository for federal forms and by consolidating information collections and forms with similar data elements. Another goal is to reduce redundant data and the overall number of federal forms by at least 10 percent. According to several participating agency representatives, it is unclear how the change in the project’s focus will affect implementation of the previously planned modules, such as the profiler and the compliance tools. At the time of our review, no decision had been made about what funding or other resources would be made available to continue development efforts that had been previously under way as part of phase two of the project. 
With the increasing focus on collaboration brought about by the move toward e-government, there has been a need to identify key characteristics that contribute to the success of cross-organizational collaborative e-government projects. Based on a review of government, private sector, and academic research and guidance, we identified five broad key practices that can have a significant impact on the effectiveness of collaboration across disparate organizations. These key collaboration practices could strongly influence whether the 25 OMB-sponsored e-government initiatives are successful. Taken as a whole, these practices can provide an interorganizational project team with the fundamentals for an effective collaborative process. Establishing a collaborative management structure. Building a collaborative management structure across participating organizations is an essential foundation for ensuring effective collaboration. According to the literature we reviewed, strong leadership is critical to the success of intergovernmental initiatives. Involvement by leaders from all levels is important for maintaining commitment and keeping a project on track. Defining a comprehensive structure of participants’ roles and responsibilities is also a key factor. For example, according to a 1998 study by the Intergovernmental Advisory Board, a project to develop a nationwide law enforcement information system was successful due to the establishment of a policy board responsible for coordination and partnership within the law enforcement community. The board’s members represented law enforcement organizations at all levels of government, and the board provided a structure and process to ensure a voice for each member of the partnership. Maintaining collaborative relationships. Once a collaborative management structure is in place, well-defined, equitable working relationships must be developed and take root in order to ensure effective ongoing collaboration. 
Researchers have found that all the partners in a collaborative undertaking need to share a common vision and work in a climate of trust and respect in order to elicit full participation. An important element of establishing effective collaborative relationships is to reach formal agreements with each partner organization on a clear purpose, expected outputs, and realistic performance measures. For example, in an intergovernmental project led by the state of Pennsylvania to enhance its vehicle emissions program, a broad coalition of stakeholder groups representing government, private businesses, and special interest groups was directly involved in selecting a strategy and designing the program. According to a GSA study of the project, the participants worked well together and endorsed the process primarily because all views were considered seriously and many suggestions were incorporated. Contributing resources equitably. The responsibility for meeting a project’s resource requirements needs to be equitably distributed among project participants. In order to facilitate a collaborative environment, each participating organization should contribute resources in the form of human capital or funding to demonstrate its commitment to the success of the project. In addition, formal processes to collect these resources from partner agencies—such as written agreements to document the resource contributions expected from each partner—are useful to support this practice. According to a study performed by the Amherst H. Wilder Foundation, a collaborative group needs to consider the resources of its members. Similarly, partner organizations must be prepared to devote substantial staff hours to the collaborative effort. Facilitating communication and outreach. Another key element of effective collaboration is developing and implementing effective communication and outreach mechanisms. 
Tools that clearly communicate the project status and needs among all partners should be used continuously, targeting all partner organizations and their key decision makers. In addition, effective outreach mechanisms are important to keep informed those stakeholders who may not be actively involved in developing systems or business processes, and an outreach plan may be needed to specify tasks and mechanisms to help promote interest and participation in the project. For example, while working on a collaborative project to reduce highway fatalities, the Department of Transportation implemented a knowledge-sharing management portal to facilitate the exchange of information and ideas between the Federal Highway Administration and the states. This communication tool proved to be effective in ensuring widespread and frequent communication and was subsequently implemented in other transportation communities. Adopting a common set of standards. Developing a common set of standards that are agreed to and used by all project partners is a key factor for effective collaboration. Such standards provide a basis for more seamless systems, data, and business process integration on collaborative projects, and help to ensure that those systems and processes can work together. Specifically, ensuring that there are processes in place by which project partners can select and agree upon standards and that all partners are adopting them are key factors in establishing these essential common standards. In GSA’s Government Without Boundaries program, which provided a virtual pool of government information and services, all stakeholders agreed to a technical approach for interoperability and implemented a demonstration to prove the concept. These five key practices and their major elements are summarized in table 1. The four initiatives we reviewed have all taken steps to promote collaboration with their partner agencies. 
However, none of the initiatives has been fully effective in collaborating with important stakeholders. In comparing the four initiatives’ ongoing and planned activities with the key collaboration practices, we identified significant accomplishments as well as shortcomings and potential challenges. For example, regarding two key practices (establishing a collaborative management structure and contributing resources equitably) we found that three of the four initiatives—e-Payroll, Geospatial One-Stop, and Integrated Acquisition Environment—had taken actions that met planned objectives or that stakeholders found to be effective. However, regarding another key practice—facilitating communication and outreach—an equal number (Geospatial One-Stop, Integrated Acquisition Environment, and Business Gateway) had not taken all the steps they could. The four initiatives have all faced short time frames to accomplish their tasks, and they generally have not fully adopted key collaboration practices because of other competing priorities. However, without involving important stakeholders, the initiatives increase the risk that they will not fully achieve their objectives or the broader goals of the President’s management agenda. OPM has taken positive steps to facilitate collaboration among the e-Payroll initiative’s partners, such as (1) establishing a management structure with well-defined partner agency roles and responsibilities and (2) including the four provider agencies in its effort to identify a common set of payroll standards for the federal government. However, OPM has not fully addressed concerns raised as part of the collaborative process, including concerns about potential changes to payroll standards that may be required for the final migration to the two provider partnerships. 
Interagency collaboration on developing a common set of payroll standards is particularly important because federal agencies operate under a variety of legislative mandates that have complex requirements for payroll processing, all of which must be fully addressed in the new standards. In table 2, we provide an overview of the initiative’s implementation of the key collaboration practices that we identified earlier, followed by a discussion of each of the practices. Establishing a collaborative management structure. OPM has provided guidance to its partner agencies that defines roles and responsibilities and specifies those partners’ responsibilities with respect to their collaborative relationships with their payroll customers. For example, in memorandums of agreement with the four selected payroll providers, OPM defined the structure that would be used to manage the project. The management structure includes the four provider agencies, a payroll advisory council with 11 representatives from different federal agencies, different functional areas (such as human resources, IT, and financial management), and OMB. In addition, OPM developed a plan that outlines the content of service level agreements between payroll providers and their agency clients. According to the plan, such agreements should detail both the scope of client services and performance expectations for the service provider and should specifically address issues such as change management, billing procedures, and support services. Officials from the Department of Agriculture’s National Finance Center and the Department of the Interior’s National Business Center, two of OPM’s four partner agencies, cited this project management approach as successful in promoting collaboration on the e-Payroll project. Maintaining collaborative relationships. OPM has taken steps to develop and maintain collaborative relationships with its partners and other federal stakeholders. 
OPM established a group with representatives from the four payroll providers, which holds regular meetings to address project status and other initiative issues. Officials from three of the four provider agencies told us that this group has been very effective in affording them the opportunity to discuss common issues and concerns. Specifically, Interior’s National Business Center representative told us that this forum allowed the federal payroll providers to discuss standardizing and implementing two recent governmentwide payroll actions—the initiation of flexible spending accounts (a program of optional pretax health and dependent care savings accounts for federal employees) and a retroactive federal pay raise for the first part of 2003—resulting in a consolidated governmentwide time frame for the availability of these features. In order to elicit full participation, all partners in a collaborative undertaking need to share a common vision and work in a climate of trust and respect. One way to create such an environment is by ensuring that all stakeholder concerns are articulated and fully addressed. However, according to one stakeholder, OPM has not always effectively addressed concerns by agencies being affected by e-Payroll consolidation. Specifically, the director of the Payroll/HR Systems Service at the Department of Veterans Affairs (VA) told us that his department was not allowed enough time to make a complete evaluation of payroll providers before OPM finalized its decision to align the department with the Defense Finance and Accounting Service. VA had advised OPM in writing that it had concerns that needed to be resolved before the selection of a provider was finalized. According to VA projections, migrating to the Defense Finance and Accounting Service would be both costly and inefficient, because VA would have to separate its payroll system from its human resources system. 
However, OPM’s written responses did not directly address VA’s concerns but instead emphasized that time available to reconsider the decision was short. For example, in a letter dated January 14, 2003, OPM informed VA that a business case justifying VA’s position would have to be prepared and submitted within 2 days. While OPM exercises the ultimate authority in deciding how payroll operations are to be consolidated, it could put e-Payroll’s overall schedule at risk by not fully considering and responding to stakeholder concerns. Contributing resources equitably. OPM has instituted a collaborative strategy for financing the e-Payroll project that includes guidance identifying the responsibilities of partner and other participating agencies for contributing resources for the e-Payroll initiative. For example, OPM’s plan for financing the consolidation of payroll service providers and the migration of agency payroll operations to designated service providers states that the provider agencies are to recover the costs of their operations from fees levied on their customers as defined in service level agreements. In addition, OPM’s plan relied on OMB to apportion funds to the providers for migration expenses by identifying agency funding contributions in fiscal years 2003 and 2004. The intent was to redirect funding that had been planned for upgrades or other payroll system operations and maintenance to support the governmentwide effort. In keeping with this intent, officials from Energy, Health and Human Services, and the Nuclear Regulatory Commission reported that they were using funds earmarked for upgrade and maintenance of payroll systems to finance migration costs. Facilitating communication and outreach. The e-Payroll management team has taken steps to facilitate effective communication of project status and needs. 
For example, OPM began by inventorying stakeholders to identify those affected by the initiative and then developed a plan for communicating with them. The resulting communications plan identified a variety of methods for conveying project information to affected parties, including direct meetings, workshops, telephone contact, and formal letters to agency heads regarding significant decisions relating to the initiative. OPM also held a governmentwide forum intended to provide information about e-Payroll to agencies and facilitate interaction among the executive branch agencies and the selected providers. In addition, three of the four designated payroll providers reported that attending the quarterly provider conferences and participating in biweekly conference calls sponsored by OPM were effective communications mechanisms. Adopting a common set of standards. Consolidating the existing 22 federal payroll systems into a single system requires that OPM develop a common set of payroll standards that will meet the requirements of multiple federal agencies with different missions and legislated payroll constraints. OPM has taken steps to help ensure that federal agencies have input on development of a common set of standards. For example, OPM commissioned a study to identify significant differences among the payroll processes of the existing 22 providers. Representatives of agencies from a cross section of the executive branch, including all four of OPM’s partners—the selected payroll providers—participated in the study. The resulting 87 payroll standardization opportunities were provided to federal agencies for review and comment. OPM received approximately 250 comments and suggestions for action from federal agencies on the standardization opportunities that it identified. These agencies’ comments show the complexity of the standardization tasks that OPM and its partners have yet to undertake—from proposing new legislation to addressing union negotiations. 
According to OPM officials, a focus group was established in July 2003 to further analyze the previously identified opportunities and develop recommended solutions. Officials told us that standardizing the payroll process is an ongoing effort and that work to develop a single payroll standard would continue with input from other federal agencies. Although OPM has involved its partners and other federal agencies in the process of identifying opportunities for standardization, it still faces the challenging task of getting federal agencies to reach agreement on a single payroll standard that they all can use. As agencies migrate to consolidated payroll providers, changes may need to be made either to the providers' payroll processes and standards—so that the various payroll mandates can be accommodated—or to the mandated requirements themselves, so that agencies can conform to a single standard. Fully identifying and assessing the impact on agencies of potential payroll standards will be a challenging effort. For example, VA's Acting Deputy Assistant Secretary for Finance expressed concern that OPM officials might not appreciate the complexities of administering payroll systems under Title 38 of the United States Code—the legislation that governs VA's payroll processes—and that changes would be necessary to support VA's payroll processing once it migrates to its new payroll provider. According to an OPM study, in addition to Title 38, there are at least 13 other sets of legislated federal payroll provisions that will need to be reviewed and addressed before consolidated federal payroll systems can be implemented. Without effective interagency collaboration, changes mandated by OPM may not fully address agencies' individual payroll processing requirements, increasing the risk that agencies will not be able to migrate as planned to their new payroll providers. 
In commenting on a draft of this report, OPM officials stated that they have taken steps to ensure that a collaborative process was in place for payroll standards development, based on establishing a focus group of cross-agency representatives within the Payroll Advisory Council. If supported by a detailed strategy, OPM's action may help to address this issue. The e-Payroll initiative has achieved initial progress based in part on an effective collaborative management structure and collaborative relationships with its designated payroll providers. However, the issue regarding consideration of VA's concerns could have an adverse impact on the success of the project as migration of agency payroll operations progresses. Furthermore, unless OPM places increased emphasis on collaboration as governmentwide standards are developed and consolidation of payroll systems progresses, it will be at increased risk that the consolidated systems will not meet the needs of all federal agencies. Ensuring effective collaboration on Geospatial One-Stop is a significant challenge. In addition to the eight federal agencies designated as partners, the project's stakeholders include thousands of state and local governments, as well as other nonpartner federal agencies. State and local agencies perform key functions in collecting and managing geospatial data—it is estimated that about 90 percent of geospatial data is collected by state and local governments, and that those governments invest over twice as much as the federal government to collect and maintain such data. Consequently, states' and localities' participation in the Geospatial One-Stop initiative is critical. Interior has taken steps to include nonfederal stakeholders on the project. 
For example, it established an intergovernmental management structure, conducted briefings at meetings and conferences across the country to promote stakeholder participation, appointed an outreach coordinator to facilitate communication with stakeholders, and included states and localities in drafting national geospatial data standards. However, given the large number of stakeholders, Interior has not yet ensured that many states and localities are involved in the project. In addition, although Interior has collaborated with its partners and other stakeholders in developing draft geospatial standards, it has not taken steps to ensure that those standards will be used by a majority of the project's federal, state, or local stakeholders. Table 3 is an overview of the key collaboration practices as implemented by the Geospatial One-Stop initiative, followed by further discussion. Establishing a collaborative management structure. Geospatial One-Stop includes eight federal partners and thousands of other stakeholders—over 3,000 counties, over 18,000 municipalities, and the 50 states, as well as other federal agencies that are not partners on the project. To help ensure that nonfederal stakeholders have a voice in the direction of the project, Interior established an intergovernmental board of directors that votes on significant decisions, such as selection of the portal architecture and establishment of project schedule dates. Two-thirds of the votes are held by state, local, and tribal representatives, and one-third by federal partner agencies. Establishment of the board has worked well to facilitate collaborative intergovernmental management and oversight of the Geospatial One-Stop initiative. For example, at recent board meetings, members discussed issues such as the status of the initiative, standards concerns, and the management structure of the initiative as reflected in its most recent business case. 
The representative to the board from the National States Geographic Information Council told us that state, county, and municipal levels of government were well represented and played a useful role in providing alternative views about the direction of the initiative. Maintaining collaborative relationships. While Geospatial One-Stop has established a management structure to facilitate collaboration, it has made less progress in defining working relationships among its collaborative partners. One positive step was the development of a charter for the project’s board of directors, which discusses authority, responsibilities, voting procedures, and coordinating mechanisms for the board members. The charter was signed by each of the board’s members. However, at the time of our review, other than this charter, only one memorandum of understanding had been established regarding collaborative relationships—an agreement on coordinating GIS standards related to homeland security, which was signed by the Federal Geographic Data Committee, the U.S. Geological Survey, and the National Imagery and Mapping Agency. Without formal agreements among the Geospatial One-Stop project partners, it may be difficult to sustain a shared vision for the project and ensure that progress is being made toward achieving its objectives. Contributing resources equitably. While Geospatial One-Stop initially had difficulty obtaining resource contributions from federal partner agencies, these early problems have largely been resolved. According to the executive director, partner agencies did not contribute funds for fiscal year 2002 as had been projected in the project’s capital asset plan, even though the agencies had been involved in preparing the plan. Instead, Interior provided all fiscal year 2002 funds for the project. 
For fiscal year 2003, the capital asset plan estimated that Interior would contribute about $2.2 million, while the other seven partner agencies would contribute the remaining $6.2 million. According to a project official, all agencies have made their planned contributions. The availability of funds from partner agencies in fiscal year 2003 has allowed Geospatial One-Stop to complete several tasks on schedule, such as deploying the initial version of the www.geodata.gov portal and submitting draft national geospatial data standards to the American National Standards Institute. Facilitating communication and outreach. The Geospatial One-Stop project team uses a number of different mechanisms to communicate information about the project to potential stakeholders and the public. For example, the project management team established a Web site that provides information such as minutes of the board of directors meetings, links to partners’ and other stakeholders’ Web sites, geospatial data standards, and the most recent Geospatial One-Stop business case. The executive director and other Geospatial One-Stop project members also provide briefings and question-and-answer sessions at conferences and participate in other forums to provide information about the project to other stakeholders. The project’s executive director attended the midyear meeting of the National States Geographic Information Council, where he provided a briefing and a luncheon talk about Geospatial One-Stop to all attendees and addressed the attendees’ questions and concerns. In addition, the initiative’s project team, in conjunction with the National Association of Counties, the National League of Cities, and the International City/County Managers Association, conducted a survey of local governments to gather information about the extent of respondents’ use of geospatial data and the reasons why such data are not being used more extensively by those governments. 
Despite these measures, according to state GIS officials, the project has not yet gained participation from other governments because they may not perceive it to be beneficial to undertake the effort and expense of documenting and making available local geospatial data for inclusion in the www.geodata.gov portal. For example, the executive director of Vermont's Center for Geographic Information, Inc., told us that he did not know whether Vermont's geospatial data holdings were being considered for inclusion in Geospatial One-Stop and that the benefits of participation had not been well communicated. In addition, Montana's GIS coordinator told us that Montana had not yet committed to participate in the project and that state government officials did not understand the benefits of participating. According to the Geospatial One-Stop Capital Asset Plan, Interior is planning to provide incentives for state, local, and tribal governments to participate, although the project's executive director told us that carrying out these plans is contingent on approval of funding. Also, in a draft of Interior's fiscal year 2005 plan, several planned actions to accomplish these tasks have been identified. Planned actions include providing funding to help state, local, and tribal organizations to become more engaged in intergovernmental geospatial activities and establishing a liaison program with funding to local stakeholder associations to work with Geospatial One-Stop and serve as a liaison between federal agencies and those associations. In addition, according to the Geospatial One-Stop outreach coordinator, other efforts not provided for in the initiative's capital asset plans include identifying opportunities to promote geospatial information as part of state and local government policy efforts and enhancing outreach in other areas of the project, such as standards development and management of the portal. 
However, there are no plans to develop a formal outreach plan for the Geospatial One-Stop initiative. Unless a detailed plan is documented and implemented for conducting effective outreach, state and local geospatial information may remain inaccessible through the Geospatial One-Stop portal, significantly reducing the usefulness of the portal as a central access point for such data. Adopting a common set of standards. Interior has taken steps to collaboratively develop a set of basic standards to support the collection of interoperable geospatial data for the Geospatial One-Stop initiative. Specifically, project participants have drafted standards for seven types of data as well as a base standard, with participants from other federal agencies, states, localities, the private sector, and academia participating in their development. However, participation in the standards-setting process has been limited. Several large nonpartner federal agencies—such as the Departments of Treasury, Justice, and Health and Human Services—were not represented on the standards development effort. In addition, local government representation included only 23 counties and 3 cities. As a result, the risk is substantial that many federal and local stakeholders may not adopt the proposed standards because those standards may not meet their needs. Further, definition of the standards is only the first step in realizing their benefits; Geospatial One-Stop has not addressed the challenge of gaining consistent implementation of the standards across governments—a key factor in effective collaboration. Many states and localities have already established Web sites that provide a variety of location-related information services, such as updated traffic and transportation information, land ownership and tax records, and information on housing for the elderly, using existing commercial products that are already meeting their needs. 
Hence these organizations are likely to have little incentive to adopt potentially incompatible standards that could require substantial new investments. According to Arizona’s state cartographer, many local governments currently do not comply with existing federal standards because most of their GIS applications were created primarily to meet their internal needs, with little concern for data sharing with federal systems. If designated standards are not widely adopted, geospatial data could continue to be collected in incompatible formats and systems, preventing officials from gaining the benefits of better-informed decisions about public investments in infrastructure and services based on an integrated view of geospatial information. While the Geospatial One-Stop project established a significant collaborative management structure in its broadly representative board of directors, the project has not fully adopted other key collaborative practices. It faces significant challenges in obtaining participation from thousands of potential project stakeholders and obtaining their agreement on and implementation of geospatial data standards. Such participation will be difficult to achieve without a more structured and rigorous outreach effort to involve federal, state, and local government agencies. The General Services Administration has taken steps to ensure that a variety of mechanisms are in place to facilitate collaboration on the Integrated Acquisition Environment initiative. For example, the project team developed a formal charter outlining the objectives, tasks, and roles and responsibilities of project partners, and it is in the process of completing implementation of memorandums of agreement with all participating agencies to further define their roles and financial responsibilities. In addition, GSA has developed a communication strategy for the initiative to help ensure that partners and stakeholders are informed. 
However, that strategy does not include key financial decision makers throughout the government, although our research shows that such officials should be informed of project status and needs on a continuous basis. Finally, GSA’s plans for developing standards for the federal acquisitions process are in line with the key practices that we identified. Table 4 provides an overview of the initiative’s collaboration practices, followed by further discussion. Establishing a collaborative management structure. The project team established a charter for the Integrated Acquisition Environment initiative that all partners and stakeholders agreed to during the initial phase of the project. According to the project manager, the interagency development of and agreement to the initiative’s charter allowed the project team to collectively establish a common foundation for working collaboratively on the initiative. In addition, the project management team established a structure of subteams responsible for leading development within each of five project modules defined in the charter. The subteams consist of representatives from at least 22 agencies who are tasked with serving as the primary liaisons between their agencies and the project management team. This well-defined subteam structure can contribute to effective collaboration at the working level among the many agencies involved in the project. Further, GSA is in the process of developing a comprehensive change management plan to be completed in early 2004. This plan is to address stakeholder involvement through the use of multi-agency, cross-functional teams at the executive level and collaborative design of the system through business area teams populated with partner agency representatives. Maintaining collaborative relationships. 
The project management team is in the process of establishing memorandums of agreement with each partner agency; these agreements further define each partner's role and expected funding contributions. As of September 2003, memorandums of agreement had been signed with 21 agencies, 3 were near completion, and 7 remained to be completed. In addition, GSA officials reported that several collaborative forums for Integrated Acquisition Environment stakeholders were in place. For example, business area teams and project managers hold regular weekly meetings, which serve to reinforce collaborative relationships that cut across organizational boundaries. In addition, an Industry Advisory Board provides industry perspectives on priority needs, requirements, best practices, and trends. Officials from 10 partner and stakeholder agencies that we contacted indicated that the project's collaboration mechanisms were effective. Contributing resources equitably. To date, the project has been successful in obtaining resource contributions from most of its partner agencies. According to GSA officials, as of September 2003, 94 percent of requested funds had been received. According to the project managers, GSA anticipates that all participating partner agencies will contribute their allotted amounts in fiscal year 2004. Facilitating communication and outreach. The Integrated Acquisition Environment's project team has taken a number of concrete steps to build communication and outreach among partners and stakeholders. For instance, the team has developed a detailed communication plan that clearly identifies its audience, as well as various communication tactics, such as creating e-mail news updates, participating in "industry days," meeting with agencies' senior officials, and contributing content to the press. 
Project officials also established an online workspace where participants can share information, organize conferences to share information with private industry, and hold regular team meetings. According to comments from several participants and interested parties, these strategies are effective in providing necessary information regarding the initiative. Interior’s deputy assistant secretary for performance and management, for example, noted that these measures have been effective at promoting collaboration by focusing on sharing information and generating agency support for the initiative. However, the project team has not included all stakeholders that it could in its communication and outreach efforts. Specifically, Chief Financial Officers (CFO) of partner and stakeholder agencies, who make key decisions about financial contributions to the initiative, said they had not been included and consequently have not been kept up to date about the objectives and requirements of the initiative. Representatives of the partner agency CFOs provided suggestions that highlighted shortcomings in GSA’s communications with the financial community to date. For example, Treasury’s CFO noted that the specific objectives of the initiative should be communicated to senior financial managers so that they understand how the initiative will support the missions of their organizations. According to the assistant CFO for the Department of Housing and Urban Development, the project team could more effectively reach the financial community by interacting regularly with the federal CFO Council, a mechanism established as a focal point for financial management issues in the federal government. According to the Integrated Acquisition Environment’s project managers, increased support from the CFOs could increase the likelihood of partner agencies contributing funds to the initiative. 
These officials told us that they are working to better include financial decision makers in future project communications by updating the project's communication plan to include agencies' CFOs and coordinating more actively with the CFO Council as new project modules are developed. In commenting on a draft of this report, GSA officials stated that GSA has scheduled discussions about the initiative with a cross section of CFOs and plans to invite a representative of the CFO Council to participate in the Integrated Acquisition Environment governance body. However, at the time of our review, these actions had not yet been completed. Without taking such an inclusive approach, the project could be at greater risk of not meeting its objectives due to future funding shortfalls. Adopting a common set of standards. The lack of standardization in government-to-government transactions adds to the complexity and inefficiency of the current process. A primary objective of the Integrated Acquisition Environment initiative is to establish standard data elements, business definitions, interfaces, and roles and responsibilities for government acquisitions. Achieving this objective is likely to be challenging. Once agreed upon, the new standards are expected to streamline the data handling processes, reduce workload, improve billing accuracy, and help enforce data stewardship roles and responsibilities. The project team's standards development strategy includes obtaining comments from as many affected federal agencies as possible, which is in line with the key collaboration practices that we identified. Having begun by mapping the process currently in place, the project team intends in October 2003 to begin using commercial standards to develop proposed standard interfaces. As proposed standards are developed, the project team plans to distribute them to all members of the federal procurement community—128 agencies—for comment. 
The process of addressing these comments and reaching final agreement on standards is likely to be challenging, given the number of affected agencies. GSA has adopted a variety of effective collaborative practices that have contributed to progress in advancing the goals of the project. Like the other initiatives, Integrated Acquisition Environment still faces additional challenging tasks, especially in setting standards. Involving agency financial decision makers could help reduce the risk that agencies may not contribute resources in future years. Collaboration on the Business Gateway project is critical at two broad levels. First, several key federal agencies that are responsible for business regulation—such as the Departments of Labor and Transportation and the Environmental Protection Agency—must collaborate to make it easier for businesses to access and comply with their regulations. Second, the Business Gateway project team must collaborate with industry-specific groups that are the subject of business regulation—such as truckers and miners—to ensure that the planned gateway will meet their needs. In specific areas, such as development of the gateway’s profiler module, collaboration has been successful. However, on the whole, SBA’s actions to involve its partners and other stakeholders in the Business Gateway initiative have not addressed many of the areas that we found to be essential to achieving effective collaboration. SBA has not yet taken steps to document project responsibilities in interagency agreements, achieve equitable resource contributions among partners, or provide adequate outreach to partners and potential stakeholders to ensure that they are kept fully informed about the project. Table 5 is an overview of the key collaboration practices as implemented by the Business Gateway initiative, followed by further discussion. Establishing a collaborative management structure. 
To facilitate collaboration on the Business Gateway initiative, SBA developed a project charter that addresses the goals of the initiative, its benefits, project components, and critical success factors. However, the charter does not define an interagency approach to managing the initiative, discuss participants' roles and responsibilities, or establish collaborative decision-making processes. According to the Internal Revenue Service's (IRS) representative to the project, the charter contains no specific assignment of responsibilities—it was developed only to document general support for the concept of the initiative. Without a well-defined decision-making process, including specified roles and responsibilities, designated partner agencies may be unwilling to make significant commitments to supporting the goals and objectives of the initiative. Maintaining collaborative relationships. SBA has not yet established mechanisms to maintain effective relationships with its agency partners or other stakeholders. Although it reached agreements in 2002 with four of its nine federal partner agencies, those agreements specified single, limited-scope project tasks rather than establishing working relationships with a common vision for the initiative. For example, SBA's memorandum of understanding with IRS was to develop a pilot program under which small businesses could apply for Federal Employer Identification Numbers via the Internet rather than by mail or fax. Similarly, SBA's agreement with the Occupational Safety and Health Administration was to develop a tool to help small businesses comply with emergency standards. Further, SBA has not yet established formal agreements with organizations that represent small businesses, such as the American Trucking Association, the Owner-Operator Independent Drivers Association, or the National Private Truck Council—all of which represent the ultimate intended beneficiaries of the initiative's services. 
According to the OMB portfolio manager for government-to-business initiatives, the project has not been able to establish formal collaboration agreements because key management components, such as partner agency roles and responsibilities, have not yet been defined. Without well-defined mechanisms for collaboration, the project risks not meeting the needs of partner agencies or gaining their commitment to continue supporting the project. Contributing resources equitably. SBA also has not developed a strategy for sharing resource commitments across its partner agencies. On the contrary, the project manager’s strategy has relied solely on SBA to fund the initiative. According to the OMB government-to-business portfolio manager, SBA’s strategy was to promote collaboration by not burdening potential partners with financial responsibilities for the initiative. However, in taking on all financial responsibility, SBA also took control of decision-making responsibility, which reduced agency collaboration. Officials from designated partner agencies told us that because they did not provide funds for the initiative, they have had little input in the decision-making process and, as a result, do not have a strong incentive to participate in the Business Gateway. Without the involvement of partner agencies, the initiative risks not being able to achieve its broader objective of providing small businesses with a single integrated source for compliance with federal regulations. Facilitating communication and outreach. The Business Gateway initiative has produced examples of effective communication and outreach. For example, SBA designated the Environmental Protection Agency (EPA) to take the lead in developing the profiler, which is intended to gather information about a user’s business (such as type of business, number of employees, and so on) to aid in providing focused assistance. 
Based on comments from participating agency representatives, EPA has been effective at leading communication and outreach for that task. EPA established a cross-agency workgroup that meets weekly to discuss progress, make decisions, and address the next steps with regard to development of the module. The profiler module workgroup members also routinely coordinate via e-mail and telephone, and EPA communicates updated information on development of the profiler module at projectwide team meetings. Participants in the workgroup told us they found that these meetings and briefings by EPA were an effective means for collaboration. For example, according to the Occupational Safety and Health Administration’s representative on the profiler workgroup, EPA did an excellent job of facilitating consensus as to next steps, specifying what tasks were to be done by participants, following up on performance, and relaying information or requests from SBA. However, despite subgroup examples such as this, communication and outreach by SBA to partners and stakeholders projectwide remain limited, with key decision makers not having access to up-to-date information about the initiative. For example, according to the trucking module leader, key agency decision makers were not involved in meetings, conference calls, and monthly workgroup meetings, and therefore agency participants were limited in their ability to support the initiative because they could not make resource commitments. More specifically, federal agency decision makers were often not present at meetings where decisions, such as those on the costs and schedule, were made for the initiative. As a result, project issues could not be effectively discussed and resolved, slowing progress and hindering collaboration. Adopting a common set of standards. The Business Gateway project team has adopted existing data and technical standards when they were available. 
For example, the team examined the technical reference model associated with the OMB-sponsored Federal Enterprise Architecture to identify relevant standards and ensure that technical elements of the gateway were compatible with the Federal Enterprise Architecture. In cases where standards were not previously defined, the project team either reached agreement or began a process to reach agreement on ad hoc standards. For example, EPA and the Department of Energy agreed to use the same set of basic key words to direct inquiries by users on topics related to environmental protection regulations. These practices are in line with key practices that we identified for adopting common sets of standards. The collaboration challenges faced by the Business Gateway project may have contributed to the slow progress on recent work. Specifically, the lack of well-defined roles and responsibilities may have inhibited the stakeholder participation necessary to complete tasks on schedule. The lack of shared responsibility for funding the project may have also limited stakeholder commitment. In addition, limited communication and outreach left key partners and stakeholders ill-informed about the initiative’s progress and development issues. Each of the four e-government initiatives has made progress toward achieving its overall objectives. A number of early goals have been achieved, including establishing Web portals such as www.geodata.gov for the Geospatial One-Stop initiative and www.BusinessLaw.gov for the Business Gateway project. All four initiatives rely on cross-agency collaboration, and they still have a number of tasks to complete, some of which require extensive interorganizational cooperation and could be very challenging. In our assessment of previous research into cross-organizational collaboration, five broad key practices emerged as being of critical importance. 
These practices include establishing a collaborative management structure, maintaining collaborative relationships, contributing resources equitably, facilitating communication and outreach, and adopting a common set of standards. When assessed according to these practices, the record for the four e-government initiatives is mixed. In some cases, the practices were effectively used, whereas in other cases project managers did not take full advantage of them. For example, while OPM has taken steps to promote close collaboration with its four designated e-Payroll providers, it has not fully addressed the concerns of a key stakeholder that may be required to make costly changes to its payroll processes and policies in response to OPM’s decisions. Interior has instituted a board of directors for Geospatial One-Stop that includes certain state and local representatives, but it has not yet established formal agreements with all of its federal partners or developed an outreach plan to encourage a broad range of states and localities to participate in the initiative. GSA has adopted a variety of effective collaboration practices on the Integrated Acquisition Environment project, but it has not yet fully involved CFOs from partner agencies. Finally, SBA has not yet taken important steps—including defining roles and responsibilities, establishing formal agreements with federal partner agencies, and establishing a funding strategy based on shared resource commitments—to facilitate effective collaboration with its partners and stakeholders. Until these issues are addressed, the initiatives may be at risk of not fully achieving their goals. 
To enhance the effectiveness of collaboration as a tool for the four e-government initiatives to use in achieving their goals, we recommend that the Director of OPM (1) institute a review and feedback process with VA to ensure that its concerns are reviewed and addressed before decisions are made that could have a policy or resource impact on agency payroll operations, and (2) ensure that a collaborative process is in place for development of governmentwide payroll standards; the Secretary of the Interior establish formal agreements with federal agency partners to clarify collaborative relationships and develop an outreach plan for the Geospatial One-Stop initiative that includes specific tasks for contacting and interacting with a wider range of state and local government GIS officials to facilitate and explain the benefits of broad participation in the initiative and promote the use of federal geospatial data standards; the Administrator, GSA, modify the structure of its working groups and other communication mechanisms for the Integrated Acquisition Environment initiative to fully include the CFOs of partner agencies and better ensure that agreed-upon partner resource contributions are made; and the Administrator, SBA, establish a more collaborative management structure for the Business Gateway initiative by defining roles and responsibilities, establishing formal collaboration agreements with federal agency partners, developing a shared funding strategy, and implementing projectwide communication and outreach mechanisms to ensure that key decision makers at partner agencies are kept informed and involved in the management of the project. We received written comments on a draft of this report from the Director of OPM; Interior’s Assistant Secretary Policy, Management and Budget; and SBA’s Program Executive Officer for e-Government. We also received oral comments from the Administrator of GSA. 
All four agencies generally agreed with our discussion of the collaboration challenges facing e-government initiatives. In addition, each of the agencies provided comments and additional or updated information about collaboration activities associated with their initiatives, as well as technical comments, which have been incorporated into the final report where appropriate. OPM stated that it was concerned with our assessment that e-Payroll had not been fully effective in taking steps to promote collaboration with partner agencies. In the report, we noted that OPM has taken steps to develop and maintain collaborative relationships with its partners and focused our concern on OPM’s relationship with VA. Concerning our recommendation that OPM institute a review and feedback process with VA to ensure that concerns are addressed, OPM reported that such a process has been established and that it would continue to hold discussions with VA. In addition, concerning our recommendation that OPM ensure that a collaborative process is in place for the development of governmentwide payroll standards, we noted in the final report OPM’s position that it has taken steps to help ensure a collaborative standards development process by establishing a cross-agency focus group to address standards setting issues. If supported by a detailed strategy, OPM’s actions may help to address the issues we raised. OPM also provided technical comments, which we have incorporated as appropriate. Interior stated that it agreed with our assessment that e-government projects face many challenges and that Geospatial One-Stop had made substantial progress in achieving its initial objectives and goals. Interior also acknowledged that it had not resolved all the challenges in gaining greater collaboration on the part of the potential stakeholders at the state and local levels. 
Interior stated that, in several ways, the draft report had mischaracterized the Geospatial One-Stop project as being “federal- centric.” We do not believe that the report characterizes the initiative in this way. Rather, the focus is on the challenge of gaining as broad participation as possible from state and local representatives, a task that Interior agrees is challenging. Interior’s Assistant Secretary, Policy, Management and Budget, also stated that the agency disagreed that the existence of formal agreements is key to sustaining a vision and making progress. However, Interior noted in its comments that it had established memorandums of agreement or funding agreements with each of its partner agencies. Further, our research into key collaboration practices revealed that formal agreements with a clear purpose, common performance outputs, and realistic performance measures are useful in providing a firm management foundation for collaboration. GSA concurred with our recommendation regarding the Integrated Acquisition Environment initiative. GSA provided additional information about its planned activities to address our recommendation as well as updated information about the status of the initiative. This information has been incorporated in the final report as appropriate. SBA provided several suggested technical corrections to the draft report, and we have made those corrections in the final report where appropriate. In its comments, SBA officials stated that the project manager believed that slow progress in 2003 was due primarily to lack of funding from within SBA and the addition of tasks by OMB, rather than to any shortcomings in collaboration, and that efforts at collaboration had been made until funding for the project became problematic. We have clarified in the final report that the funding shortfall was within SBA and not due to a lack of funding contributions from partner agencies. 
However, as noted in the report, the fact that partner agencies did not share resource commitments for the Business Gateway limited their overall commitment to and involvement in the project, thus putting the project at risk of not meeting its objectives. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies to the Ranking Minority Member, House Committee on Government Reform, and the Ranking Minority Member, Subcommittee on Technology, Information Policy, Intergovernmental Relations and the Census. In addition, we will provide copies to the Directors of OMB and OPM, the Secretary of the Interior, and the Administrators of GSA and SBA. Copies will be made available to others on request. In addition, this report will be available at no charge on the GAO Web site at www.gao.gov. If you should have any questions concerning this report, please call me at (202) 512-6240 or send e-mail to [email protected]. Key contributors to this report were Shannin Addison, Neha Bhavsar, Barbara Collier, Felipe Colón, Jr., Larry Crosland, John de Ferrari, and Elizabeth Roach. Our objectives were to assess (1) the progress that has been made to date in implementing the selected initiatives, (2) the major factors that can affect successful collaboration on e-government initiatives, and (3) the extent to which federal agencies and other entities have been collaborating on the selected initiatives. We considered several factors in selecting the four initiatives for our review. 
These factors included the number of potential collaborating agencies, reported costs of the initiatives, variety among initiative categories (i.e., “government to citizen,” “government to business,” “government to government,” “internal efficiency and effectiveness,” and “cross-cutting”), potential cost savings from implementing the initiatives, variety among managing partners, and variety among the kinds of stakeholders. Based on a consideration of these factors, we selected the following four initiatives: e-Payroll, Geospatial One-Stop, Integrated Acquisition Environment, and Business Gateway. To assess the progress of the initiatives, we reviewed capital asset plans and other project documentation, conducted interviews with project officials, and assessed electronic services made available to customers to date. In addition to determining the status of planned milestones, we evaluated the progress that had been made in achieving the overall objectives of each initiative within the framework of the e-government strategy of the Office of Management and Budget (OMB). To identify key practices affecting collaboration on e-government initiatives, we developed criteria through a review of government, academic, and private sector literature on interorganizational collaboration. We provided these criteria to officials of OMB’s Office of Information and Regulatory Affairs, who agreed that the criteria were reasonable for assessing collaboration on e-government initiatives. Based on these criteria, we summarized individual key practices (i.e., those practices that were most commonly cited among our sources) into five broad practices: establishing a collaborative management structure, maintaining collaborative relationships, contributing resources equitably, facilitating communication and outreach, and reaching agreement on a common set of standards.
To assess the extent to which federal agencies and other entities were collaborating on the selected e-government initiatives, we reviewed project documents related to collaboration, such as communication strategies and memorandums of understanding. We conducted interviews with project managers for each of the initiatives we reviewed, as well as with officials from the four managing partner agencies and OMB’s portfolio managers, to determine collaborative management practices that were in place. We also contacted project officials from the initiatives’ partner agencies, as well as the National States Geographic Information Council (regarding Geospatial One-Stop) and representatives from small business associations (regarding Business Gateway). We collected information from these entities to determine the extent to which key collaboration practices were being used effectively for the four initiatives we studied. Our work was conducted from December 2002 to September 2003 in accordance with generally accepted government auditing standards. Following are the source documents that we consulted in identifying the key collaboration practices described in the body of the report. Program Evaluation: An Evaluation Culture and Collaborative Partnerships Help Build Agency Capacity. GAO-03-454. Washington, D.C.: May 2, 2003. Results-Oriented Management: Agency Crosscutting Actions and Plans in Drug Control, Family Poverty, Financial Institution Regulation, and Public Health Systems. GAO-03-320. Washington, D.C.: December 20, 2002. Results-Oriented Management: Agency Crosscutting Actions and Plans in Border Control, Flood Mitigation and Insurance, Wetlands, and Wildland Fire Management. GAO-03-321. Washington, D.C.: December 20, 2002. September 11: More Effective Collaboration Could Enhance Charitable Organizations’ Contributions in Disasters. GAO-03-259. Washington, D.C.: December 19, 2002. At-Risk Youth: School-Community Collaborations Focus on Improving Student Outcomes. 
GAO-01-66. Washington, D.C.: October 10, 2000. Head Start and Even Start: Greater Collaboration Needed on Measures of Adult Education and Literacy. GAO-02-348. Washington, D.C.: March 29, 2002. Human Services Integration: Results of a GAO Cosponsored Conference on Modernizing Information Systems. GAO-02-121. Washington, D.C.: January 31, 2002. Defense Health Care: Collaboration and Criteria Needed for Sizing Graduate Medical Education. GAO/HEHS-98-121. Washington, D.C.: April 29, 1998.
Federal Agency Studies
Congressional Research Service, The Library of Congress. Federal Interagency Coordination Mechanisms: Varied Types and Numerous Devices. July 22, 2002. http://www.congress.gov/erp/rl/pdf/RL31357.pdf (viewed July 2003). Federal Enterprise Architecture Working Group. E-Gov Enterprise Architecture Guidance. Draft-Version 2.0. July 25, 2002. http://www.feapmo.gov/resources/E-Gov_Guidance_Final_Draft_v2.0.pdf (viewed July 2003). Federal Highway Administration, Office of Travel Management, Office of Operations (Department of Transportation). The Practice of Regional Transportation Operations Collaboration and Coordination. May 7, 2003. www.ops.fhwa.dot.gov/RegionalTransOpsCollaboration/note.htm (viewed August 2003). Food and Drug Administration (Department of Health and Human Services). An Agency Resource for Effective Collaborations: The Leveraging Handbook. June 2003. www.fda.gov/oc/leveraging/handbook.pdf (viewed July 2003). General Services Administration. Building Blocks for Successful Intergovernmental Programs. August 29, 2001. www.gsa.gov/Portal/content/pubs_content.jsp?contentOID=119122&contentType=1008 (viewed July 2003). Hodges, S., T. Nesman, and M. Hernandez. Promising Practices: Building Collaboration in Systems of Care. A special report prepared at the request of the Department of Health and Human Services. 1999. www.mentalhealth.org/cmhs/ChildrensCampaign/PDFs/1998monographs/vol6.pdf (viewed July 2003). Institute for Educational Leadership.
Building Effective Community Partnerships. A special report prepared at the request of the Office of Juvenile Justice and Delinquency Prevention, Office of Justice Programs, U.S. Department of Justice. www.ojjdp.ncjrs.org/resources/files/toolkit1final.pdf (viewed July 2003). Intergovernmental Advisory Board (General Services Administration). Federal, State and Local Government Experiences: Foundations for Successful Intergovernmental Management. October 1998. www.gsa.gov/cm_attachments/GSA_PUBLICATIONS/Main_8_R2AV262_0Z5RDZ-i34K-pR.doc (viewed July 2003). Joint Chiefs of Staff (Department of Defense). Concept for Future Joint Operations: Expanding Joint Vision 2010. May 1997. www.dtic.mil/jointvision/history/cfjoprn1.pdf (viewed July 2003). Joint History Office, Joint Chiefs of Staff (Department of Defense). The History of the Unified Command Plan 1946–1993. February 1995. www.dtic.mil/doctrine/jel/history/ucp.pdf (viewed July 2003). National Highway Traffic Safety Administration (Department of Transportation). Keys to Success: State Highway Safety and EMS Agencies Working Together to Improve Public Health. August 2000. www.nhtsa.dot.gov/people/injury/ems/pub3/index.htm (viewed July 2003). Office of Intergovernmental Solutions, General Services Administration. Government Without Boundaries: A Management Approach to Intergovernmental Programs. May 23, 2002. Office of Regulatory Affairs, Food and Drug Administration (Department of Health and Human Services). Partnership Agreements. October 2002. www.nhtsa.dot.gov/people/injury/ems/pub3/index.htm (viewed July 2003). Rinehard, Tammy A., Anna T. Laszlo, and Gwen O. Briscoe. Collaboration Toolkit: How to Build, Fix, and Sustain Productive Partnerships. A special report prepared at the request of U.S. Department of Justice, Office of Community Oriented Policing Services. 2001. www.cops.usdoj.gov/default.asp?item=344 (viewed July 2003).
Biedell, Jeff, David Evans, Daniela Ionova-Swider, Jonathan Littlefield, John Mulligan, and Je Ryong Oh. Facilitating Cross Agency Collaboration. Smith School of Business, University of Maryland. December 2001. www.estrategy.gov/documents/fall_report-collaboration_121101.pdf (viewed July 2003). Center for Technology in Government. Tying a Sensible Knot: Best Practices in State-Local Information Systems, Executive Briefing, 2001. University at Albany/SUNY. Collaboration: Because It’s Good for Children and Families: A Wisconsin Resource Manual. www.collaboratingpartners.com/CollabManDemo.pdf (viewed August 2003). Dawes, Sharon S., Theresa A. Pardo, David R. Connelly, Darryl F. Green, and Claire R. McInerney. Partners in State and Local Information Systems: Lessons from the Field. Center for Technology in Government. University at Albany/SUNY. October 1997. www.ctg.albany.edu/publications/reports/partners_in_sli/partners_in_sli.pdf (viewed July 2003). Industry Advisory Council. Cross-Jurisdictional Government Implementations. September 2002. www.iaconline.org/pdfs/X-Juris_eGov.pdf (viewed July 2003). La Vigne, Mark, David R. Connelly, Donna S. Canestraro, and Theresa A. Pardo. Reassessing New York: A Collaborative Process. Center for Technology in Government. University at Albany/SUNY. June 2000. www.ctg.albany.edu/publications/reports/reassessing_ny/reassessing_ny.pdf (viewed July 2003). Treasury Board of Canada. The Federal Government as “Partner”: Six Steps to Successful Collaboration. November 1995. www.tbs-sct.gc.ca/pubs_pol/opepubs/TB_O3/dwnld/fgpe_e.rtf (viewed July 2003). UK Department for Transport, Local Government and the Regions (DTLR) and JSS Pinnacle (now Pinnacle psg). Partnership: A Working Definition. Partnership Series, Paper Number 1. October 1998. www.pinnacle-psg.com/documents/consultancy/so_consultancy_publications_detr_paper1.pdf (viewed July 2003). Axner, Marya, and Bill Berkowitz.
Promoting Coordination, Cooperative Agreements and Collaborative Agreements Among Agencies. Community Tool Box. University of Kansas. ctb.ukans.edu/tools/en/sub_section_main_1229.htm (viewed July 2003). Bailey, Darlyne, and Kelley McNally Koney. Interorganizational Community Based Collaboration: A Strategic Response to Shape the Social Work Agenda. Social Work, Volume 41, Issue 6, 1996. Bardach, Eugene. Getting Agencies to Work Together: The Practice and Theory of Managerial Craftsmanship. Brookings Institution Press, 1998. Baum, C., and A. Di Maio. Sharing Risk: Government/Business Partnerships. Gartner (www.gartner.com), October 25, 2002. Cameron, Marsaili, and Steve Cranfield. Unlocking the Potential: Effective Partnerships for Improving Health. NHS-Executive North Thames, September 1998. www.doh.gov.uk/pub/docs/doh/unlomain.pdf (viewed July 2003). Chrislip, David D., and Carl E. Larson. Collaborative Leadership: How Citizens and Civic Leaders Can Make a Difference. San Francisco: Jossey-Bass Publishers, 1994. Gray, Barbara, and Eric Trist. Collaborating: Finding Common Ground for Multiparty Problems. San Francisco: Jossey-Bass Publishers, 1989. Keller, B. Breaking Down the Walls: Collaboration in the Public Sector. Gartner (www.gartner.com), October 5, 2001. Keller, B., F. Caldwell, and C. Baum. Mr. President, Take Down Those E-Government Roadblocks. Gartner (www.gartner.com), March 2, 2001. Mahoney, J. Public Sector: Beware of Incompatible Partners. Gartner (www.gartner.com), September 18, 2002. Mattessich, Paul W., Marta Murray-Close, and Barbara R. Monsey. Collaboration: What Makes It Work, 2nd ed. Saint Paul, Minnesota: Wilder Publishing Center, 2001. Peterson, K. Determining Your Role in C-Commerce Relationships. Gartner (www.gartner.com), October 12, 2001. Phelan, P. Implementing Best Practices for Collaborative Processes. Gartner (www.gartner.com), October 22, 2002. Scardino, L., and G. Kreizman. Innovation Funds: A Model for E-Government.
Gartner (www.gartner.com), February 16, 2001. Schumaker, Alice, B. J. Reed, and Sara Woods. “Collaborative Models for Metropolitan University Outreach: The Omaha Experience.” Cityscape: A Journal of Policy Development and Research, Volume 5, Number 1, 2000. Smith, Alan. Collaboration between Educational Institutions: Can Various Individual Successes Translate into a Broad Range of Sustained Partnerships? University of Southern Queensland. www.com.unisa.edu.au/cccc/papers/refereed/paper44/Paper44-1.htm (viewed July 2003). University of Vermont. Strengthening Community Collaborations: Essentials for Success. crs.uvm.edu/nnco/cd/collabh3.htm (viewed July 2003).
In accordance with the President's management agenda, the Office of Management and Budget has sponsored initiatives to promote expansion of electronic government--the use of information technology, particularly Web-based Internet applications, to enhance government services. Each initiative demands a high degree of collaboration among organizations. For four of these initiatives, GAO was asked to determine, among other things, their implementation progress and the extent of collaboration among agencies and other parties involved. All four of the e-government initiatives that GAO reviewed have made progress in meeting the objectives and milestones of their early phases. Two of the initiatives have established Web portals--www.geodata.gov for the Geospatial One-Stop initiative and www.BusinessLaw.gov for the Business Gateway. The projects face additional challenging tasks, such as e-Payroll's objective of establishing governmentwide payroll processing standards and Geospatial One-Stop's goal of compiling a comprehensive inventory of geospatial data holdings. All four initiatives have taken steps to promote collaboration with their partner agencies, but none has been fully effective in involving all important stakeholders. For example, for the e-Payroll initiative, the Office of Personnel Management has taken steps to promote close collaboration with its four designated e-Payroll providers, but has not addressed the concerns of a key stakeholder that will be required to make changes to its payroll processes and policies. For Geospatial One-Stop, Interior has established a board of directors with broad representation, but has not taken steps to ensure that key stakeholders at the state and local levels are involved in the initiative. For the Integrated Acquisition Environment initiative, the General Services Administration is using a variety of tools to promote collaboration, but has not involved partner agencies' chief financial officers (CFOs). 
Finally, for the Business Gateway, the Small Business Administration has not taken key steps to facilitate effective collaboration with its partners and stakeholders, such as establishing a collaborative decision-making process and reaching formal agreements on partner roles and responsibilities. All four initiatives have faced short time frames to accomplish their major tasks, so that competing priorities have sometimes hindered full collaboration. However, without effective collaboration on the tasks that remain to be completed, these initiatives may be at risk of not fully achieving their objectives or the broader goals of the President's management agenda.
Laws enacted since 2006 have directed CMS to collect performance information on providers and eventually reward quality and efficiency of care rather than reimburse for volume of services alone. The Tax Relief and Health Care Act of 2006 required the establishment of the Physician Quality Reporting System (PQRS) to encourage physicians to successfully report data needed for certain quality measures. PQRS applies payment adjustments to promote reporting by eligible Medicare professionals (EPs)—including physicians, nurses, physical therapists, and others. In 2013, EPs could report data to PQRS using claims, electronic health records (EHR), or a qualified registry, or opt for CMS to calculate quality measures using administrative claims data. Under its group practice reporting option, CMS allows EPs to report to PQRS as a group, either through a registry or a web-based interface. The Medicare Improvements for Patients and Providers Act of 2008 established the Physician Feedback Program, under which CMS was required, beginning in 2009, to distribute confidential feedback reports, known as Quality and Resource Use Reports (QRUR), to show physicians their performance on quality and cost measures. The Patient Protection and Affordable Care Act required HHS to coordinate the Physician Feedback Program with a Value Modifier (VM) that will adjust fee-for-service (FFS) physician payments for the relative quality and cost of care provided to beneficiaries. In implementing the VM, CMS’s Center for Medicare intends to use PQRS and cost data from groups of EPs defined at the taxpayer identification number level to calculate the VM and then report the payment adjustments in the QRURs. As required in the act, CMS plans to apply the VM first to select physicians in 2015 and to all physicians in 2017. As required by law, CMS implemented a performance feedback program for Medicare physicians, which serves as the basis for eventual payment adjustments. (See fig. 1.) 
In our December 2012 report on physician payment incentives in the VM program, we found that CMS had yet to develop a method of reliably measuring the performance of physicians in small practices, that CMS planned to reward high performers and penalize poor performers using absolute performance benchmarks, and that CMS intended to annually adjust payments 1 year after the performance measurement period ends. We recommended that CMS develop a strategy to reliably measure the performance of small physician practices, develop benchmarks that reward physicians for improvement as well as for meeting absolute performance benchmarks, and make the VM adjustments more timely, to better reflect recent physician performance. CMS agreed with our recommendations, but noted that it was too early to fully implement these changes. Private entities we reviewed provided feedback mostly to groups of primary care physicians practicing within newer delivery models. Each entity decided which measures to report and which performance benchmarks to use, leading to differences in report content across entities. Largely relying on claims data, health insurers spent from 4 to 6 months to produce the annual reports. To meet the information needs of physicians, they all provided feedback throughout the year. The entities also generally offered additional report detail and other resources to help physicians improve their performance. The private entities in our review had discretion in determining the number and type of physicians to be included in their performance reporting initiatives, and their feedback programs generally included only physician groups participating in newer delivery models—medical homes and ACOs—with which they contract. Within this set of providers, the entities used various approaches to further narrow the physician groups selected to receive performance feedback. 
For example, one entity told us that only physician groups accredited by a national organization focused on quality were eligible for participation in its medical home program, which included physician feedback reports. Private entities’ feedback programs were generally directed toward primary care physician practices. One entity defined primary care as family medicine, internal medicine, geriatrics, and pediatrics; and included data on the services furnished by nurse practitioners and physician assistants in its medical group reports. The entities indicated that they rarely provided reports directly to specialty care physician groups. Among those that did, the programs typically focused on practice areas considered significant cost drivers—obstetrics/gynecology, cardiology, and orthopedics. Entities further limited their physician feedback programs to groups participating in medical homes with a sufficient number of attributed enrollees to ensure the reliability of the reported measures. In medical home models, enrollees are attributed to a physician (or physicians) responsible for their care, who is held accountable for the quality and cost of care, regardless of by whom or where the services are provided. Among those entities we spoke with, the minimum enrollment size for feedback reporting varied widely, with most requiring a minimum of between 200 and 1,000 attributed enrollees to participate in the program. For example, one entity had two levels of reporting in its medical home program, differentiated by the number of attributed enrollees. In one medical home model, the entity required more than 2,000 attributed enrollees for participation and rewarded the practices through shared savings. In a second medical home model, the entity included practices with fewer than 1,000 attributed enrollees, but these practices did not share in any savings. 
According to the entities in our study, small physician practices (including solo practitioners) typically received performance reports for quality improvement purposes only. Because smaller practices may not meet minimum enrollment requirements needed for valid measurement, private entities generally did not link their performance results to payment or use them for other purposes. For example, one entity provided feedback to practices of one to three primary care physicians upon request, but did not publicly report these practices’ data on its website. To increase the volume of patient data needed for reliable reporting, some entities pooled data from several small groups and solo practitioners and issued aggregate reports for those small practices. Most of the entities that used this method said they applied their discretion in forming these “virtual” provider groups; however, another entity commented that allowing small practices to voluntarily form such groups for measurement purposes would be advantageous. Because each private entity in our study determined the number and types of measures on which it evaluated physician performance, the measures used in each feedback program differed. Each entity decided on quality measures to include, and many also identified utilization or cost measures for inclusion. For example, one entity allowed its ACOs to choose 8 to 10 measures from among a set of about 18 measures. To assess physicians’ quality and utilization/cost results, the entities used absolute or relative performance benchmarks. Private entities generally reported on physician quality using many more process-of-care measures than outcomes-of-care measures. Entities in our review commonly included indicators of clinical care in areas such as diabetes care, cardiovascular health, and prevention or screening services for both their adult and pediatric patients. 
The most common measure reported by all entities was breast cancer screening, followed by hemoglobin A1C testing, a service used to monitor diabetes. We found wide variation in the number and type of measures in private entities’ quality measure sets. The total number of quality measures used in the feedback reports ranged from 14 to 51. Measures typically fell into one of several measurement areas, each with as few as one or as many as 20 individual measures. For example, in the quality measurement areas for pulmonary and respiratory conditions, one private entity reported on a single measure (appropriate use of medications for asthma), while another reported three measures (appropriate use of medications for asthma, appropriate testing for pharyngitis, and avoidance of antibiotic treatment for adults with acute bronchitis). Although primarily focused on clinical quality measures, entities also included nonclinical measures, such as patient safety and patient satisfaction. (See app. II for more information on the number and types of quality measures included in sample reports provided by the entities we reviewed.) Even when entities appeared to report on similar types of measures in common areas, we found considerable variability in each measure’s definition and specification. For this reason, results shown in physician feedback reports may not be comparable across entities. As shown in figure 2, the diabetes hemoglobin A1C measure was defined and used in different ways in our selected entities’ reports. In some cases, entities calculated the percentage of enrollees with diabetes within a certain age range who received the test. In other cases, the entities calculated the percentage of enrollees with diabetes within a certain age range who had either good or poor control of the condition, as determined from a specified hemoglobin A1C result. 
In addition, some entities defined their diabetic patient population as enrollees from 18 to 75 years of age, while another did not indicate the age range, and one entity set the age range from 18 to 64 years of age. Some, but not all, private entities in our review included utilization or cost measures in their performance reports to physicians. Total cost of care per enrollee was the most commonly used measure, but cost measures disaggregated by type of service—facility, pharmacy, primary care physician, and specialty—were also used. Some entities described how they limited their reporting of a total cost of care measure to those medical groups with a large number of enrollees. In one case the minimum enrollment size was 20,000 enrollees and in another it was 2,500 enrollees. Officials from one entity also told us that they allowed smaller physician practices to combine their data in order to meet the required number of enrollees for receiving feedback on cost of care. In addition to feedback on the total cost of care per enrollee, some reports given to groups of primary care physicians contained information on the cost of care provided by specialists in the entity’s network. For example, one entity provided trend data that included the number of specialist visits (total and by type) and the number of patients with one or more visits for these specialty areas. (See fig. 3.) For the two specialties with the most enrollee visits during the measurement period—orthopedic surgery and dermatology—the entity also provided the medical group with data on which specialists were seen most frequently and their cost per visit. This information was intended to encourage cost-efficient referrals. Another entity said that, as of July 2013, it was focusing on a program to provide primary care physicians with feedback on cardiologists’ performance, showing where care was being delivered most efficiently. 
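To make concrete why such specification differences render results non-comparable across entities, the sketch below scores the same patient panel under two hypothetical versions of a hemoglobin A1C measure. All patient records, age ranges, and thresholds here are invented for illustration and do not reflect any entity's actual measure definitions.

```python
# Hypothetical illustration only: one patient panel scored under differing
# "diabetes hemoglobin A1C" specifications, as described above.
# Records: (age, has_diabetes, received_a1c_test, last_a1c_result)
patients = [
    (45, True,  True,  6.8),
    (70, True,  True,  9.5),
    (80, True,  True,  7.2),   # excluded by an 18-75 age specification
    (30, True,  False, None),
    (55, False, True,  5.4),   # not diabetic: excluded from all denominators
]

def a1c_testing_rate(patients, age_min, age_max):
    """Share of diabetic enrollees in the age range who received the test."""
    denom = [p for p in patients if p[1] and age_min <= p[0] <= age_max]
    tested = [p for p in denom if p[2]]
    return len(tested) / len(denom)

def a1c_poor_control_rate(patients, age_min, age_max, threshold=9.0):
    """Share of tested diabetic enrollees whose last result exceeds the threshold."""
    denom = [p for p in patients if p[1] and p[2] and age_min <= p[0] <= age_max]
    poor = [p for p in denom if p[3] > threshold]
    return len(poor) / len(denom)

# The same panel yields different "A1C measure" results under each specification:
print(round(a1c_testing_rate(patients, 18, 75), 2))       # 0.67 (testing rate, ages 18-75)
print(round(a1c_testing_rate(patients, 18, 64), 2))       # 0.5  (narrower age range)
print(round(a1c_poor_control_rate(patients, 18, 75), 2))  # 0.5  (control-based definition)
```

A report reader comparing a "0.67" from one payer with a "0.5" from another could not tell whether the difference reflects care quality or merely the denominator definition, which is the comparability problem the report describes.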
By providing such information, the entity expected primary care physicians to take cost differences into account when making referrals, rather than basing referrals solely on historical habits. Disseminating information to primary care physicians about the relative cost of specialty care providers is a key aspect of medical home and ACO programs. The entities were fairly consistent in the number and types of utilization measures they selected for feedback reporting. The most common utilization measures reported by our private entities were physicians’ generic drug prescribing rates, followed by emergency department visits, inpatient visits, hospital readmissions, and specialist visits. One entity provided additional detail under the emergency department visits measure to show the number of patients that repeatedly seek care at emergency departments. Officials from the entity told us that this measure was included to alert physicians of potentially avoidable hospital visits so that they can encourage patients to use office-based care before seeking care in more costly settings. (See examples of this measure as presented by private entities in their sample reports in fig. 4.) To evaluate physician performance, the selected private entities compared the measures data to different types of benchmarks. Some entities compared each physician group’s performance results to that of a peer group (e.g., others in the entity’s network or others in the collaborative’s state or region); some entities compared physician groups’ results to a pre-established target; and others gauged physician groups’ progress relative to their past performance. (See fig. 5.) Entities generally used two or three such benchmarks in their feedback reports. For example, one entity separately displayed results for the medical home’s commercially insured, Medicare insured, and composite patient population. 
Within each of these population groups, it compared the practice’s performance to the average for nonmedical home practices, as well as to the practice’s performance in the prior measurement year. The entity also gave narrative detail to indicate favorable or unfavorable performance. The most common benchmark for the entities in our study was a physician group’s performance relative to the previous measurement period. However, some entities used this benchmark only for utilization/cost measures and not for quality measures. Private entity officials told us they relied on claims as their primary data source for performance reporting. However, several private entities noted shortcomings in relying solely on claims data—the billing codes that describe a patient’s diagnoses, procedures, and medications—for performance reporting. Some entities supplemented their claims data by obtaining information from EHRs, patient satisfaction surveys, or chart extractions. Entities noted that using EHR data was resource-intensive for both providers and payers, because they depended on physician groups to submit the information. The entities we spoke to have had limited success in using EHR data as a primary data source, although many saw it as complementary to claims data. Another entity supplemented its claims data with data from registries that compile information from administrative data sets, patient medical records, and patient surveys, and thus have the capacity to track trends in quality over time. The health insurers in our review typically spent from 4 to 6 months to produce and distribute annual performance reports; in contrast, the health care collaboratives spent 9 to 10 months. (See illustrations of these timelines in fig. 6.) 
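The multi-benchmark assessments described earlier (a peer-group average, a pre-established target, and the practice's own prior-year result) can be sketched as a simple comparison of one practice's result against each benchmark type. All values below are invented for illustration.

```python
# Hypothetical sketch: one practice's measure result compared against the
# three benchmark types described above. All values are invented.
practice_rate = 0.72  # e.g., this year's breast cancer screening rate

benchmarks = {
    "peer_average": 0.68,  # average for other practices in the payer's network
    "target": 0.75,        # pre-established goal
    "prior_year": 0.65,    # the practice's own result in the prior period
}

def compare(rate, benchmarks):
    """Label the practice as 'above' or 'below' each benchmark."""
    return {name: ("above" if rate >= value else "below")
            for name, value in benchmarks.items()}

print(compare(practice_rate, benchmarks))
# {'peer_average': 'above', 'target': 'below', 'prior_year': 'above'}
```

As the output shows, a single result can be above the peer average and the practice's own history yet below the target, which is why entities report two or three benchmarks side by side rather than only one.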
As is common in the health insurance industry, payers require a 3-month interval after the performance period ends—referred to as the claims run-out—to allow claims for the services furnished late in the measurement period to be submitted and adjudicated for the report. The claims run-out was followed by 1 to 3 months to prepare the data, a period that allowed for provider attribution, risk-adjustment, measure calculation, and quality assurance. One collaborative stated that the quality assurance process was helpful in increasing physician trust because the group is able to compare its own data with the collaborative’s data before results are final. The statewide health care collaboratives we spoke with required additional time to collect and aggregate data from multiple health insurers, and their final reports were issued at least 9 months after the end of the performance period. The time needed for some or all of these report production steps varied depending on the entity and the types of measures included. Collaboratives often used all-payer claims databases—centralized databases to which each payer submits claims data on that state’s health care providers—for aggregate reporting to providers. Officials from entities told us that all-payer claims databases are helpful because they provide physicians with a better picture of their entire patient panel, not just results determined by individual payers for limited sets of patients. One entity noted that it aggregates its quality data with other payers in its commercial market through a statewide organization, and no one payer can provide statistically meaningful data to a physician group on its own. Officials from one entity with all-payer claims database experience told us that the addition of Medicare data into these databases would improve the information available for measurement and feedback. 
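The value of aggregating across payers can be illustrated with a small sketch: no single payer's slice of a practice's panel meets a reliability threshold, but the pooled all-payer denominator does. The patient counts and the threshold below are invented for illustration.

```python
# Hypothetical sketch of pooling one physician group's measure data across
# payers, as an all-payer claims database allows. All numbers are invented.
# Per-payer data: (eligible_patients, patients_meeting_measure)
payer_data = {
    "payer_a": (40, 30),
    "payer_b": (25, 21),
    "payer_c": (60, 42),
}

MIN_DENOMINATOR = 100  # assumed reliability threshold for reporting a rate

def pooled_rate(payer_data):
    """Combine numerators and denominators across all payers."""
    denom = sum(d for d, _ in payer_data.values())
    num = sum(n for _, n in payer_data.values())
    return num / denom, denom

rate, denom = pooled_rate(payer_data)
any_single_payer_reliable = any(d >= MIN_DENOMINATOR for d, _ in payer_data.values())
print(any_single_payer_reliable)  # False: no payer alone meets the threshold
print(round(rate, 2), denom)      # pooled rate on 125 patients is reportable
```

This mirrors the entity's comment that no one payer can provide statistically meaningful data to a physician group on its own, while the statewide aggregate can.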
In addition, one entity suggested that a multipayer database could help with feedback to physicians in groups of all sizes, including small practices, because the higher number of patients would generate sufficient data for calculating reliable measures. However, one entity acknowledged that using all-payer databases requires more time for merging data from different payers in different formats, and another entity noted the challenges of customizing reports for each medical group’s patient population. Private entities told us that physicians valued frequent feedback on their performance so that they have time to make practice changes that may result in better performance by the end of the measurement period. In response, these entities typically provided feedback reports on an interim basis throughout the measurement period. Interim reports typically covered a 1-year performance period, and were commonly issued on a rolling monthly, quarterly, or semiannual schedule. Entities also noted that frequent reporting throughout the period updated physicians on their performance so that year-end results were better anticipated and understood. Some entities in our study elected to issue interim reports that build up to the 12-month performance period by continually adding data from month to month. Entities that built interim reports from preliminary data, which may not account for all final claims, told us that such data start to become useful about 3 to 6 months into the performance year. They also stated that, although the interim reports may be limited by the use of rolling or incomplete data, providers generally seek this information for early identification of gaps in care. Private entities generally offered additional report detail intended to enhance physicians’ understanding of the information contained in their reports or in response to physician requests for more data. 
Private entity officials told us that, because physicians prefer dynamic reports with as much detail as possible, they generally sent reports that can be expanded to show individual physician or patient-level data. Some entities formatted their reports to include summary-level information on quality and cost measures in labeled sections, with supplemental information following the summary data. Other entities provided additional reports or supplemental data through a web portal that allowed providers to see individual physician or patient-level detail. Private entities sent reports in multiple file formats, such as in a spreadsheet, some of which allowed report recipients to sort their data. Entities in our study also offered resources designed to assist physician groups with actionable steps they can take to improve in the next performance period. Most entities told us they offered resources to physician groups, such as consultations with quality improvement professionals, forums for information-sharing, and documents on best practices. For example, one entity’s staff worked directly with practices to improve their results by distributing improvement guidelines for each performance measure included in the feedback report. In addition, the entity’s officials told us they also convened workgroups to review trend information and paid particular attention to differences between medical homes and nonmedical homes. CMS has provided feedback to increasing numbers of physician practices each year in order to eventually reach all physicians. Each medical group’s chosen method of quality data submission determined the quality measures included in its report, to which CMS added health care costs and certain outcomes measures. CMS’s report generation process took slightly longer than that of most private entities in our study, and the agency did not provide interim performance data during the measurement period. 
CMS feedback reports have included information to assist providers in interpreting their performance results. Unlike the private entities we contacted, which selected a limited set of physicians to receive feedback reports, CMS is mandated to apply the VM to all physicians by 2017. Therefore, the agency faces certain challenges not faced by private entities as it has expanded its feedback program to reach increasing numbers of physicians. In preparation for implementation of the VM, CMS provided performance reports to nearly 4,000 medical groups in September 2013. In 2014, CMS plans to disseminate reports to physicians in practices of all sizes. As of September 2013, CMS had not yet determined how to report to smaller groups and physicians in solo practices. According to CMS, the decision not to present VM information to smaller groups stemmed from concerns regarding untested cost metrics and administrative complexity. CMS agreed with a 2012 GAO recommendation to develop a strategy to reliably measure the performance of solo and small physician practices, but has not yet finalized such a strategy. Under the CMS approach to performance reporting, the content of feedback reports related to quality measures may vary across providers. Unlike our selected private entities, the agency has allowed physician groups to select the method by which they will submit quality-of-care data, which, in turn, determines the measures on which they receive feedback. CMS used claims data for a consistent set of measures in all of its feedback reports for performance on cost and outcomes. For the CMS 2013 reports, medical groups submitted data on quality measures to CMS via a web interface or through a qualified registry; if a group did not select either of these options, the agency calculated quality measures based on claims data. Both CMS and private entities focused on preventive care and management of specific diseases. Web interface. 
Quality measures under this method pertain to care coordination, disease management, and preventive services. CMS required groups reporting via the web interface to submit data on 17 quality measures—such as hemoglobin A1C levels for control of diabetes—for a patient sample of at least 218 beneficiaries. Registries. Some groups submitted data for quality measures via qualified registries—independent organizations, typically serving a particular medical specialty, that collect and report these data to CMS. CMS required groups reporting to a qualified registry to submit at least three measures—such as whether cardiac rehabilitation patients were referred to a prevention program—for at least 80 percent of patients. Administrative claims. As a default, if a group did not report via web interface or qualified registry, CMS calculated quality measures using claims data. In September 2013, the majority of groups with 25 or more EPs—nearly 90 percent—received quality scores based on claims data. CMS calculated performance on a set of 17 quality indicators, including several composite measures. For example, the diabetes composite measure included several different measures of diabetes control. Regardless of the method a group selected to submit quality-of-care data, CMS used claims to calculate three outcomes measures—two ambulatory care composite measures and hospital readmission. One ambulatory care composite included hospitalization rates for three acute conditions: bacterial pneumonia, urinary tract infections, and dehydration. Another composite included hospitalization rates for three chronic conditions: diabetes, chronic obstructive pulmonary disease (COPD), and heart failure. CMS included cost measures—several of which differed from the measures private entities in our study reported to physicians—in all 2013 feedback reports (see fig. 7). 
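As a hedged illustration of the composite measures mentioned above, the sketch below combines component rates with an equal-weight average. The aggregation rule and the component values are assumptions for illustration only; CMS's actual composite construction is not modeled here.

```python
# Hypothetical sketch of a diabetes-style composite measure built from
# several component rates. The equal-weight average and the component
# values are invented; this is not CMS's actual composite methodology.
component_rates = {
    "a1c_controlled": 0.80,
    "blood_pressure_controlled": 0.70,
    "ldl_controlled": 0.60,
}

def composite_score(components):
    """Equal-weight average of component measure rates."""
    return sum(components.values()) / len(components)

print(round(composite_score(component_rates), 2))
```

A composite of this kind condenses several related indicators into a single score, which is convenient for a one-line report entry but, as with any aggregation rule, the weighting choice affects the result.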
Using claims data, CMS calculated an overall measure of the cost of care as the total per capita costs for all beneficiaries attributed to each physician group. CMS separately reported total per capita costs for attributed beneficiaries with any of four chronic conditions: diabetes, heart failure, COPD, or coronary artery disease. This contrasts with the private entities, which typically used a more limited set of measures focused on physicians’ generic drug prescribing rates and hospital utilization. CMS’s report generation process took longer than that of most private entities in our study because it required more steps. While most health insurers generated performance reports in 4 to 6 months, CMS issued reports about 9 months after the end of the January to December 2012 reporting period. To produce its 2013 physician feedback reports using administrative claims, CMS began with the standard claims run-out period followed by intervals for provider attribution, measure calculation, risk-adjustment, and quality assurance. (See fig. 10.) CMS officials said they allowed a 3-month run-out interval to account for providers’ late-year claims submissions. After the run-out period, CMS required 5 to 6 months for a series of additional tasks needed to prepare the data for reporting. For groups that submitted data to CMS via the web interface or registry options, CMS gave these groups 3 months to submit such data after the end of the 12-month performance period. CMS then calculated the measures for these options over the next several months. Although FFS beneficiaries see multiple physicians, CMS attributed each beneficiary to a single medical group through its yearly attribution process. It used the claims for the 12-month reporting period to determine which group provided the beneficiary the most primary care and then assigned responsibility for performance on quality and cost measures to that group. 
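A minimal sketch of the attribution step described above, assuming invented claim records and a simplified plurality rule (the group with the most primary care claims wins); CMS's actual algorithm has additional rules not modeled here.

```python
# Hypothetical sketch: attribute each beneficiary to the single medical
# group that furnished the most primary care during the reporting period.
# Claim records are invented; ties are broken arbitrarily in this sketch.
from collections import Counter

# Each record: (beneficiary_id, group_id) for one primary care service
primary_care_claims = [
    ("bene1", "groupA"), ("bene1", "groupA"), ("bene1", "groupB"),
    ("bene2", "groupB"), ("bene2", "groupB"),
    ("bene3", "groupA"),
]

def attribute(claims):
    """Map each beneficiary to the group with the plurality of their claims."""
    per_bene = {}
    for bene, group in claims:
        per_bene.setdefault(bene, Counter())[group] += 1
    return {bene: counts.most_common(1)[0][0] for bene, counts in per_bene.items()}

print(attribute(primary_care_claims))
# {'bene1': 'groupA', 'bene2': 'groupB', 'bene3': 'groupA'}
```

The key property this illustrates is that even though bene1 saw two groups, responsibility for that beneficiary's quality and cost results lands on exactly one group.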
Following attribution, the agency risk-adjusted the cost measures to account for differences in beneficiary characteristics and complexity, and standardized the cost measures by removing all geographic payment adjustments. Finally, CMS officials said they performed data checks to ensure accuracy before the reports were disseminated. According to health insurers and collaboratives, physicians find that frequent feedback enables them to improve their performance more quickly; CMS, however, did not provide physicians with interim performance feedback. With only annual feedback from CMS, physicians may be missing an opportunity to improve their performance on a more frequent basis. Asked if more frequent reporting was considered, CMS officials cited concerns about the time it would take to generate each set of reports. With each round, the agency would need to attribute all beneficiaries to a medical group, risk-adjust and standardize the cost measures, and compute the benchmarks for each measure. In addition, providing interim reports on quality data would require certain providers to report more frequently. For example, providers who submit via registry would need to finalize their data more often than annually. However, experts and CMS officials have stated that, with continued adoption of advanced data reporting technology, CMS may be able to generate reports more frequently. CMS provided general information on its website and through the Medicare Learning Network to assist providers in understanding the performance feedback and VM. Unlike private entities, CMS has not provided tailored guidance or action steps to help providers improve their scores. However, CMS resources included steps to access reports, a review of methodology, suggested ways to use the data in reports, and contact information for technical support. A representative acting on behalf of a medical group could access the group’s QRUR. 
In addition, CMS’s web-based reports allowed providers to access further detail on the Medicare beneficiaries attributed to the group. For example, physicians could view their patients’ percentage of total cost by type of service and hospital admission data. CMS included explanatory information within the reports for providers. In addition to comparative performance data, reports made available in September 2013 included a description of the attribution methods, the number of providers billing in each medical group, information about each attributed patient’s hospitalizations during the year, and other details about the group’s performance. In addition, CMS included within the QRUR a glossary of terms used in the feedback report. Payers have been refining their performance reports for physicians, a key component of their VBP initiatives. Private entities have selectively rolled out their feedback programs, generally applying them to relatively large groups of primary care physicians participating in medical homes and ACOs. Although they are not uniform in their approaches, the entities in our study used their discretion to select a limited number of quality and utilization/cost measures, calculated them using claims data, and used them to assess performance against a variety of benchmarks. In response to physicians’ needs, their feedback reports tended to be frequent, timely, and dynamic. CMS’s approach to performance reporting faces some unique challenges. First, it is driven by the statutory requirement that, by 2017, Medicare pay FFS physicians in groups of all sizes, including specialists, using a VM. Second, the agency has had to develop the feedback program in the context of pre-existing incentive programs, such as PQRS. CMS finalized several key changes to the feedback program for future reporting periods, as it expands the application of the VM to all physicians. 
Specifically, CMS continues to modify program components such as measures and reporting mechanisms as it works to align the reporting and feedback aspects of multiple programs. Despite these program modifications, we found that certain features of private entities’ feedback programs, which are lacking in CMS’s program, could enhance the usefulness of the reports in improving the value of physician care. CMS’s use of a single nationwide benchmark to compare performance on quality and cost ignores richer benchmarking feedback that could benefit physicians. Private entities in our study measured provider performance against several benchmarks. CMS’s reliance on a national average as the sole benchmark precludes providers from gauging their performance relative to their peers in the same geographic area. Without such contextual information, providers lack the feedback to better manage their performance and target improvement efforts. Additionally, CMS disseminates feedback reports only once a year (for example, September 2013). This gives physicians little time (October through December) to analyze the information and make changes in their practices to score better in the next measurement period. The private entities we reviewed sent reports more than once a year, and reported that greater frequency of reporting enabled more frequent improvements. Without interim performance reports, providers may not be able to make needed changes to their performance in advance of their annual VM payment modifications. Our findings also support past GAO recommendations that CMS reward physicians for improvement as well as performance against absolute benchmarks, and develop a strategy to reliably measure solo and small practices, such as by aggregating data. 
As CMS implements and refines its physician feedback and VM programs, the Administrator of CMS should consider taking the following two actions to help ensure physicians can best use the feedback to improve their performance: develop benchmarks that compare physicians’ performance against additional reference points, such as state or regional averages; and disseminate performance reports more frequently than the current annual distribution—for example, semiannually. We provided a draft of this report to HHS for comment. In its written response, reproduced in appendix III, the department generally agreed with our recommendations, and reiterated our observation that the agency faces unique challenges with its mandate to report to Medicare FFS providers in groups of all sizes and across all specialty care areas. HHS conditionally agreed with our recommendation that reporting physician performance using multiple benchmarks would be beneficial, but asked for further information on private entities’ practices and their potential use for Medicare providers. As we stated in the report, private entities generally use two or three different types of benchmarks to provide a variety of performance assessments. We found alternative benchmarks that could enhance Medicare feedback reporting by allowing physicians to track their performance in their own historical and geographic context. For example, some entities’ reports included physician group performance on certain measures relative to their past performance, consistent with a recommendation we previously made to HHS in December 2012. Although it agreed to consider developing benchmarks for performance improvement, HHS has yet to do so. A comparison to past performance allows a medical group to see how much, if at all, it has improved, regardless of where it stands relative to its peers. In this way, CMS can motivate physicians to continuously improve their performance. 
In addition, some entities in our review compared physician performance data to statewide or regional-level benchmarks. Because of the number of Medicare physicians, CMS has extensive performance data, which could enable more robust localized peer benchmarks than any individual health plan could generate. As we noted, such benchmarks reflect more local patterns of care that may be more relevant to physicians than comparisons to national averages alone. HHS further asserted that, because the physician feedback program’s key purpose is to support the national VM program, it is appropriate to limit reporting to a single national benchmark. HHS expressed concern that displaying other benchmarks could be misleading and confusing for the purposes of the VM. However, CMS’s reports provide a group’s VM payment adjustment in a concise, one-page summary, as shown in figure 9. We do not believe that additional benchmark data, displayed separately, would detract from the information provided on the summary page; indeed, such data could enhance the value of the reports for physicians. HHS agreed with our second recommendation to disseminate feedback reports more frequently than on an annual basis. As seen in the private entity practices of using rolling or preliminary data for interim reporting, disseminating reports more frequently can assist physicians in making improvements to their performance before CMS determines their VM payment adjustment. HHS commented that producing more frequent reports would first require modifying the PQRS data collection schedules. For example, groups of EPs that use the web interface and registry options currently are only required to submit data to CMS once a year. The registry option will eventually require groups to submit data to CMS on a quarterly or semiannual basis, and HHS noted that these requirements would have to be synchronized with the timing of data submission through the web interface and EHR options. 
The agency also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Administrator of CMS. The report also is available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This appendix contains information on the similarities and differences between private entities’ and Medicare’s performance reporting to hospitals. The private entities in our study provided feedback through a variety of value-based payment (VBP) initiatives and several entities have made accountable care organizations the focus of their feedback programs. Payers’ efforts to provide feedback to hospitals on their performance are centered on rewarding higher-quality and lower-cost providers of care. We followed the same methodology for comparing how private entities and the Centers for Medicare & Medicaid Services (CMS) conduct performance feedback reporting for hospitals as we did for examining physician-focused feedback programs. We interviewed representatives of the nine selected private entities about their feedback reporting to hospitals, if any, with regard to report recipients, data sources used, types of performance measures and benchmarks, frequency of reporting, and efforts to enhance the utility of performance reports. One statewide health care collaborative in our review was established through a partnership between the state medical society and hospital association, and only provides feedback reports to hospitals. We similarly requested sample feedback reports for hospitals. 
We interviewed CMS officials and obtained CMS documentation on its hospital feedback reporting activities, and compared these to private entity efforts. We also reviewed a sample CMS hospital feedback report from July 2013. CMS’s hospital VBP efforts over the past decade have evolved to provide performance feedback to a range of hospital types, with a focus on acute care hospitals. In 2003 the agency began with a quality incentive demonstration program designed to see whether financial incentives to hospitals were effective at improving the quality of inpatient care, and to publicly report that information. Since then, a number of laws have required CMS to conduct both feedback reporting and VBP programs for hospitals. These included the following: The Medicare Prescription Drug, Improvement, and Modernization Act of 2003, which required the establishment of the Hospital Inpatient Quality Reporting Program, a pay-for-reporting initiative. The act also required CMS to make downward payment adjustments to hospitals that did not successfully report certain quality measures. That downward payment adjustment percentage was increased by the Deficit Reduction Act of 2005. The Patient Protection and Affordable Care Act established Medicare’s Hospital VBP Program for inpatient care provided in acute care hospitals. Under this program, CMS withholds a percentage of all eligible hospitals’ payments and distributes those funds to high-performing hospitals. In reviewing current feedback reporting practices, we found that private entities and CMS report to hospitals on similar performance measures and that entities’ feedback generally contains publicly available data. Table 1 compares features of the hospital feedback produced by those private entities in our study that report to hospitals through a VBP initiative and CMS’s hospital VBP program. 
Table 2 summarizes the number of quality measures included in sample physician feedback reports we received from private entities in our study. These entities used their discretion to determine which measures to include in their reports. We analyzed the measures focused on quality of care and categorized them into common areas. In addition to the contact named above, individuals making key contributions to this report include Rosamond Katz, Assistant Director; Sandra George; Katherine Perry; and E. Jane Whipple. Electronic Health Record Programs: Participation Has Increased, but Action Needed to Achieve Goals, Including Improved Quality of Care. GAO-14-207. Washington, D.C.: March 6, 2014. Clinical Data Registries: HHS Could Improve Medicare Quality and Efficiency through Key Requirements and Oversight. GAO-14-75. Washington, D.C.: December 16, 2013. Medicare Physician Payment: Private-Sector Initiatives Can Help Inform CMS Quality and Efficiency Incentive Efforts. GAO-13-160. Washington, D.C.: December 26, 2012. Medicare Program Integrity: Greater Prepayment Control Efforts Could Increase Savings and Better Ensure Proper Payment. GAO-13-102. Washington, D.C.: November 13, 2012. Medicare Physician Feedback Program: CMS Faces Challenges with Methodology and Distribution of Physician Reports. GAO-11-720. Washington, D.C.: August 12, 2011. Value in Health Care: Key Information for Policymakers to Assess Efforts to Improve Quality While Reducing Costs. GAO-11-445. Washington, D.C.: July 26, 2011. Medicare: Per Capita Method Can Be Used to Profile Physicians and Provide Feedback on Resource Use. GAO-09-802. Washington, D.C.: September 25, 2009. Medicare: Focus on Physician Practice Patterns Can Lead to Greater Program Efficiency. GAO-07-307. Washington, D.C.: April 30, 2007.
Health care payers—including Medicare—are increasingly using VBP to reward the quality and efficiency instead of just the volume of care delivered. Both traditional and newer delivery models use this approach to incentivize providers to improve their performance. Feedback reports serve to inform providers of their results on various measures relative to established targets. The American Taxpayer Relief Act of 2012 mandated that GAO compare private entity and Medicare performance feedback reporting activities. GAO examined (1) how and when private entities report performance data to physicians, and what information they report; and (2) how the timing and approach CMS uses to report performance data compare to that of private entities. GAO contacted nine entities—health insurers and statewide collaboratives—recognized for their performance reporting programs. Focusing on physician feedback, GAO obtained information regarding report recipients, data sources used, types of performance measures and benchmarks, frequency of reporting, and efforts to enhance the utility of performance reports. GAO obtained similar information from CMS about its Medicare feedback efforts. Private entities GAO reviewed for this study selected a range of measures and benchmarks to assess physician group performance, and provided feedback reports to physicians more than once a year. Private entities almost exclusively focused their feedback efforts on primary care physician groups participating in medical homes and accountable care organizations, which hold physicians responsible for the quality and cost of all services provided. They limited their feedback reporting to those with a sufficient number of enrollees to ensure the reliability of reported measures. The entities decided on the number and type of measures for their reports, and compared each group's performance to multiple benchmarks, including peer group averages or past performance. 
All the entities used quality measures, and some also used utilization or cost measures. Because of the variety of quality measures and benchmarks, feedback report content differed across the entities. Some entities noted that in addition to national benchmarks, they compared results to state or regional level rates to reflect local patterns of care which may be more relevant to their physicians. Most health insurers spent from 4 to 6 months to generate their performance reports, a period that allowed them to amass claims data as well as to make adjustments and perform checks on the measure calculations. Commonly, private entities issued interim feedback reports, covering a 1-year measurement period, on a rolling monthly, quarterly, or semiannual schedule. They told GAO that physicians valued frequent feedback in order to make changes that could result in better performance at the end of the measurement period. Feedback from the Centers for Medicare & Medicaid Services (CMS) included quality measures determined by each medical group, along with comparison to only one benchmark, and CMS did not provide interim reports to physicians. The agency has phased in performance feedback in order to meet its mandate to apply value-based payment (VBP) to all physicians in Medicare by 2017, a challenge not faced by private entities. In September 2013, CMS made feedback reports available to 6,779 physician groups. While private entities in this study chose the measures for their reports, CMS tied the selection of specific quality measures to groups' chosen method of submitting performance data. Although both CMS and private entities focused their feedback on preventive care and management of specific diseases, CMS's reports contained more information on costs and outcomes than some entities. 
While private entities employed multiple benchmarks, the agency only compared each group's results to the national average rates of all physician groups that submitted data on any given measure. CMS's use of a single benchmark precludes physicians from viewing their performance in fuller context, such as relative to their peers in the same geographic areas. CMS's report generation process took 9 months to complete, several months longer than health insurers in the study, although it included more steps. In contrast to private entity reporting, CMS sent its feedback report to physicians once a year, a frequency that may limit physicians' opportunity to make improvements in advance of their annual payment adjustments. The Department of Health and Human Services generally concurred with GAO's recommendations and asked for additional information pertaining to the potential value of using multiple benchmarks to assess Medicare physicians' performance. The Administrator of CMS should consider expanding performance benchmarks to include state or regional averages, and disseminating feedback reports more frequently than the current annual distribution.
A structured settlement is the payment of money for a personal injury claim in which at least part of the settlement calls for future payment. The payments may be scheduled for any length of time, even as long as the claimant’s lifetime, and may consist of installment payments and/or future lump sums. Payments can be in fixed amounts, or they can vary. The schedule is structured to meet the financial needs of the claimant. For years, structured settlements have been widely used in the tort area to compensate severely injured, often profoundly disabled, tort victims. Cases generally involve medical malpractice and other personal injury. The Federal Tort Claims Act (FTCA) is the statute by which the United States authorizes tort suits to be brought against itself. With certain exceptions, it makes the United States liable for injuries caused by the negligent or wrongful act or omission of any federal employee acting within the scope of his or her employment, in accordance with the law of the state where the act or omission occurred. Generally, a tort claim against the United States is barred unless it is presented in writing to the appropriate federal agency within 2 years after the claim accrues. In addition, the National Childhood Vaccine Injury Act of 1986, as amended, created a mechanism for compensating persons injured by certain pharmaceutical products. The act established the National Vaccine Injury Compensation Program (VICP) as an alternative to traditional product liability and/or medical malpractice litigation for persons injured by their receipt of one or more of the standard childhood vaccines required for admission to schools and by certain employers. VICP is “no-fault.” That is, claimants need not establish that the vaccine was defective, or that any degree of negligence was involved in its administration. The only liability-related question is causation—did the vaccine cause the injury for which compensation is sought? 
The industry standard of practice requires the use of a licensed broker or insurance agent to obtain a settlement annuity. DOJ’s Civil Division estimated that structured settlements constitute between 1 and 2 percent of all settlements in litigated tort cases. Brokers receive no direct compensation from the government; rather, they are compensated by the insurance company from whom the annuity is purchased. The insurance company typically pays the brokers’ commissions, which amount to 3 or 4 percent of the annuity premium. The government attorney negotiating the case is responsible for selecting the broker. Structured settlements for the federal government are negotiated by the Civil Division’s torts attorneys, Assistant United States Attorneys (AUSAs), or agency attorneys. AUSAs are authorized to settle certain cases. An agency may not settle a tort claim for more than $25,000 without the prior written approval of the Attorney General or her designee, unless the Attorney General has delegated to the head of the agency the authority to do so. To ascertain DOJ’s policies and guidance for the selection of settlement brokers, we reviewed the Torts Branch handbook, Damages Under the Federal Tort Claims Act (section V: Settlements), and other relevant documents pertaining to broker selection policies. In addition, to obtain information about the procedures used to select brokers, we interviewed attorneys in DOJ’s Civil Division and representatives from the Executive Office for United States Attorneys (EOUSA). To obtain information on broker selection policies and guidance used by federal agencies, we asked DOJ to identify other federal agencies that handled structured settlement claims. DOJ identified six agencies—HHS and VA; the Air Force, Army, and Navy; and the U.S. Postal Service. At each of the six agencies, we met with officials who were responsible for negotiating structured settlement claims. 
We discussed their policies and procedures for selecting structured settlement brokers and asked them what factors they considered during the selection process. In addition, we obtained and reviewed a copy of the Army’s standard operating procedures pertaining to structured settlements. Also, we asked the six agencies to supply information pertaining to the number of structured settlements since May 1997. To compile the list of DOJ’s structured settlement annuities between May 1, 1997, and May 1, 1999, we used data DOJ collected from the Civil Division and the United States Attorneys’ offices. The Civil Division’s data came from the Torts Branch, which routinely handles structured settlements. The United States Attorneys’ data were collected by EOUSA and include all the data received by EOUSA as of August 12, 1999. As of that date, 34 of the 94 United States Attorneys’ offices had reported annuity settlements during the relevant time period. We did not verify the accuracy of the information collected from the Torts Branch or EOUSA. To gain a broader understanding of structured settlements, we met with the Executive Vice President of the National Structured Settlement Trade Association (NSSTA). We obtained information concerning brokers working with federal structured settlements. We did our audit work between June and December 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the United States Attorney General or her designee. Also, in January we discussed the contents of this report with VA’s Assistant General Counsel; the U.S. Postal Service’s Claims Division Counsel; and the Army’s Torts Claims Division Chief. We also obtained comments for the Air Force and Navy from DOD’s Senior Report Analyst for the GAO Affairs Directorate. In addition, we spoke with HHS’ Associate General Counsel. The written and oral comments we received are discussed near the end of the report. 
Although DOJ had established policies and guidance for the selection of structured settlement brokers, the policies and guidance did not include an internal control requiring attorneys to document their reasons for selecting a specific broker. Similarly, although the six agencies we reviewed said they generally followed DOJ’s policy guidance for selecting a structured settlement broker, they were not required to document their reasons for selecting a particular broker. None of these agencies documented the reasons why they selected particular brokers. DOJ had established policies and guidance governing the selection of structured settlement brokers, but it did not require that the reasons for selecting a specific broker be documented. On July 16, 1993, the Director of the Civil Division’s Torts Branch, which is responsible for FTCA claims and litigation, issued a memorandum that was intended to supplement the guidance on structured settlements in the Damages Handbook and to codify previous informal guidance on the selection of structured settlement brokers. Neither the Damages Handbook nor the memorandum addressed documenting the reasons for selecting a specific broker. On June 30, 1997, the Acting Associate Attorney General expanded the policy guidance by issuing a memorandum to United States Attorneys. However, the new guidance did not address documenting the reasons for broker selections. Generally, the 1997 policy guidance outlined procedures concerning the selection of structured settlement brokers. These included the following: Every broker was to be given an opportunity to promote its services. No lists of “approved,” “preferred,” or “disapproved” brokers were to be maintained. Brokers who performed well in the past were to be appropriately considered for repeated use; however, such use could not be to the exclusion of new brokers. 
Attorneys were expected to look to supervisory attorneys for assistance; however, final broker selection was the responsibility of the attorney negotiating the settlement. When a structured settlement in an FTCA case included a reversionary interest in favor of the United States, the Torts Branch’s FTCA staff was to be consulted to maintain appropriate records and ensure consistency. Any activity tending toward an appearance of favoritism, any action contrary to any of the above rules, or any activity incongruent with the spirit of the memorandum was to be scrupulously avoided. According to agency officials, attorneys sometimes asked each other about their experiences with a particular broker, but the attorney negotiating the case is responsible for making the final broker selection, and is not required to consult with the FTCA staff. DOJ officials told us that in the absence of a requirement to do so, they did not document the reasons for selecting particular settlement brokers. The Comptroller General’s guidance on internal controls in the federal government, Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1), requires that all transactions and significant events be clearly documented and that the documentation be readily available for examination. The documentation should appear in management directives, administrative policies, or operating manuals and may be in paper or electronic form. All documentation and records should be properly managed and maintained. During 1999, DOJ provided its policy guidance to the six selected agencies in our review—HHS and VA; the Air Force, Army, and Navy; and the Postal Service. Generally, the selection processes the agencies said they had were similar to DOJ’s (e.g., the attorney negotiating a case made the final decision; no list of approved or disapproved structured settlement brokers was maintained). 
Five agencies in our review identified various factors they considered when selecting a structured settlement broker. For example: HHS, Postal Service, and VA officials told us that they tended to select brokers with offices in the Washington, D.C., area. According to VA officials, the use of distantly located brokers created problems because of (1) differences in time zones and (2) the inability of nonlocal brokers to physically conduct work on short notice. Air Force, Navy, and VA officials told us that they put considerable weight on an impressive presentation given by the broker’s firm. HHS, Navy, Postal Service, and VA officials said they looked at the broker’s knowledge and experience in handling structured settlement cases for the federal government and based their selections on positive past experiences. Navy and Postal Service officials said they looked for brokers with a reputation for being dependable and responsible. In addition, the Army had established supplemental policies governing the selection of structured settlement brokers. According to the Army’s standard operating procedures, brokers were to be selected on a case-by-case basis according to the following criteria: (1) the broker’s ability to become a member of the negotiating team, participate in negotiations, and travel at his or her own expense; (2) the selecting administrative officer’s previous interviews with or knowledge of the broker; (3) the broker’s ability to present his or her views verbally (if the case requires in-person negotiations); and (4) the broker’s experience if the administrative officer is inexperienced. In certain more specialized cases, the selecting administrative officer’s choice of a specific broker must be approved by a higher authority. Even though federal agencies we surveyed said they provided policy guidance on broker selection, none of them required documentation of the reasons for selecting a structured settlement broker. 
In the absence of this requirement, none documented the reason for selection. DOJ has selected several structured settlement brokerage companies to handle most of the structured settlement claims. Between May 1, 1997, and May 1, 1999, DOJ used 27 different structured settlement brokerage companies to settle 242 claims for $236 million. (See table 1 for the number and total annuity costs of annuity settlements handled by brokers.) Of the 242 claims awarded, 70 percent (169 cases) were awarded to 4 brokerage companies. One of the four companies was awarded 30 percent (72 cases) of the total number of cases. The remaining 23 companies were awarded 30 percent of the total number of cases. Because DOJ did not document the reasons for selecting a particular broker, DOJ officials could not specifically say why certain companies received more business than others. However, as noted previously, DOJ officials cited a variety of reasons for selecting a specific structured settlement broker, such as experience, dependability, and knowledge of federal structured claims. According to DOJ, the companies frequently have multiple offices and brokers that compete with each other within the same company. Thus, a simple count of the number of companies could be misleading. DOJ has developed policies and guidance for selecting structured settlement brokers and disseminated this information to the six other federal agencies with authority to handle structured settlement claims that we contacted. However, the policies and guidance lacked an internal control requiring that the reasons for selecting a broker be documented and readily available for examination. This is important because without documentation of transactions or other significant events, DOJ cannot be certain that its policies and guidance on selecting structured settlement brokers are being followed. 
Further, without documentation on the reasons settlement brokers were selected, it is more difficult to avoid the appearance of favoritism and preferential treatment in a situation where some brokers get significantly more business than others. We recommend that the Attorney General of the United States direct the Director of the Torts Branch responsible for FTCA claims and litigation, Civil Division, to (1) develop an adequate internal control to ensure that the reasons for selecting structured settlement brokers are always fully documented and readily available for examination and (2) disseminate this guidance to federal agencies, including those in our survey, responsible for handling structured settlement claims. We requested comments on a draft of this report from the Attorney General or her designee. On January 18, 2000, the Acting Assistant Attorney General, Civil Division, provided us with written comments, which are printed in full in appendix I. The Justice Department expressed appreciation that the report “outlines the many steps undertaken by the Department to ensure fairness in the broker selection process.” DOJ said its existing policies and guidance to ensure that the selection of brokers is fair are effective. Therefore, it disagreed with our recommendation that DOJ implement an adequate internal control to ensure that the reasons for selecting a specific structured settlement broker are always fully documented and readily available for examination. DOJ noted that the Comptroller General’s Standards for Internal Control in the Federal Government specify that management should design and implement internal controls based on the related costs and benefits. It stated that it was DOJ’s belief that the costs of implementing the recommendation, in terms of diversion of attention from substantive issues and generation of extra paperwork, would substantially outweigh any benefits. 
We recognize that determining whether to implement a particular internal control involves a judgment about whether the benefits outweigh the costs. We believe that the benefits of implementing our recommendation would outweigh any associated costs and paperwork. As stated in this report, these benefits are twofold: requiring documentation would help enable DOJ to (1) determine if its policies and guidance on selecting brokers are being followed and (2) protect DOJ from charges of favoritism towards a specific broker or brokers. Further, noting the reasons for selecting a specific broker in the case file at the time the selection is made would appear to require only minimal paperwork or cost. For example, a concise memo to the file stating the rationale for the selection would suffice. DOJ also expressed concern that, although we observed that most structured settlements have been awarded to a relatively small number of companies, we did not mention that many of the selected companies had multiple offices and brokers that competed for the same work. According to DOJ, by “treating as a monolith all brokers affiliated with the major companies, the draft report ignores the actual way those businesses are run and runs the risk of significantly understating the actual number of brokers competing to handle DOJ structured settlements.” In response, we have noted that according to DOJ, because structured settlement companies may have multiple offices and brokers, the number of companies could be misleading. Data were not readily available for us to determine the extent to which multiple brokers within a single company competed for the same settlement. Nevertheless, the number and cost of settlements by brokerage company show that DOJ placed the majority of its settlement work with a relatively small number of companies—a situation that still could open it up to charges of favoritism towards these companies. 
Cognizant officials at HHS, VA, the Air Force, Army, Navy, and the Postal Service said they generally agreed with the information presented in the report. The Army provided additional information to clarify its policy for selecting structured settlement brokers, and we incorporated this information in the report where appropriate. We are sending copies of this report to Senator Orrin G. Hatch, Chairman, and Senator Patrick J. Leahy, Ranking Minority Member, Senate Committee on the Judiciary; Representative Henry J. Hyde, Chairman, and Representative John Conyers, Jr., Ranking Minority Member, House Committee on the Judiciary; and the Honorable Janet Reno, the Attorney General. We are also sending copies to other interested congressional parties. Copies will also be made available to others upon request. If you or your staff have any questions, please call me or Weldon McPhail on (202) 512-8777. Key contributors to this assignment were Mary Hall and Jan Montgomery. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Or visit:
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on the Department of Justice's (DOJ) policy and guidance for selecting structured settlement brokers, focusing on: (1) the policies and guidance for selecting structured settlement brokers used by DOJ and six selected agencies; and (2) a list of the structured settlement brokerage companies used by DOJ and the number of settlements awarded to each company since May 1997. GAO noted that: (1) in 1993 and 1997, DOJ issued policies and guidance on the selection of structured settlement brokers to promote fairness and to avoid the appearance of favoritism; (2) DOJ officials told GAO that its policies and guidelines permit some discretion and that when selecting a particular broker, they generally relied on such factors as reputation, past experience, knowledge, and location; (3) however, DOJ officials also told GAO they were unable to specify reasons why attorneys selected particular brokers to settle specific cases, because DOJ did not require documentation of these decisions; (4) without an internal control requiring the reasons for selecting a particular settlement broker be documented and readily available for examination, it is more difficult to verify that selection policies and guidelines were followed and, in turn, to avoid the appearance of favoritism and preferential treatment; (5) overall, the six federal agencies surveyed described policies and guidance in selecting structured settlement brokers that were similar to DOJ's; (6) none of the agencies had internal controls requiring their attorneys to document their reasons for selecting a specific broker; (7) one agency had a written supplemental policy governing the use of structured settlements, but it did not require documentation of decisions; (8) officials at the other five federal agencies said they also generally relied on such factors as reputation, past experience, knowledge, and location for selecting a particular structured settlement 
broker; (9) however, the reasons why particular brokers were selected for specific cases were not documented; (10) GAO's review of the list of structured settlement brokerage companies used by DOJ and the number of settlements assigned to each company showed that DOJ selected a few companies to handle most of its structured settlement business; (11) according to DOJ, the companies frequently have multiple offices and brokers that compete with each other within the same company; (12) thus, a simple count of the number of companies could be misleading; (13) although DOJ used 27 different structured settlement companies to settle 242 claims for about $236 million between May 1, 1997, and May 1, 1999, 70 percent (169 cases) were awarded to 4 brokerage companies; and (14) of the remaining 23 companies, none were awarded more than 17 cases each.
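The concentration figures cited above can be checked with a quick calculation. This is an illustrative sketch only; the case counts are the ones the report itself cites.

```python
# Quick check of the broker-concentration figures cited above:
# DOJ settled 242 claims between May 1, 1997, and May 1, 1999,
# of which 169 were awarded to 4 brokerage companies.
total_cases = 242
top4_cases = 169

top4_share = top4_cases / total_cases
print(f"Share handled by top 4 companies: {top4_share:.0%}")
```

The computed share of roughly 70 percent matches the figure reported in the text.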
Contracts, grants, cooperative agreements, and other transactions are among the tools DOD has to support or acquire research. The instruments are not interchangeable, but rather are to be used according to the nature of the research and the type of government-recipient relationship desired. Contracts are procurement instruments and, as such, are governed by the Federal Acquisition Regulation (FAR) and DOD procurement regulations. Contracts are to be used when the principal purpose of the project is the acquisition of goods and services for the direct benefit of the federal government. In contrast, grants, cooperative agreements, and other transactions are assistance instruments used by DOD when the principal purpose is to stimulate or support research and development efforts for more public purposes. Assistance instruments are generally not subject to the FAR or DOD procurement regulations, thereby providing DOD a considerable degree of flexibility in negotiating terms and conditions with the recipients. Between fiscal years 1990 and 1994, DOD cited the authority provided under 10 U.S.C. 2371 to enter into 72 agreements, of which 56 were categorized as other transactions and 16 as cooperative agreements. At the time of award, the planned contributions by DOD and recipients totaled about $1.5 billion. DARPA has been the primary user of the authority, entering into all 56 agreements that were identified as other transactions. The Air Force and Navy entered into a total of 16 cooperative agreements, while through fiscal year 1994 the Army had not entered into any agreements using this authority. For various policy and implementation reasons, DOD generally did not enter into assistance relationships with commercial organizations prior to the enactment of 10 U.S.C. 2371 in 1989. However, 59—or about 82 percent—of the agreements entered into under the authority of 10 U.S.C. 2371 were with consortia composed primarily of for-profit firms. 
This high number of consortia-led projects was due in part to the fact that most of the programs under which the agreements were entered into—such as the Technology Reinvestment Project (TRP)—required or expected that some type of partnership arrangement be formed. Nearly all of the remaining agreements were entered into with single commercial firms. Appendix I provides additional information on various recipient characteristics. The use of cooperative agreements and other transactions appears to provide some opportunities to remove barriers between the defense and civilian industrial bases, in particular by attracting firms that traditionally did not perform research for DOD. In a previous report, we pointed out that government acquisition requirements have caused some companies to separate their defense and commercial research and development organizations or to decline to accept government research and development funds. The flexibility inherent in these instruments has enabled DOD to attract firms that have historically declined to participate in research projects sponsored under a contract—such as Cray Research, Hewlett-Packard, and the commercial division of IBM—to participate in one or more projects either as a consortium member or as a single party. Overall, based on information provided by DOD and recipient officials, we estimate that about 42 percent of the 275 commercial firms that participated in 1 or more agreements were firms that traditionally had not performed research for DOD. DOD officials stressed that a contracting officer cannot elect to use a cooperative agreement or other transaction to attract a nontraditional firm when the principal purpose of the research is for the direct benefit of the government. 
However, they indicated that for projects in which the use of such instruments was appropriate, the ability to attract such firms was a significant benefit, especially in those areas in which these firms’ technological capabilities exceed those possessed by traditional defense firms. For example, under one $60 million Air Force cooperative agreement to develop computer interface standards, 14 firms participated, including 5 that traditionally had not performed research for DOD. The consortium manager told us that the commercial firms involved would not have participated had DOD imposed standard FAR clauses for certified cost and pricing data or intellectual property provisions. The Air Force program manager noted that the consortium has both large, multinational firms like IBM, as well as small, specialized companies working together. Representatives from the consortium and the Air Force believed that the mix of participants facilitated information exchange and consensus building on the interface standards. Discussions with DOD officials and recipients indicated that the specific terms and conditions that led to the decision to participate varied from company to company. For some, such as IBM, the key factor was the ability to use their commercial accounting systems rather than establish systems or practices that complied with government-unique requirements; for others, such as Hewlett-Packard, the key factors were the ability to limit the government’s access to and audits of the firm’s financial records and the increased flexibility in the allocation of intellectual property rights. A 1994 other transaction with a Hewlett-Packard-led consortium provides insights into how the authority was used to negotiate terms and conditions affecting both financial management and intellectual property matters that are atypical of contracts, grants, or standard cooperative agreements. 
We had previously reported that Hewlett-Packard declined to accept government research and development funds to protect its technical data rights. In this case, however, Hewlett-Packard responded to a DARPA announcement soliciting proposals to advance the state of the art in the manufacture of more affordable optoelectronics systems and components. According to DARPA, this technology will enable data transmissions at high rates from high performance parallel processors at far lower costs than current technology allows. Under the agreement, the financial management provisions require consortium members to maintain adequate records to account for federal funds received under the agreement, and account for the members’ contributions toward the project. The members are required to have an accounting system that complies with generally accepted accounting principles, but commercial firms do not have to follow the accounting requirements specified by the FAR. The agreement does not require an annual audit and does not specifically provide DARPA or our office direct access to these records. Rather, for up to 3 years after the agreement is completed, these records may be subject to an audit by an independent auditor, who will provide a report to DARPA. In comparison, under a cost-reimbursement research contract, a traditional defense contractor would typically be required to (1) follow the FAR accounting requirements, (2) undergo audits, and (3) provide the federal contracting agency and our office with access to the contractors’ pertinent records. Similarly, the intellectual property provisions were structured to provide Hewlett-Packard more flexible provisions than typically allowed under contracts, grants, or standard cooperative agreements, all of which are governed by the provisions of Public Law 96-517, as amended. 
The provisions of this act, commonly referred to as the Bayh-Dole Act, provide the government’s general policy regarding patent rights in inventions developed with federal assistance and are intended, in part, to facilitate the commercialization and public availability of inventions. In general, the government’s policy is to allow the contractor to elect to retain title to the subject invention while providing the government a nonexclusive, nontransferable, irrevocable, paid-up license to practice or have practiced for or on behalf of the United States any subject invention throughout the world. Recipients must comply with certain administrative requirements. For example, under a research contract, a contractor is required to notify the government of an invention within 2 months after it has been disclosed to contractor personnel responsible for such matters. Large contractors are required to notify the government in writing whether they intend to retain rights to that invention within 8 months after disclosing the invention to the government, while small businesses are provided up to 24 months. Failure to comply with these administrative requirements provides the government the right to obtain title to an invention. Under the Hewlett-Packard agreement, the intellectual property provisions were structured so that the consortium has up to 4 months after the inventor discloses a subject invention to his company to notify the government; the consortium has up to 24 months after disclosing an invention to the government to inform DARPA whether it intends to take title to inventions arising from the agreement; DARPA agreed to delay exercising its government purpose license rights to inventions in which the consortium retains title until 5 years after the agreement is completed; and the consortium has the authority to maintain inventions and data as trade secrets for an unspecified period of time under certain conditions. 
Further, under the agreement, DARPA does not receive any rights to any technical data produced under the agreement unless DARPA invokes its “march-in” rights. These rights can be invoked only if the consortium fails to reduce an invention to practical application or for other specified reasons, such as when the consortium grants another firm an exclusive right to use or sell the invention in a product that is substantially manufactured outside of the United States or Canada. In combination, these terms provide the consortium additional time to commercialize the technology, while somewhat limiting the government’s rights to that technology. These clauses illustrate the trade-offs that DOD may face as it attempts to attract firms that have not traditionally performed research for the government or move toward more commercial-like practices. Many of the oft-cited barriers to integrating the defense and civilian industrial bases, such as government cost accounting and auditing requirements, rights in technical data, and other government-unique requirements, were instituted to safeguard or protect the government’s and taxpayers’ interests, assist suppliers, or help achieve a variety of national goals. In the Hewlett-Packard example, two of the government’s traditional methods of oversight—audits and access to records—were not included, while the government’s standard rights to information developed under federally sponsored research are somewhat constrained. DARPA and service program management and contracting officials acknowledged that there may be some added risks to the government due to the less stringent oversight requirements. 
However, most indicated that factors such as the recipient’s interest in having the project succeed (given its commercial applications), the recipient’s willingness to cost share, and the tendency of consortium members to police their own agreements (since each member wants to ensure that its partners are contributing as agreed) acted to reduce that risk. Similarly, DARPA officials commented that the added flexibility within the intellectual property provisions would assist the firms’ efforts to develop and commercialize the technology. The instruments appear to be fostering new relationships and practices within the defense industry, especially for those projects being undertaken by consortia. Under a consortium, members mutually develop and sign articles of collaboration, which cover such issues as the consortium’s management structure, each member’s technical and financial responsibilities, and the exchange or protection of each member’s proprietary information. Several officials we interviewed noted that developing the articles of collaboration tended to be contentious and time-consuming. Once the consortium is established, however, DOD officials and recipients indicated that a synergistic effect tended to occur because of the exchange of information under consortia, thereby expediting technology development. For example, recognizing their common interest in developing more affordable composite engine components, General Electric and Pratt & Whitney agreed to collaborate with material suppliers on a $32 million project. These two firms—normally competitors—developed mutually agreeable terms that balanced proprietary interests with research objectives. According to Air Force officials responsible for the effort, there was better information flow and greater technical progress using this joint approach than if each firm had undertaken the project separately. 
Depending on the project, DOD program management and contracting officials viewed themselves as being more actively involved in coordinating and facilitating activities than performing a traditional government oversight function. However, DOD officials and recipients we spoke with noted that negotiating cooperative agreements was significantly different from negotiating contracts, in which most provisions are governed by a standard FAR clause and in which negotiations tend to focus on the cost proposal. These officials noted that since the FAR is not applicable to assistance instruments, more provisions were subject to negotiation. DOD officials and consortia representatives noted that moving away from the traditional reliance on FAR-based contracting approaches and clauses to which they are accustomed and increasing the use of assistance instruments would require significant cultural or mindset changes by both parties. The potential exists for traditional defense contractors to use cooperative agreements and other transactions to develop or use new practices that may be viewed as more efficient or less cumbersome than those employed in acquisition programs under FAR-based contracts. Officials from such firms, however, generally indicated that given their investment in systems that complied with FAR or DOD requirements and the need to use these systems for procurement contracts, developing or using alternative practices was not considered cost-effective. Leveraging the private sector’s financial investment is considered an important element of projects sponsored by a cooperative agreement or other transaction for several reasons. First, by having commercial firms contribute to the cost of developing technologies with both military and commercial applications, DOD hopes to stretch its research funding. Second, cost-sharing is seen as appropriate since commercial firms are intended to benefit financially from sales of the technology. 
Finally, DOD officials indicated that the participants’ contributions demonstrated commitment to the project and enabled less rigid government oversight requirements, since the firms were expending their own resources. Participants’ contributions may be in cash or in-kind contributions, such as the use of equipment, facilities, and other assets. As shown in table 1, the 72 agreements DOD entered into between fiscal years 1990 and 1994 have a current value of about $1.7 billion, toward which participants have agreed to contribute about $1.0 billion, or about 58 percent. Measured another way, participants planned to contribute about $1.39 for each dollar provided by DOD. It should be noted that the government’s actual share of the projects’ costs may be higher than indicated by table 1. Under FAR 31.205-18(e), research costs incurred by contractors under projects entered into under 10 U.S.C. 2371 should be considered allowable IR&D expenses if such costs would have been allowed in the absence of the agreement. Consequently, to the extent that participants use IR&D as their cost-share contributions and include such costs as overhead under other government contracts, a portion of these costs subsequently will be reimbursed by DOD. Participants also were allowed to propose the value of prior research as part of their cost-sharing contributions. These contributions do not represent the cost of prior research, but rather the estimated value of that research for the current project. On several agreements, DOD’s acceptance of prior research enabled firms to offset their current contributions significantly. For example, in one DARPA agreement, 89 percent of the consortia’s planned contribution of approximately $4.7 million was attributable to the value of prior research. Similarly, in three other agreements, more than 50 percent of the consortia’s planned contributions consisted of the value of prior research. 
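The cost-sharing arithmetic above can be sketched as a short calculation. Note this is an illustrative check using the rounded figures cited in the text; the report's "about 58 percent" and "$1.39 for each dollar" come from unrounded totals, so these rounded inputs only approximate them.

```python
# Illustrative check of the cost-sharing arithmetic using the rounded
# figures cited in the text: $1.7 billion current value of the 72
# agreements, of which participants agreed to contribute $1.0 billion.
total_value = 1.7e9        # current value of the 72 agreements
participant_share = 1.0e9  # participants' planned contributions

dod_share = total_value - participant_share
participant_pct = participant_share / total_value  # ~0.59 with rounded inputs
ratio = participant_share / dod_share              # ~1.43 with rounded inputs

print(f"DOD share: ${dod_share / 1e9:.1f} billion")
print(f"Participant share of total: {participant_pct:.0%}")
print(f"Contributed per DOD dollar: ${ratio:.2f}")
```

The same arithmetic applied to the DARPA example gives the prior-research figure cited in the text: 89 percent of a planned contribution of about $4.7 million is roughly $4.2 million attributable to prior research.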
Overall, we estimate that participants’ planned contributions included about $98 million—or about 10 percent—in the form of the value of prior research, with such contributions representing more than 20 percent in 8 of the 72 agreements. DOD officials expressed various views as to whether the value of prior research should be accepted and to what extent. For example, an Army official told us that while prior research should be taken into consideration in evaluating a project’s risk, he had reservations about accepting it as a cost-share contribution. Similarly, a February 1995 Air Force memorandum noted that while it was permissible to accept the value of prior research as a cost-share contribution, Air Force negotiators should proceed with caution. The memorandum noted that evaluating such contributions is complicated and that grant officers have a responsibility to ensure that the prior research is relevant to and brings value to the proposed effort. DARPA officials noted that while cash or concurrent in-kind contributions are the preferred forms of contribution, they believed that the value of prior research is acceptable in certain circumstances, such as when the participant possesses significant technical knowledge but is unable or unwilling to provide cash or in-kind contributions. Accordingly, DARPA officials told us they did not place a limit on the percentage of prior research that could be accepted. Conversely, the Navy generally included a provision in its agreements that limited the contributions of intellectual property, patents, trade secrets, and other nonfederal sources to not more than 10 percent of the participants’ planned cost-sharing contributions. While 10 U.S.C. 
2371 does not prohibit DOD from accepting the value of prior research as part of the participants’ cost share, the legislation requires that to the extent that the Secretary deems practicable, the funds provided by the government under the cooperative agreement or other transaction should not exceed the total amount provided by other parties to the agreement. Accepting prior research in lieu of concurrent financial or in-kind contributions may obscure each party’s relative contributions in the current project. Our review identified two emerging issues pertaining to instrument selection and structure of cooperative agreements and other transactions. First, we found that DARPA always designated its agreements as “other transactions,” while the services always employed “cooperative agreements.” While the instruments share many similar characteristics, DARPA officials indicated that a DARPA other transaction did not require participants to be subject to annual audit and generally did not require recipients to provide our office with access to their pertinent financial records. In contrast, Air Force officials indicated that their cooperative agreements generally required an annual audit, though not necessarily access to records by our office, while Navy officials indicated that their agreements generally required both. The selection of different instruments, coupled with different treatment of specific issues among the services, has led to some confusion among firms that were negotiating agreements with both DARPA and the services. Second, there remains some disagreement within DOD regarding intellectual property provisions. While DOD officials agree that cooperative agreements are subject to the provisions of the Bayh-Dole Act, there is less consensus regarding other transactions. DARPA officials maintain that other transactions entered into under the authority of 10 U.S.C. 
2371 are not subject to the Bayh-Dole Act because, in their opinion, the act applies only to contracts, grants, and standard cooperative agreements. In support, they noted that Congress has twice commented favorably on DARPA’s use of other transactions to provide more flexible intellectual property provisions. However, a representative from the Office of Naval Research’s Office of Corporate Counsel argued that the provisions of the Bayh-Dole Act are applicable to such agreements. The representative stated that it was his office’s position that the act was to be interpreted broadly as to which types of instruments were covered. Reaching resolution on the issue may be important as DOD attempts to expand its research base. For example, while Air Force and Navy officials noted that they have been able to negotiate intellectual property provisions with participants that are consistent with Bayh-Dole, DARPA officials contended that the ability to provide more flexible intellectual property provisions than would be possible under Bayh-Dole was instrumental in reaching their agreements. DOD is updating its February 1994 draft guidance on the use of these instruments, in part to provide more consistency in the selection and structure of the agreements. However, DOD was unable to provide an estimate of when the revised guidance would be issued. Because inconsistent selection of a particular instrument and treatment of specific clauses may unnecessarily increase confusion for government and industry users and may hinder their effective use, we recommend that the Secretary of Defense ensure that DOD’s revised guidance on the use of cooperative agreements and other transactions promotes increased consistency among DOD components on the selection and structure of these instruments. 
In particular, the guidance should specifically address the extent that the value of prior research should be accepted as part of a participant’s cost-sharing contribution and the extent to which these instruments are subject to the provisions of the Bayh-Dole Act and under what conditions. In commenting on a draft of this report, DOD generally concurred with the thrust of our findings and recommendation. DOD noted that it shared our assessment that the instruments, if used appropriately, could be valuable tools that help DOD take advantage of technology development in the commercial sector. DOD’s comments are presented in their entirety in appendix III. DOD officials also provided technical and editorial comments on a draft of this report. We have incorporated their comments where appropriate. We are sending copies of this report to other congressional committees; the Secretaries of Defense and Commerce; the Administrator, National Aeronautics and Space Administration; and the Director, Office of Management and Budget. Copies will be provided to other interested parties upon request. Please contact me at (202) 512-4587 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV. The Department of Defense (DOD) entered into 72 agreements using the authority of 10 U.S.C. 2371 between fiscal years 1990 and 1994. Of these agreements, 59, or about 82 percent, were with consortia, which comprised some 400 participants. Based on information provided by DOD officials and participants, we estimate that about two-thirds of consortia participants were for-profit commercial firms. Of the 13 agreements with single participants, 12 agreements were awarded to for-profit firms. Overall, we estimate that about 42 percent of the 275 commercial firms that participated in one or more agreements were firms that traditionally had not performed research for DOD. 
Table I.1 shows selected characteristics of participants of cooperative agreements and other transactions between fiscal years 1990 and 1994. To determine the number of cooperative agreements and other transactions DOD entered into using the authority of 10 U.S.C. 2371, we reviewed the annual reports and notifications DOD submitted to Congress from fiscal years 1990 to 1993. As the fiscal year 1994 report was not available during our review, we requested information from DARPA and the services regarding their fiscal year 1994 usage. We included in our review only those other transactions that were used principally in an assistance-type relationship with commercial firms or consortia for government-sponsored research projects. Consequently, we excluded one agreement that was entered into under the authority provided by section 845 of the National Defense Authorization Act for Fiscal Year 1994 (P.L. 103-160, Nov. 30, 1993). This authority is distinct from that of 10 U.S.C. 2371 in that it enables DARPA to conduct prototype projects that are directly relevant to weapons or weapon systems proposed to be acquired or developed by DOD. Further, we did not attempt to identify to what extent DOD had used the authority of 10 U.S.C. 2371 to enter into other assistance-type relationships, such as in cases where DOD loaned equipment to firms to conduct research or in reimbursable arrangements that allow a firm to conduct experiments aboard a government experimental launch vehicle. To characterize the agreements and analyze each participant’s financial or technical contributions to the agreement, we reviewed the agreement file, which generally included the agreement, articles of collaboration, the contracting officer’s agreement analyses, legal review, funding documentation, and other pertinent information. 
We summarized key elements of the agreement, including the recipient’s planned cost-sharing information, and requested that DOD verify our interpretation or provide additional information. We did not attempt to independently verify the financial information we obtained. Further, we did not attempt to determine the extent to which participants were using DOD funds to conduct projects that would have been undertaken in the absence of DOD funding. To obtain the views on the benefits and risks of using such instruments, we interviewed program management and contracting officials from DARPA, the Navy, and the Air Force, as well as representatives from various participants. We also interviewed senior management individuals from each of the services and DARPA, and from the following organizations: Office of the Director, Defense Research and Engineering; Office of the Director, Defense Procurement; Office of the Assistant Secretary of Defense (Economic Security); and Office of the Deputy Under Secretary of Defense (Acquisition Reform). Some DOD officials cautioned against making broad comparisons between the terms and conditions found in contracts with those found in cooperative agreements and other transactions since the principal purpose of the instruments—acquisition and stimulation, respectively—differs significantly. However, as acknowledged by DOD officials, DOD’s relationship with commercial firms has generally been through procurement contracts. Consequently, comparing the instruments can be illustrative of the types of changes and issues that may arise as business practices evolve. We conducted our work from May 1994 to December 1995 in accordance with generally accepted government auditing standards. Rae Ann Sapp, James R. Wilson, and Shari A. Kolnicki. 
GAO evaluated the Department of Defense's (DOD) use of cooperative agreements and other transactions to further its objectives of: (1) helping to reduce the barriers to integrating the defense and civilian sectors of the industrial base; (2) promoting new relationships and practices within the defense industry; and (3) allowing the government to leverage for defense purposes the private sector's financial investment in research and development of commercial products and processes. GAO also discussed two emerging issues concerning the selection and structure of the instruments. GAO found that: (1) cooperative agreements and other transactions appear to have contributed to reducing some of the barriers between the defense and civilian industrial bases by attracting firms that traditionally did not perform research for DOD; (2) the instruments have enabled the use of more flexible financial management and intellectual property provisions than those typically found in contracts and grants; (3) the instruments appear to be fostering new relationships and practices within the defense industry, especially for projects being undertaken by consortia; (4) DOD has partially offset its own costs by sharing project costs with recipients, but the DOD practice of accepting the value of recipients' prior research efforts in lieu of concurrent financial or in-kind contributions may increase the actual DOD monetary share of the project's costs; (5) differences between DARPA and the military services regarding the selection of instruments and treatment of specific provisions have led to some confusion among firms that were negotiating agreements with different DOD components; and (6) DOD is revising its interim regulations to provide clearer guidance on the instruments' selection, use, and structure.
IRS has two categories of traditional audits for large corporations: Coordinated Industry Cases (CIC) and Industry Cases (IC). CAP includes both CICs and ICs, with the majority being CICs (about 88 percent). The distinction between CIC and IC is based on criteria such as the amount of gross assets, amount of gross receipts, number of operating entities, multiple industry status, total foreign asset amounts, and amount of foreign taxes paid. All corporations that are designated as CICs are audited every year by a team of IRS staff while corporations that are designated as ICs are not held to the same requirement. Before IRS created CAP, large corporations waited about 50 months on average from the time a tax return was filed to IRS closing of the traditional audit under CIC. This did not include additional time after the audits were closed to resolve any appeals of the audit results—a process that usually took another 2 or 3 years. Therefore, obtaining certainty about the status of tax issues reported on the tax return for a given tax year may have taken 6 or 7 years—if not longer. This created significant burdens and problems for taxpayers and IRS. As already noted, taxpayers had to reflect uncertainty about their tax liability on their financial statements. This meant that taxpayers had to put aside reserves in a provisional account meant for changes in taxes owed after an audit. Large corporation tax officials we spoke to said that when corporations have a large reserve, it lowers their expected profits. In addition, IRS might not become aware of emerging tax issues associated with new types of business transactions or products for years. Traditional audits of large corporations still take years, as shown in figure 1. IRS launched CAP in March 2005 as a pilot program to assess its viability as an alternative approach to large corporate tax administration. 
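The pre-CAP timeline figures above can be checked with simple arithmetic. This is an illustrative sketch only, using the approximate durations cited in the text.

```python
# Rough arithmetic for the pre-CAP timeline described above: about
# 50 months on average from return filing to close of the traditional
# CIC audit, plus roughly 2 to 3 years (24 to 36 months) to resolve
# any appeals of the audit results.
audit_close_months = 50
appeal_months_range = (24, 36)

low, high = ((audit_close_months + m) / 12 for m in appeal_months_range)
print(f"Total wait for tax certainty: roughly {low:.1f} to {high:.1f} years")
```

The result is roughly 6 to 7 years, consistent with the report's statement that obtaining certainty may have taken 6 or 7 years, if not longer.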
For the pilot, IRS selected voluntary, corporate taxpayers based on factors including their working relationship with IRS, a history of not taking aggressive tax positions, and not having major tax litigation under way. IRS created pre-filing and post-filing stages for CAP. In pre-filing, a taxpayer works with the CAP team to resolve issues of tax controversy and to determine the appropriate tax treatment of completed transactions before filing the tax return. CAP requires contemporaneous exchange of information about a taxpayer’s completed transactions and other events that may affect tax liability. In post-filing, the CAP audit team reviews the tax return to confirm adherence to agreements on how tax issues should be reported. In exchange for increased cooperation and transparency, the taxpayer may attain tax certainty sooner and with less administrative burden, compared to traditional post-filing audits. After 6 years of experience with the CAP pilot, IRS management decided to make CAP permanent in March 2011. IRS had received positive feedback from taxpayers and employees on the CAP pilot, indicating they wanted CAP to continue. Officials who had worked at IRS told us that IRS did exploratory research on the pilot’s impacts on timeliness as well as on other areas. In making CAP permanent, IRS created three phases by adding the Pre-CAP and Compliance Maintenance phases to the existing CAP phase. To avoid confusion, we will refer to the entire program (all three phases) as CAP, to distinguish it from the CAP phase defined below.

Pre-CAP phase: To enter CAP, all audits for prior tax years must be resolved. The large corporate taxpayer and IRS work together in the traditional post-file audit process to close audits of tax returns filed for previous tax years within agreed upon time frames. Closing these audits can take significant time if, as suggested by figure 1, audits are open for many tax years.
CAP phase: The large corporate taxpayer and IRS work to identify tax issues and resolve how they are to be reported for the current tax year in the pre-filing stage. IRS then confirms these issues are reported as agreed upon during the post-filing review.

Compliance Maintenance phase: Large corporate taxpayers who continue to meet the CAP phase eligibility requirements and expectations may progress to Compliance Maintenance after at least 1 year in the CAP phase. Taxpayers must continue to be compliant and transparent in filing tax returns and pose no new tax compliance risks. IRS intends to spend less time examining these returns when filed as well as significantly reducing the scope and depth of the pre-filing review. IRS’s guidance explains that the criteria for considering a taxpayer for Compliance Maintenance include the complexity and number of issues, and the taxpayer’s history of CAP compliance, cooperation, and transparency. Taxpayers are expected to continue disclosing completed business transactions, information on material items and issues that occur during the CAP year, and their proposed tax positions on these disclosed items. The Internal Revenue Manual states that taxpayers can be removed from Compliance Maintenance depending on the complexity, number of issues, or other factors.

Not every large corporation is eligible for CAP. Table 1 shows the criteria for each phase. Taxpayers must sign a memorandum of understanding (MOU) that outlines their agreement to meet the CAP requirements. For Pre-CAP, the MOU will be effective for the first Pre-CAP year and continue until audits of tax returns for all transition years are closed and the CAP phase selection criteria are fulfilled. For the CAP phase, the MOU applies to a single tax year known as the CAP year. For Compliance Maintenance, the CAP phase MOU is used. To continue in the CAP and Compliance Maintenance phases, taxpayers must reapply and execute a new MOU for each CAP year.
Of about 60,000 large corporate taxpayers considered for audit in fiscal year 2012, 161 were in CAP. Figure 2 shows the growth of CAP since its inception in 2005. CAP taxpayers work with team and account coordinators who serve as the primary IRS representative for all federal tax matters. In the Pre-CAP phase, taxpayers work with an assigned team coordinator; in the CAP phase or Compliance Maintenance phase, taxpayers work with an assigned account coordinator. Other IRS team members include IRS examiners, the team manager, the territory manager, and the Director of Field Operations, as well as various IRS specialists (as needed) who offer technical knowledge about tax issues and large corporations. CAP work on a tax return may be concluded in one of three ways:

1. Full Acceptance Letter—IRS provides the taxpayer with a full acceptance letter if the pre-filing stage concludes with a taxpayer fully complying with the MOU and with resolutions of all identified items and issues through factual clarification, closing agreement(s), issue resolution agreement(s), or a combination of these. The letter constitutes written confirmation that IRS will accept the taxpayer’s return if it is filed consistent with those resolutions and no additional items or issues are discovered during the post-filing review that were not previously disclosed.

2. Partial Acceptance Letter—IRS provides the taxpayer with a partial acceptance letter if the pre-filing stage concludes with a taxpayer fully complying with the MOU, but IRS and the taxpayer cannot resolve all identified items or issues before the tax return is filed. The letter constitutes written confirmation that IRS will accept the taxpayer’s return for the resolved items and issues if the post-filing review shows the return is filed consistent with the agreement(s).

3.
Termination Letter—IRS can issue a termination letter at any time, which results in the case being withdrawn from CAP and transferred to a traditional post-filing examination. This can happen for several reasons, including the taxpayer: not adhering to information document request (IDR) response times, not responding to IDRs, or not providing complete IDR responses; not engaging in meaningful or good-faith issue resolution discussions; failing to thoroughly disclose prior, concurrent, and ongoing transactions; failing to disclose a tax shelter or listed transaction; failing to disclose an investigation or litigation that limits access by the IRS to current corporate records; or not adhering to any other MOU commitment(s). Two other new reporting requirements, one financial and one tax, have been introduced since the 2005 pilot and may reinforce CAP. The first, Financial Accounting Standards Board Interpretation Number 48, Accounting for Uncertainty in Income Taxes, implemented in June 2006, requires public and private corporations subject to the U.S. Generally Accepted Accounting Principles to disclose uncertain tax positions. Several of the tax experts we spoke to said this gives IRS an additional source for verifying the completeness of the disclosures that corporations make in the CAP phase. The second, Schedule UTP, Uncertain Tax Position Statement, has a similar disclosure purpose. IRS implemented it in 2010 as required reporting by taxpayers that have assets equaling or exceeding $50 million. Along with their tax returns, these taxpayers are to disclose information on the Schedule UTP about tax positions that may affect federal income tax liabilities or are included in audited financial statements. According to the August 2011 UTP guidance and procedures for CAP, CAP teams should compare the list of taxpayer disclosures made to them during the CAP year to the issues identified on the Schedule UTP to verify that the disclosures match. 
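The Schedule UTP cross-check described above amounts to a set comparison between two lists of issues. A minimal sketch, with invented issue names that do not reflect IRS systems or data formats:

```python
# Hypothetical illustration of the Schedule UTP cross-check: the CAP
# team compares issues the taxpayer disclosed during the CAP year with
# the positions reported on Schedule UTP. Issue names are invented and
# do not reflect IRS data formats.

def compare_disclosures(cap_disclosures, utp_positions):
    """Return (UTP positions never disclosed in CAP,
    CAP disclosures with no matching UTP position)."""
    cap, utp = set(cap_disclosures), set(utp_positions)
    return sorted(utp - cap), sorted(cap - utp)

undisclosed, extra = compare_disclosures(
    cap_disclosures={"R&D credit", "transfer pricing", "domestic production"},
    utp_positions={"R&D credit", "transfer pricing", "basis adjustment"},
)
print(undisclosed)  # UTP positions the CAP team never saw
print(extra)        # CAP disclosures not on the Schedule UTP
```

In this sketch, items in the first list (positions on the Schedule UTP that were never disclosed to the CAP team) would be the ones a team flags for follow-up with the taxpayer.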
IRS envisioned that CAP would yield significant benefits to participating taxpayers and IRS by resolving tax issues in a pre-filing environment. IRS’s vision is reflected in the following seven CAP goals:

- ensure taxpayer compliance;
- reduce overall examination time;
- increase currency for taxpayers (e.g., working on the unfiled return for the most recent tax year during the CAP pre-filing stage);
- enhance the accurate, efficient, and timely resolution of complex tax issues;
- increase audit coverage by providing more efficient use of resources;
- reduce taxpayer administrative burden; and
- increase certainty for taxpayers.

IRS account coordinators for CAP cases, other IRS officials, and non-IRS officials (such as corporate tax experts and large corporation tax officials) told us what they believe the benefits of CAP are if its goals are achieved. Because the goals are not all mutually exclusive, some benefits can be perceived as relating to more than one goal.

Ensure taxpayer compliance: IRS and non-IRS officials indicated that compliance may be higher under CAP. Specifically, transactions disclosed by taxpayers in CAP may not have been disclosed or otherwise detected during traditional audits. For example, some account coordinators and large corporation tax officials who participated in our focus groups said that the risk of getting removed from CAP sometimes prompts large corporations to over-disclose issues and transactions.

Increase currency for taxpayers: IRS and non-IRS officials generally agreed that CAP increases currency. Some of the officials informed us that currency reduces taxpayers’ cost to retrieve information as well as their interest expenses on unpaid tax liabilities. Currency means that the appropriate documents and corporate staff knowledgeable about a particular transaction are more likely to be available for consultation.
Enhance the accurate, efficient, and timely resolution of complex tax issues: IRS and non-IRS officials generally agreed that currency could also enhance the resolution of disputes over tax issues for the reasons given above. Having documents and responsible corporate staff available should make it easier for IRS to understand the tax implications of a transaction. In addition, a full or partial acceptance letter can speed the post-filing review of resolved issues.

Increase audit coverage by providing more efficient use of IRS resources: IRS and non-IRS officials generally agreed that CAP has the potential to reduce IRS resources spent auditing large corporations relative to what would have been spent on traditional audits. Some of the officials stated that most of the resource savings would occur in the Compliance Maintenance phase. Any resources saved through Compliance Maintenance could be reinvested in increasing the audit coverage of CAP and non-CAP taxpayers. As an example of resource savings, an account coordinator informed us that she was able to devote more time and resources to other audits because one of her taxpayers was in Compliance Maintenance.

Reduce taxpayer administrative burden: CAP taxpayers who participated in IRS’s 2012 survey responded that they were satisfied that CAP had reduced their administrative burden compared to traditional post-file audits. While CAP may reduce taxpayer administrative burden over time, corporate tax experts indicated that CAP requires significant upfront investment from taxpayers because all audits for prior tax years must be resolved.

Increase certainty for taxpayers: IRS and non-IRS officials we spoke with generally agreed that CAP can increase the certainty that taxpayers have about their tax obligations when tax returns and financial statements are filed. They specifically emphasized the value to corporations of not having to maintain reserves for potential future tax liabilities.
A corporate tax expert told us that certainty is so valuable to corporations that they may be willing to concede certain tax issues to achieve it. Increased certainty has other effects, such as making it less likely that state tax returns will need to be amended, which also saves time and resources. Both IRS officials and corporate tax experts informed us that for CAP to achieve such benefits, taxpayers must be willing to disclose transactions to IRS. For this reason, transparency and communication are emphasized in the MOU. The corporate tax experts we interviewed readily acknowledged that not all large corporate taxpayers may be good candidates for CAP. As examples, they mentioned corporations that consistently take aggressive tax positions, have numerous complex transactions, or are unwilling to be cooperative and transparent with IRS. Although CAP started 8 years ago and IRS is looking to expand it further, IRS has not evaluated CAP’s effectiveness, whether its goals are being accomplished, or whether it should be expanded and, if so, to what extent. The evidence in the preceding section about the benefits of CAP is based on viewpoints of those in our focus groups, which cannot be generalized to the entire population of IRS and non-IRS officials. The Senate Committee on Appropriations, in June 2012, urged IRS to develop additional performance measures to evaluate the effectiveness of CAP. The Treasury Inspector General for Tax Administration (TIGTA) in February 2013 recommended that IRS develop an evaluation plan for CAP, and IRS agreed to do so in responding to TIGTA’s report. IRS officials stated in May 2013 that they will start to develop a plan for evaluating CAP during the fourth quarter of fiscal year 2013, but they did not state when the plan would be completed or the evaluation would be done. IRS faces challenges in evaluating CAP. For example, it is not easy to measure compliance by large corporations or concepts like certainty and administrative burden.
IRS has attempted exploratory research on the impacts of CAP, but these efforts were not conclusive. However, without evaluations of CAP or its expansion, IRS does not have a credible base of information for understanding the current effectiveness of CAP, such as for saving resources, or for making decisions about any future expansion of CAP. IRS has guidelines for conducting program evaluations that could be applied to CAP. According to the guidelines, a program evaluation attempts to provide accurate, objective, and trustworthy information about program performance and to help assess the quality or value of a program. Conducting a program evaluation can help answer questions decision makers have about the basic reasons the program exists and the continuing need for it. The guidelines also allow for evaluating the costs and benefits of continuing, revising, or ending a program. The Government Performance and Results Act of 1993 (GPRA), as enhanced by the GPRA Modernization Act of 2010, requires agencies to set results-oriented goals, establish performance measures and related performance indicators with targets in meeting such goals, and report progress. Performance measures and targets are established so that actual results can be compared to planned performance or goals. As shown in table 2 and discussed below, IRS cannot show the extent to which CAP goals are being met because some goals do not have measures and none have specific targets.

Ensure taxpayer compliance: IRS officials we interviewed said that, despite multiple attempts, they do not have measures or targets related to CAP taxpayer compliance because of the difficulty of measuring it. IRS has directly measured compliance for different groups of taxpayers in the past, such as through audits of randomly selected taxpayers. IRS has not used this approach with large corporations because it may be a burden both for IRS and taxpayers.
An alternative to measuring the direct impact of CAP on compliance is the use of proxy measures. During the pilot as well as after making CAP permanent, IRS tried various approaches to measure a proxy for compliance in paying taxes owed, such as the taxes paid before and after audit by CAP taxpayers compared to non-CAP taxpayers. One approach attempted to create peer groups for CAP taxpayers to compare taxes paid by CAP taxpayers with those paid by non-CAP peers. Another approach explored ways to estimate the impact of CAP on effective tax rates. However, the results were not conclusive because of the challenges of controlling for other factors that could affect taxes paid. IRS has not researched the potential use of another proxy measure for compliance: using the results from measuring the quality of work by CAP teams to identify material tax issues and to develop support for how these issues should be reported on tax returns. Without compliance measures or related proxy measures and accompanying targets, it will be difficult for Congress and IRS to know whether CAP is meeting its goal for ensuring taxpayer compliance.

Reduce overall examination cycle time: IRS has defined a measure for this goal, but its related data are inconsistent. This measure combines IRS’s traditional cycle time metric—the number of months or years it takes to close an audit after a tax return is filed—with the number of months for pre-filing work. Measuring the pre-filing time could provide insights on whether a taxpayer is a good candidate for CAP. For example, a long pre-filing period could be an indicator of complex tax issues or delays in providing information to IRS. However, as discussed in the next section, we found inconsistencies in the CAP data for use in measuring this goal. In particular, we could not resolve inconsistencies in the reporting of cycle time data by different sources.
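The combined cycle-time measure described above can be illustrated with a small calculation. The date arithmetic below is a sketch under stated assumptions; the exact start and end points IRS uses for each stage are not defined here, and the dates are invented:

```python
# Illustrative arithmetic for the combined cycle-time measure: months of
# pre-filing work plus the traditional post-filing metric (months from
# return filing to audit close). Dates and stage boundaries are invented
# for the sketch, not IRS definitions.
from datetime import date

def months_between(start, end):
    """Whole-month difference between two dates, ignoring day-of-month."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def cap_cycle_time(prefiling_start, return_filed, audit_closed):
    pre = months_between(prefiling_start, return_filed)
    post = months_between(return_filed, audit_closed)
    return pre, post, pre + post

pre, post, total = cap_cycle_time(
    prefiling_start=date(2012, 1, 1),
    return_filed=date(2012, 9, 15),
    audit_closed=date(2013, 9, 15),
)
print(pre, post, total)  # 8 12 20
```

Splitting the measure into its pre-filing and post-filing components, as here, is what would allow an analyst to see which stage drives a long total, such as whether a long pre-filing period signals complex issues or slow information delivery.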
Increase currency for taxpayers: LB&I has data on currency for all CIC audits, but it has not broken out currency data for CAP pre-filing work or post-filing audits. IRS officials stated that they believe that all CAP cases are current (which could be a CAP target). However, they had no measures for either the pre-filing or post-filing stages or related targets.

Enhance the accurate, efficient, and timely resolution of complex tax issues: IRS does not have a measure or target for this goal. IRS knows that CAP taxpayers are satisfied with their issue resolutions. According to IRS’s 2012 taxpayer satisfaction survey, 80 percent of the CAP respondents were satisfied with issue resolution in CAP. It is useful to know how taxpayers feel about issue resolution; however, IRS does not have a system to track issues that are being audited and resolved to provide data to help assess the goal. For example, IRS does not track whether the same issues repeat year after year or whether IRS and the CAP taxpayer agreed on a method to be used to resolve the issue in future tax years. Having such tracking to measure success in meeting this issue resolution goal could reduce the costs of future audits.

Increase audit coverage by providing more efficient use of resources: IRS partially measures this goal by tracking certain CAP resources spent, such as the average staff hours for pre-filing and post-filing activities, but does not measure whether any savings are used to increase audit coverage.

Reduce taxpayer administrative burden: IRS partially measured this goal through its 2012 customer satisfaction survey to determine whether CAP taxpayers believed that the administrative burden had decreased. However, IRS does not have a measure of the actual burden on CAP taxpayers because it has not been able to identify how to capture taxpayer burden (other than relying on survey responses).
IRS has been attempting to more directly measure how CAP affects taxpayer burden but has faced challenges. In 2013, IRS’s Research, Analysis, and Statistics (RAS) division analyzed whether CAP taxpayers were less likely to use outside tax preparation services. While RAS concluded that payments for such services were significantly lower for corporations that had been in CAP for at least a year, this result might reflect CAP corporations choosing to rely more on in-house staff and less on outside experts for tax-related work, rather than an effect of CAP itself. RAS also analyzed responses to its tax year 2009 tax compliance cost survey to compare compliance costs for CAP and non-CAP corporations while controlling for other factors that might influence the costs. The results of this analysis were inconclusive.

Increase certainty for taxpayers: IRS relies on its customer satisfaction survey to measure whether taxpayer certainty has increased. In its 2012 survey, 90 percent of CAP respondents said that tax certainty has increased “somewhat” or “a lot” for their corporations. In addition, an IRS official informed us in June 2013 that IRS has begun to use data it has been collecting on the percentage of taxpayers that are issued full acceptance or no change letters by tax and fiscal years to reflect certainty.

IRS officials stated that the measures being used for CAP, such as cycle time, staff hours, and taxpayer satisfaction, are derived from the traditional LB&I balanced measures related to business results and customer satisfaction for all of its audits. These traditional balanced measures do not directly link to all CAP goals. According to an IRS official, IRS has not identified targets for any of its CAP measures because it does not traditionally set official targets for pilot programs. The official informed us in July 2013 that IRS is developing targets, which may be in place by the start of fiscal year 2014 pending approval by LB&I leadership.
Even if IRS implements targets, IRS does not have an objective basis for determining whether CAP is effective because some goals do not have measures. To implement a performance measurement system, it is necessary to collect the right data and ensure their accuracy. Standards for Internal Control in the Federal Government calls for a variety of control activities for data being collected to help ensure that actions are taken to reduce risk. Even though some CAP measures exist, the IRS data being collected are not consistent or complete enough to use in determining whether the CAP goals are met. Such inconsistencies arise in part from data collection processes. An IRS official indicated that data used by IRS to manage CAP are manually transferred from multiple databases into an Excel spreadsheet. Without controls or documentation to ensure and validate their consistency, we could not be assured of the validity and accuracy of the data that IRS uses to create the CAP report used by LB&I leadership. Furthermore, this limits the ability to do analyses, such as:

- measuring the average hours spent to close a tax return and staff time charges for a tax year by pre-filing and post-filing activities. Specifically, tracking staff time charges on pre-filing activities for CAP could not be done because the code used by IRS to track some charges also included non-CAP activities; an analysis of CAP time charges would help determine whether CAP saves resources compared to traditional audits.

- comparing taxpayers’ returns filed before and after they entered CAP to analyze any similarities and differences, accounting for when taxpayers entered CAP and moved from the Pre-CAP phase to the CAP phase. Doing such an analysis may help IRS determine whether the right taxpayers are in CAP.

- replicating IRS management reports and TIGTA data tables on the total number of CAP taxpayers, using taxpayer identification numbers, tax years, and form types.
- reconciling differences in the reported number of months for cycle time. An IRS official provided a report used by LB&I leadership involved with CAP showing that the average cycle time for closed CAP audits is about 20 months for both pre-filing and post-filing work, with more than half of this time spent on post-filing work rather than pre-filing work. However, TIGTA reported average cycle time for closed audits to be about 24 months, with more than half of this time spent on pre-filing work rather than post-filing (over 15 months and about 8.5 months, respectively).

IRS’s Issue Management System (IMS) has a field for the type of issues audited, but IRS officials said that they did not have assurance that the issue data were consistent and complete. When CAP audit teams entered issue data into IMS, IRS did not know whether the teams were consistently selecting the proper issue codes. IRS officials indicated that entering the issue into IMS can be a significant burden because IRS has tens of thousands of issue codes to consider and one issue can be associated with multiple codes from which to choose. Furthermore, IRS does not require this issue field to be completed and does not know whether CAP teams entered all issues audited. As a result, the risk increases that data are not consistent and complete. Thus, neither we nor IRS could determine the number, types, and dollar amounts of recommended tax changes for issues addressed in CAP. In addition, data in CAP case files are not tracked and thus are not readily available to assess whether the issue resolution goal is being met. IRS has not created a system to compile data from the issue resolution agreements between CAP teams and taxpayers to guide how issues are to be reported on tax returns when corporations file; instead, these agreements are kept in a case file for each CAP taxpayer.
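Discrepancies like those between the LB&I report and TIGTA's cycle-time figures are the kind of thing a routine cross-source reconciliation would surface. A hypothetical sketch, keyed by taxpayer identifier and tax year; the sources, keys, and half-month tolerance are assumptions, not IRS practice:

```python
# A minimal sketch of reconciling the same metric reported by two data
# sources, keyed by taxpayer ID and tax year. The sources, keys, and
# half-month tolerance are assumptions, not IRS practice.

def reconcile(source_a, source_b, tolerance=0.5):
    """Return (keys whose values differ by more than tolerance,
    keys present in only one source)."""
    mismatches, missing = [], []
    for key in sorted(set(source_a) | set(source_b)):
        if key not in source_a or key not in source_b:
            missing.append(key)
        elif abs(source_a[key] - source_b[key]) > tolerance:
            mismatches.append(key)
    return mismatches, missing

cycle_time_a = {("TIN1", 2011): 20.1, ("TIN2", 2011): 18.0}
cycle_time_b = {("TIN1", 2011): 24.3, ("TIN3", 2011): 19.5}
print(reconcile(cycle_time_a, cycle_time_b))
```

Routinely running this kind of check whenever data are hand-copied between systems is one way an agency could catch the inconsistencies described above before reports reach leadership.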
An IRS official agreed to attempt to start electronically tracking issues reported on issue resolution agreements and said in July 2013 that the mandatory entry of data from the issue resolution agreements into the IMS would be recommended to the LB&I Business Review Board in late July 2013. Thus, the date and methodology for starting to track the issues resolved during pre-filing are not known. Even if implemented, the planned tracking would not cover all other audited issues and whether they were resolved during post-filing. Without such CAP-wide data, it will be difficult for IRS to know whether it is resolving issues uniformly across CAP taxpayers for a current tax year or addressing the same complex tax issues year after year for a CAP taxpayer. Even if IRS compiled and tracked data on all audited issues, whether or not resolved, IRS does not have a way to identify and track emerging tax issues. Compared to traditional audits, CAP could help identify new issues sooner for multiple CAP taxpayers because CAP occurs in real time. Without this tracking, IRS cannot readily determine how quickly new issues are identified or resolved in CAP. Thus, IRS could miss opportunities to detect new types of noncompliance earlier and to share the information IRS-wide. Account coordinators in our focus groups informed us they used an informal process to relay information on emerging issues, such as to the local office counsel, but the extent to which this information circulates across IRS is not known. Similarly, as of May 2013, IRS was not tracking data on its CAP goal for resource savings to invest in increased audit coverage. Without the data on the savings and a related plan for using any savings to increase audit coverage, IRS cannot be assured that the saved resources are invested effectively on either CAP or non-CAP taxpayers with high compliance risks.
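Centralized issue tracking of the kind discussed above would make recurring issues easy to surface. A sketch with invented taxpayer names, years, and issue labels; IRS's actual issue codes and systems are not modeled here:

```python
# Sketch of centralized issue tracking that would surface recurring
# issues across CAP years. Taxpayer names, years, and issue labels are
# invented; IRS's actual issue codes and systems are not modeled.
from collections import defaultdict

def recurring_issues(records):
    """records: iterable of (taxpayer, cap_year, issue) tuples.
    Returns issues seen for the same taxpayer in more than one year."""
    seen = defaultdict(set)
    for taxpayer, year, issue in records:
        seen[(taxpayer, issue)].add(year)
    return {k: sorted(v) for k, v in seen.items() if len(v) > 1}

records = [
    ("CorpA", 2011, "transfer pricing"),
    ("CorpA", 2012, "transfer pricing"),
    ("CorpA", 2012, "R&D credit"),
    ("CorpB", 2012, "R&D credit"),
]
print(recurring_issues(records))
```

Grouping the same records by issue rather than by taxpayer would serve the related purpose mentioned above: spotting an emerging issue that appears across multiple CAP taxpayers in the same year.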
IRS officials explained that a possible way to track the effect of the saved resources would be to count the number of cases that account coordinators work, expecting that some coordinators will be able to work non-CAP cases that would not have been worked. IRS has been moving taxpayers into the Compliance Maintenance phase. In this phase, IRS intends to save resources by streamlining its reviews of corporate tax returns. IRS expects that Compliance Maintenance will start generating resource savings in 2014. The number of taxpayers in Compliance Maintenance has increased from 10 in early fall 2012 to 44 as of June 2013. Although Compliance Maintenance is growing, IRS has not assessed whether it is working as intended and has not developed a plan to expand it. Standards for program management state that specific desired outcomes or results should be conceptualized, defined, and documented in the planning process as part of a road map, along with the appropriate steps and time frames needed to achieve those results. The standard practices also call for assigning responsibility and accountability for ensuring the results of program activities. Such a plan also could discuss how IRS will assimilate these taxpayers and ensure that IRS has the capacity to manage the increased workload within the expedited time frames envisioned for corporate taxpayers placed into Compliance Maintenance. Without such an assessment and plan to guide expansion, IRS will not know whether Compliance Maintenance will help meet CAP goals, such as reducing the resources devoted to pre- and post-filing, or increasing audit coverage while maintaining compliance. The lack of a documented plan for expanding Compliance Maintenance could create risks for CAP in ensuring that this phase includes the right types of taxpayers. 
Our reviews of IRS documents and our focus groups held with account coordinators indicated that confusion exists about how IRS would monitor taxpayers in Compliance Maintenance and how it would remove taxpayers that no longer qualify. Account coordinators that we included in our focus groups expressed uncertainty over how IRS would monitor CAP taxpayers, given that some taxpayers could potentially “game” Compliance Maintenance because this phase requires a lighter review using fewer resources than the CAP phase. Account coordinators were also uncertain about the process and criteria for removing taxpayers when we spoke to them in March and April 2013. In addition, they told us that a taxpayer that had been accepted into Compliance Maintenance for tax year 2013 was expected to remain in Compliance Maintenance until all of the work was completed for that tax year, regardless of whether their circumstances changed (such as tax personnel changes in the taxpayer’s business or discovery of new complex tax issues). We brought the concerns from account coordinators that participated in our focus groups to the attention of IRS. In response, IRS management informed us that they revised their draft user guide for Compliance Maintenance at the end of May 2013 and clarified the process and criteria for moving taxpayers from Compliance Maintenance back to the CAP phase. An IRS official explained that moving a taxpayer out of Compliance Maintenance would require a serious compliance risk. According to the May 2013 draft user guide, taxpayers would remain in Compliance Maintenance for the entire pre-filing phase. If their compliance risk changes, such taxpayers will remain in Compliance Maintenance for that calendar year but may be moved back to the CAP phase in the next year.
Additionally, a March 2013 notification letter to CAP taxpayers approved for Compliance Maintenance highlighted that if changes occurred (such as to transparency, cooperation, or tax return filing activity) IRS might revisit whether they should remain in Compliance Maintenance. IRS has not yet verified whether these recent changes are uniformly understood by account coordinators and resolve their concerns, which they shared in our focus groups during March and April 2013. At the end of June 2013, IRS indicated agreement that verifying the changes to the guidance may help address the account coordinators’ concerns. Verifying that the changes it made resolved account coordinators’ concerns could provide IRS with reasonable assurance that its staff are including the right types of taxpayers in Compliance Maintenance and are monitoring and removing them appropriately. The potential savings from Compliance Maintenance relate to multiple CAP goals, such as increased audit coverage and taxpayer certainty, as well as reduced overall examination cycle time. CAP may not achieve its intended goals if IRS does not have a clear plan or criteria for how it would expand Compliance Maintenance and monitor and ensure that the right taxpayers are in this phase. Regardless of whether taxpayers are in CAP, they have access to other existing IRS processes to help resolve tax issues with IRS. Examples include accelerated issue resolution, advance pricing agreements (APA) to help resolve transfer pricing issues, Appeals, early referral to Appeals, fast track settlement, industry issue resolution, and settlement authority for coordinated issues. For example, CAP taxpayers retain the same appeal rights as non-CAP taxpayers if they disagree with IRS’s audit findings. As for IRS staff, both CAP and traditional audit teams can contact Issue Practice Groups (IPG) for technical advice on specific tax issues.
IPGs are designed to foster effective collaboration and sharing of knowledge and expertise across LB&I and Chief Counsel. In our focus groups, account coordinators who requested assistance from IPGs generally reported that IPGs responded in a timely manner and prioritized their inquiries over non-CAP audits, even though IRS does not formally require that priority. According to IRS account coordinators for CAP cases, other IRS officials, corporate tax experts, and large corporation tax officials whom we interviewed or who participated in our focus groups, the expedited nature of a CAP audit has led to difficulties in coordinating IRS reviews of APAs and research credits—both take longer to complete than the pre-filing time frame of CAP allows. IRS has initiated some efforts to address such difficulties:
- For APAs, IRS highlighted in the 2013 CAP memorandum of understanding (MOU) that complex transfer pricing issues may require additional time beyond the typical CAP time frame to reach an agreement and, consequently, could result in a partial acceptance letter. The MOU also states that the account coordinator is responsible for contacting the appropriate Advance Pricing and Mutual Agreement (APMA) Team to ensure ongoing coordination between the CAP and APMA programs.
- For research credits, IRS implemented the General Business Credits IPG, which consists of IRS’s technical experts on corporate tax credit issues, to provide assistance in reaching a resolution.
As mentioned previously, IRS does not systematically track tax issues for CAP in a centralized manner. To the extent that IRS begins tracking how long it takes to audit and resolve specific tax issues, IRS would have data to assess the effectiveness of these efforts to better coordinate CAP with APAs and research credit claims. IRS and non-IRS officials highlighted the need for IRS specialists to better understand the cooperative and transparent culture necessary for CAP to succeed. 
For example, in IRS’s 2012 survey of taxpayers, the most frequently cited barriers to the CAP review process were the lack of training and urgency by specialists that CAP teams rely on for advice. Specifically, CAP taxpayers in the survey perceived a lack of urgency and cooperation by the specialists working on CAP audits. Moreover, account coordinators who participated in our focus groups expressed some concerns that the cooperation required between IRS and the taxpayer in the CAP environment had not been fully embraced by specialists throughout IRS. Without a cooperative and transparent environment, CAP teams are hindered in coordinating with specialists, which may affect IRS’s ability to achieve its goals. To address these concerns, IRS is pursuing the following efforts to improve coordination with these IRS specialists:
- Joint training is being conducted on CAP to include all CAP team members, specialists, and taxpayers.
- Monthly meetings on CAP are being set up with the Tax Executives Institute, which represents the tax executives of large corporations, to discuss common concerns and practices, such as those with specialists.
- Procedures are being updated to help with transparency. For example, CAP teams, including specialists, must discuss information document requests (IDR) with the taxpayer before they are filed.
- Work is being done to improve specialists’ knowledge of the corporations whose audits they are involved in, as well as their methods of communication with such corporations.
IRS officials indicated that they have taken these actions to ensure that coordination has improved between taxpayers and CAP teams (including specialists), and to monitor whether taxpayer concerns about specialists have been addressed. However, it is too early to tell whether IRS’s efforts will work. CAP is an ambitious effort to improve tax audits of large corporations. 
It holds the promise of significant benefits for participating corporations in the form of increased certainty about tax liability and reduced administrative burden. It also has the potential to save IRS resources, which could be reallocated to increase audit coverage. Attracted by these benefits, IRS has been expanding the number of taxpayers in CAP and the related Compliance Maintenance phase. However, despite several efforts, IRS has not succeeded in assessing whether CAP is achieving its goals. While anecdotal evidence indicates that CAP may be effective at ensuring compliance, increasing certainty, saving resources, and achieving other goals, a CAP-wide assessment could validate these results and help ensure that support for CAP, both inside and outside IRS, does not wane. The lack of a CAP assessment is related to missing or incomplete performance measures, nonexistent targets for the CAP goals, and incomplete or inconsistent data. Without a full suite of measures and targets as well as appropriate data for program evaluation, fully assessing whether CAP is achieving its goals is not possible. The consequences of not establishing performance measures and not collecting the data needed to track performance can be significant. For example, one CAP goal is to resolve complex tax issues, but IRS does not track whether they are being resolved. As a result, IRS will not be able to tell whether CAP is dealing with disputes over the same issues year after year. Similarly, IRS does not track whether CAP is identifying emerging tax issues. Without such tracking, IRS cannot share information about emerging issues across IRS, creating a risk that some IRS audits might miss an issue or treat taxpayers inconsistently. For its goal of increasing audit coverage by saving resources through CAP, IRS did not have a way to collect data on the resources saved. Without those data, IRS cannot readily plan for how any savings will be used to increase audit coverage. 
Finally, without an assessment and plan for expansion, the rationale and pathway for expanding Compliance Maintenance are missing. In addition, the extent to which CAP staff understand how taxpayers are to be monitored and, if necessary, removed from Compliance Maintenance has not been verified. Absent a clear plan for expanding Compliance Maintenance and ensuring that account coordinators clearly understand guidance on monitoring and removing taxpayers, it will be difficult for IRS to provide reasonable assurance that Compliance Maintenance accepts the most compliant and cooperative corporate taxpayers, which is necessary for this phase to work as intended and to contribute to CAP goals, such as generating resource savings to increase audit coverage. To ensure that IRS is meeting the stated goals of CAP, we recommend that the Principal Deputy Commissioner of Internal Revenue and Deputy Commissioner for Services and Enforcement take the following seven actions:
1. Develop an evaluation plan for CAP, using IRS’s guidelines for conducting program evaluations, that can track progress against the goals, and determine whether and how much to expand CAP.
2. Develop measures for each CAP goal and set related targets.
3. Consistently and completely capture data needed to track progress against the CAP goals.
4. Track all CAP tax issues and, at a minimum, identify whether they are resolved or not resolved, and whether any are new or emerging issues that should be shared IRS-wide.
5. Track savings from Compliance Maintenance and CAP overall and develop a plan for reinvesting any savings.
6. Develop a plan for expanding Compliance Maintenance.
7. Verify that the updated guidance for Compliance Maintenance on monitoring and removing CAP taxpayers has resolved CAP staff concerns about how these tasks are to be accomplished.
We provided a draft of this report to IRS for review and comment. 
The Acting Deputy Commissioner for Services and Enforcement at IRS provided written comments, which are reprinted in appendix II. IRS plans to implement all seven of our recommendations but was not clear on the extent to which its plans will fully address our first six recommendations, as highlighted below. For the first recommendation, IRS stated that it would develop and execute a program evaluation for CAP by June 30, 2014. However, IRS did not state whether the plan would track progress against goals and determine whether and how to expand CAP. For the second recommendation, IRS stated that it will develop a balanced measures scorecard for CAP, but it is unclear whether IRS intends to develop measures for each CAP goal and set related targets as part of this action. For the third recommendation, IRS stated that it will capture additional data points to track progress on the CAP goals, but did not indicate how it will ensure that the data are consistent and complete, accounting for problems we discussed in this report. For the fourth recommendation, IRS stated that it will mandate use of the Issue Resolution Agreement tool to track CAP issues, but did not clarify how it will track CAP issues that were not resolved. For the fifth recommendation, IRS stated that it will identify resource savings from CAP by June 30, 2014, but did not mention how it will develop a plan for reinvesting any savings. For the sixth recommendation, IRS stated that it annually evaluates whether a CAP taxpayer should be included in the CAP Compliance Maintenance phase. However, IRS did not list actions it will take to develop a plan for expanding this phase. This report provides details on what a plan might cover (such as steps to be taken at certain times by specified parties to achieve desired results) as well as IRS’s capacity to assimilate more CAP taxpayers and manage the workload within expedited time frames for this phase. 
IRS also provided technical comments on the draft report, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We are also sending copies of the report to the Principal Deputy Commissioner of Internal Revenue and Deputy Commissioner for Services and Enforcement, the Secretary of the Treasury, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions or wish to discuss the material in this report further, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objectives of this report were to (1) describe the goals and potential benefits of the Compliance Assurance Process (CAP), (2) assess the Internal Revenue Service’s (IRS) efforts to determine whether CAP is meeting its goals, (3) assess IRS’s readiness to expand Compliance Maintenance, and (4) describe IRS’s efforts to coordinate CAP with existing processes for large corporate compliance. To address these objectives, we reviewed IRS’s strategic plan from 2009 to 2013, IRS’s guidance for CAP and conducting program evaluations, a CAP management report, CAP research studies, results from IRS’s 2012 CAP taxpayer survey, IRS documents pertaining to other compliance processes, our prior work on corporate audits and the tax gap, and other relevant literature. 
We also assessed IRS’s efforts on CAP by comparing them with criteria in the Government Performance and Results Act (GPRA) of 1993, GPRA Modernization Act of 2010, and the Standards for Internal Control in the Federal Government. We also attempted to replicate analysis of IRS’s data and tried to conduct our own analyses of CAP taxpayers but could not do so because of inconsistent data sources from multiple systems. Further, we interviewed officials from IRS’s Large Business and International Division, who were responsible for managing CAP; Research, Analysis, and Statistics Division, who were responsible for conducting exploratory CAP research studies; and the Treasury Inspector General for Tax Administration. In addition, we interviewed 11 corporate tax experts, including former IRS employees who were familiar with CAP or had some experience with CAP. We also conducted one focus group with tax executives of 12 large corporations to understand their experiences using CAP; six focus groups with 22 account coordinators responsible for being the primary IRS representative for CAP audits; and two focus groups with 11 managers of IRS’s Issue Practice Groups responsible for providing guidance and advice to CAP teams on resolving complex tax issues. To select the 22 CAP account coordinators for our six focus groups, we received a list from IRS and drew a random sample to ensure a variety of independent viewpoints of those employees who work on different accounts in varied locations. We conducted this performance audit from July 2012 through August 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contact named above, Tom Short (Assistant Director); Ben Crawford; Sara Daleski; Amy Radovich; Susan Sato; Jehan Chase; Edward Nannenhorn; and Cynthia Saunders made key contributions to this report. Tax Policy: The Research Tax Credit’s Design and Administration Can Be Improved. GAO-10-136. Washington, D.C.: November 6, 2009. Tax Administration: IRS’ Advance Pricing Agreement Program. GAO/GGD-00-168. Washington, D.C.: August 14, 2000. Tax Administration: IRS Measures Could Provide a More Balanced Picture of Audit Results and Costs. GAO/GGD-98-128. Washington, D.C.: June 23, 1998. Internal Revenue Service: IRS Initiatives to Resolve Disputes Over Tax Liabilities. GAO/GGD-97-71. Washington, D.C.: May 9, 1997. Tax Administration: Factors Affecting Results from Audits of Large Corporations. GAO/GGD-97-62. Washington, D.C.: April 17, 1997. Tax Administration: Audit Trends and Taxes Assessed on Large Corporations. GAO/GGD-96-6. Washington, D.C.: October 13, 1995. Tax Policy: Additional Information on the Research Tax Credit. GAO/T-GGD-95-161. Washington, D.C.: May 10, 1995. International Taxation: Transfer Pricing and Information on Nonpayment of Tax. GAO/GGD-95-101. Washington, D.C.: April 13, 1995. Tax Policy: Information on the Research Tax Credit. GAO/T-GGD-95-140. Washington, D.C.: April 3, 1995. Tax Policy and Administration: Information on Transfer Pricing. GAO/GGD-94-206R. Washington, D.C.: September 15, 1994. Tax Administration: Compliance Measures and Audits of Large Corporations Need Improvement. GAO/GGD-94-70. Washington, D.C.: September 1, 1994.
IRS audits of the tax returns filed by large corporations take four years, on average, to complete. Additional years can be spent in appeals. This is costly to IRS and creates years of uncertainty about a large corporation's actual tax liability. In response, IRS developed CAP in 2005. Under this process, IRS and taxpayers agree on how to report tax issues before the return is filed: compliant and cooperative taxpayers can get a streamlined IRS review of their tax return in a phase called Compliance Maintenance. GAO was asked to assess this process. In this report, GAO (1) describes the goals and potential benefits of the process, (2) assesses IRS's efforts to determine whether these goals are met, (3) assesses IRS's readiness to move more taxpayers into Compliance Maintenance, and (4) describes IRS's efforts to coordinate the process with its existing compliance processes. GAO reviewed IRS documentation and data, interviewed IRS officials and corporate tax experts, and held focus groups with tax executives of large corporations and with IRS audit staff. Officials GAO interviewed inside and outside of the Internal Revenue Service (IRS) generally agreed on the potential major benefits of the Compliance Assurance Process (CAP) to taxpayers and IRS as reflected in its goals. These goals include saving IRS time and resources to use for other audits while ensuring compliance, and reducing taxpayer burden while increasing certainty on tax amounts now owed. Contrary to its guidelines, IRS has not evaluated whether the goals are being achieved or the process should be expanded. IRS officials told GAO that IRS will start developing an evaluation plan in 2013, but did not provide dates for when an evaluation would be completed. IRS cannot show the extent to which the goals are being met for two reasons. Some goals do not have measures and none have targets. 
Although developing measures and setting targets for goals (such as ensuring taxpayer compliance) can be difficult, not doing so limits IRS's ability to determine whether the process is working as intended. IRS does not have consistent and complete data for CAP. Inconsistent data make some analyses difficult to do. For instance, the average annual staff hours spent auditing a return could not be analyzed because the code to track staff time charges sometimes included non-CAP time charges. Incomplete data did not allow IRS to track progress for some goals. While IRS audit teams document tax issues in case files, IRS does not compile the data to track issue resolution. As a result, IRS cannot readily determine whether audit teams are resolving issues uniformly or identifying emerging tax compliance issues. Similarly, IRS does not have a system to track resource savings. Without a system, IRS cannot know the amount of saved resources or plan for their reallocation. IRS has been moving taxpayers into Compliance Maintenance without documenting a plan to ensure, among other things, that IRS has the capacity to assimilate these taxpayers in an expedited fashion, as intended. In addition, IRS audit staff had concerns about guidance on moving taxpayers in and out of this phase. IRS clarified the guidance in May 2013, but has not verified whether the audit staff understand it. Without verification, IRS does not have reasonable assurance that the audit staff understand which taxpayers are right for Compliance Maintenance and when it would be appropriate to remove them. IRS is addressing difficulties in coordinating CAP with other compliance processes. Difficulties include resolving some complex tax issues within the expedited time frames, and ensuring that all IRS specialists who assist audit teams understand the process. It is too early to tell whether IRS's efforts will work. 
GAO recommends that IRS evaluate the process, develop measures and targets for the goals, consistently capture data to track goal progress, track resolution of tax issues and resource savings, develop a plan to expand Compliance Maintenance, and verify that audit staff understand attempts to clarify related guidance. In written comments, IRS agreed with the recommendations.
The domestic auto industry—including automakers, dealerships, and automotive parts suppliers—contributes substantially to the U.S. economy, but has faced financial challenges in recent years. According to the Congressional Research Service, more than 435,000 U.S. automotive manufacturing jobs have been eliminated since 2000—an amount equal to about 3.3 percent of all manufacturing jobs in 2008. The employment level first dipped below 1 million in 2007 and fell to 880,000 workers in 2008. The Detroit-based automotive manufacturers—GM, Chrysler, and the Ford Motor Company—have seen their share of the domestic market drop from 64.5 percent in 2001 to 47.5 percent in 2008. Prior to restructuring, GM and Chrysler reported losses in 2008 totaling $31 billion and $8 billion, respectively. Concerned that the collapse of a major U.S. automaker could pose a systemic risk to the nation’s economy, in December 2008, Treasury established the Automotive Industry Financing Program (AIFP) under TARP. Through June 2009, $81.1 billion in AIFP funding had been made available to assist the auto industry. The largest part of the program’s funding—about $62 billion—was provided to help GM and Chrysler fund their operations while they restructured. In exchange for this funding, Treasury has become part-owner of the two new companies that emerged, receiving 60.8 percent of the equity in the new GM and 9.85 percent of the equity in the new Chrysler, and has a debt interest of about $14 billion in loans between the two. Given the large taxpayer investments in GM and Chrysler, in a recent report, we recommended that Treasury report to Congress on how it plans to assess and monitor the companies’ performance to help ensure the companies are on track to repay their loans and to return to profitability. 
In response, Treasury said the agency intends to develop an approach for reporting on its investments in the auto industry that strikes an appropriate balance between transparency and the need to avoid compromising either the competitive positions of these companies or Treasury’s ability to recover taxpayer funds. More broadly, we also previously recommended that Treasury better communicate to external stakeholders, including Congress, about its TARP strategies and activities to improve the integrity, accountability, and transparency of the program. In response to this recommendation, Treasury noted that it was implementing a communication strategy to provide key congressional stakeholders more current information about its TARP activities. AIFP also established the Auto Supplier Support Program—a mechanism to extend credit to auto suppliers. Under this program, Treasury committed to fund up to $3.5 billion in loans to special purpose entities created by new GM and new Chrysler for the purpose of ensuring payment to suppliers. The program was designed to ensure that automakers receive the parts and components they need to manufacture vehicles and that suppliers have access to liquidity on their receivables. According to Treasury officials, the program will terminate in April 2010. As a condition of receiving federal financial assistance, GM and Chrysler were also required to develop restructuring plans to identify how the companies planned to achieve and sustain long-term financial viability. Prior to restructuring, GM was a publicly traded company that employed about 240,000 people worldwide. It had manufacturing facilities in 34 countries and sold more than a dozen brands of vehicles in about 140 countries. GM’s core U.S. brands are Buick, Cadillac, Chevrolet, and GMC; other brands included Daewoo, Holden, Hummer, Opel, Pontiac, Saab, Saturn, Vauxhall, and Wuling. To implement the restructuring plans, both companies filed voluntary petitions for reorganization under Chapter 11 of the U.S. Bankruptcy Code. 
During the bankruptcy process, newly organized companies for both GM and Chrysler were established in the summer of 2009. These new companies purchased substantially all of the operating assets of the previous companies, while the old companies, which retained very few assets but most of the liabilities, continued in bankruptcy. The new companies also streamlined operations and substantially reduced their debt. Changes included reductions in the number of brands and models, closing factories and dealerships, and reducing their hourly and salaried workforces through early retirements, buyouts, and layoffs. GM filed for Chapter 11 bankruptcy protection on June 1, 2009, and on July 5, 2009, the bankruptcy court approved the sale of substantially all of old GM’s assets to a newly formed company, referred to as “new GM.” The new GM assumed sponsorship of both of old GM’s U.S. qualified defined benefit plans. Prior to restructuring, Chrysler was a privately held company that employed about 54,000 people worldwide, including manufacturing facilities in four countries and vehicles assembled under contract in four others. Chrysler’s major brands include Dodge, Chrysler, and Jeep. Automakers are highly dependent on a large motor vehicle parts supply industry. The auto supply chain consists of networks of suppliers, transportation carriers, fabrication sites, assembly locations, distribution centers, and locations by which components, services, information, and products flow. The supply chain starts with suppliers who assemble raw components into more complex components, which are processed or combined with additional components and eventually brought together by top-level suppliers to manufacture end products for use by the automaker. Each level in the supply chain depends on the financial health of the others for its survival. 
Chrysler filed for Chapter 11 bankruptcy protection on April 30, 2009, and on June 9, 2009, the bankruptcy court approved the sale of substantially all of old Chrysler’s assets to a newly formed company, referred to as “new Chrysler.” The new Chrysler assumed sponsorship of all of Chrysler’s U.S. qualified defined benefit plans. The U.S. auto supply sector became unstable as the domestic market share of the global automotive marketplace declined, prices for raw materials and petroleum increased, and production cuts ensued. These financial pressures affected various levels of the supply chain, leading some suppliers to file for bankruptcy, including the nation’s largest U.S. auto supplier, Delphi Corporation (a spin-off of GM), which filed for bankruptcy in 2005. About one-half of all U.S. workers participate in some form of employer-sponsored retirement plan, typically classified either as a defined benefit or as a defined contribution plan. Defined benefit plans generally offer a fixed level of monthly retirement income based upon a participant’s salary, years of service, and age at retirement, regardless of how the plan’s investments perform. In contrast, benefit levels for those with defined contribution plans depend on the contributions made to individual accounts (such as 401(k) plans) and the performance of the investments in those accounts, which may fluctuate in value. Over the last two decades, much of the private sector pension coverage has moved away from traditional defined benefit plans in favor of defined contribution plans and hybrid defined benefit plans, thereby increasing portability for workers as they change jobs, but also shifting the risk and burden of financing retirement from employers to employees. Domestic automakers sponsor some of the largest private sector defined benefit plans. 
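The distinction between the two plan types can be sketched with a toy calculation. Every formula, multiplier, and dollar amount below is a hypothetical illustration for exposition only, not the terms of any GM, Chrysler, or Delphi plan:

```python
# Toy illustration of the two plan types described above.
# All formulas, multipliers, and dollar amounts are hypothetical;
# they are not the terms of any actual pension plan.

def defined_benefit_monthly(final_salary, years_of_service, multiplier=0.015):
    """A common defined benefit formula: the payout is fixed by salary and
    service, regardless of how the plan's investments perform."""
    return final_salary * years_of_service * multiplier / 12

def defined_contribution_balance(annual_contribution, years, annual_return):
    """A defined contribution outcome: the balance depends on contributions
    and on market returns, so it can fluctuate in value."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + annual_contribution) * (1 + annual_return)
    return balance

# A hypothetical worker retiring after 30 years on a $60,000 final salary:
db = defined_benefit_monthly(60_000, 30)                 # $2,250 per month, fixed
dc_good = defined_contribution_balance(6_000, 30, 0.07)  # strong markets
dc_poor = defined_contribution_balance(6_000, 30, 0.02)  # weak markets
# Identical contributions yield very different balances under the two
# market scenarios -- the investment risk sits with the employee.
```

The sketch shows why the shift toward defined contribution plans transfers risk to employees: the defined benefit payout is insensitive to market returns, while the two defined contribution outcomes diverge sharply with investment performance.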
According to a financial publication, as of year-end 2007, GM sponsored the largest defined benefit plans by a considerable margin, with nearly 60 percent more benefit obligations than the plan sponsor ranked second: AT&T, Inc. The Ford Motor Company ranked fifth. At the time, Delphi, the auto supplier that spun off from GM in 1999, ranked 18th. Chrysler was not included in the publication’s list, but, as of the beginning of 2008, it had about one-fourth of GM’s benefit obligations, and would have ranked in the top 10 if its total benefit obligations were included on this list. Based on data gathered for previous GAO reports, in 2004, the plans sponsored by GM and Chrysler represented roughly 7 percent of the liabilities, 7 percent of the assets, and 2.5 percent of the total participants of the entire defined benefit system. The defined benefit plans that continue to be sponsored by the new GM and the new Chrysler are summarized in table 1. Unlike the new GM and new Chrysler, the “new Delphi” that emerged from Delphi’s bankruptcy reorganization did not assume sponsorship of the company’s pension plans. After Delphi froze its hourly pension plan in November 2008, some Delphi hourly employees began to accrue credited service in the GM hourly pension plan according to the terms of agreements negotiated with various unions, while other Delphi employees did not receive similar treatment. PBGC terminated all six of Delphi’s U.S. qualified defined benefit plans in July 2009. Three federal agencies are charged with responsibility for overseeing and regulating tax-qualified private sector pension plans: the Internal Revenue Service (IRS), an agency within Treasury; the Employee Benefits Security Administration, an agency within Labor; and PBGC, a government corporation. Two overlapping statutory sources provide the basis for this oversight: the Internal Revenue Code, and the Employee Retirement Income Security Act of 1974 (ERISA). 
These laws specify, among other things, the standards of fiduciary responsibility for managing these plans, minimum funding requirements, the requirements for reporting information to the federal government and plan participants, and plan termination insurance. PBGC was created by ERISA in 1974 as a federal guarantor of most private sector defined benefit plans and currently insures the pension income of nearly 44 million workers in over 29,000 plans. PBGC is a self-financing entity, funding its operations through insurance premiums paid by the plan sponsors, money earned from investments, and funds received from terminated pension plans. It is governed by a three-member board of directors consisting of the Secretary of Labor as the Chair, and the Secretaries of Commerce and Treasury as the remaining members. The board of directors is ultimately responsible for providing policy direction and oversight of PBGC’s finances and operations, but the board members often rely on their representatives to conduct much of the work on their behalf. Currently, the board representatives for the members are the Assistant Secretary of Labor for the Employee Benefits Security Administration, the Under Secretary for Economic Affairs at the Department of Commerce, and the Assistant Secretary of the Treasury for Financial Institutions. PBGC administers two separate insurance programs for private sector defined benefit plans: a single-employer program and a multiemployer program. The single-employer program covers about 34 million participants in about 28,000 plans. The multiemployer program covers about 10 million participants in about 1,500 collectively bargained plans that are maintained by two or more unrelated employers. If a multiemployer pension plan is underfunded and unable to pay guaranteed benefits when due, PBGC will provide financial assistance to the plan, usually a loan, so that retirees continue receiving their benefits. 
However, if a single-employer pension plan is underfunded and certain criteria are met, the plan sponsor may request termination of the plan (referred to as a “distress” termination), and PBGC will pay retirees’ benefits as they become due, up to certain limits as prescribed under statute and related regulations (see appendix II). PBGC may also initiate an “involuntary” termination under certain circumstances, such as when the possible long-run loss to PBGC is expected to increase unreasonably if the plan is not terminated. As of the end of fiscal year 2009, PBGC had terminated and trusteed a total of 4,003 single-employer plans. We designated PBGC’s single-employer pension insurance program as “high risk” in 2003, including it on our list of major programs that need urgent attention and transformation. The program remains high risk due to an ongoing threat of losses from the termination of underfunded plans. As of September 2009, PBGC had an accumulated deficit that totaled $22 billion, a $10.8 billion increase since September 2008. As new companies, GM and Chrysler have streamlined their operations and have substantially less debt than their predecessors; nevertheless, the future viability of the companies and their pension plans is unclear. The bankruptcy agreements that provided for establishment of the new companies specified that they would assume sponsorship of the previous companies’ U.S. qualified defined benefit plans, and made only one significant change to pension benefits. However, prior to the change in sponsorship, many of the pension plans had been closed to new hires or had ceased benefit accruals. Moreover, since 2008, the funded status of the pension plans has been declining, and within the next 5 years, both companies project that, based on current estimates, they may need to make large contributions to their plans to comply with federal minimum funding requirements. As a result of restructuring, sponsorship for all GM and Chrysler U.S. 
defined benefit plans shifted to the new companies. But beyond the shift in sponsorship, the only significant change to pension benefits that occurred was the elimination of a future pension benefit increase that was to compensate UAW retirees for increased required contributions to their retiree health care plans, beginning in 2010. For the most part, the terms of the restructuring called for current levels of employee benefits—including pension benefits—to remain in place for at least 1 year. Specifically, the master sale agreements for both companies stipulate that, in general, union employees are to be provided employee benefits that are “not less favorable in the aggregate” than the benefits provided under the employee pension and welfare benefit plans, and contracts and arrangements currently in place; nonunion employees are to receive current levels of compensation and benefits until at least 1 year after the date the agreements are signed. More significant changes affecting GM’s and Chrysler’s pensions were made prior to last year’s restructuring. For example, over the past decade, several of GM’s and Chrysler’s pension plans had been modified or closed to new hires, or had stopped allowing further benefit accruals. GM’s salaried plan was closed and benefit accruals ceased for certain employees, while 4 of Chrysler’s 10 plans have been closed to new hires, and 2 other Chrysler plans have ceased benefit accruals (also referred to as being “hard frozen”). Nevertheless, new collective bargaining agreements were put in place in 2007 for both GM’s and Chrysler’s UAW-negotiated plans, calling for annual increases to the pension benefits for their participants. In addition, both GM and Chrysler had implemented numerous attrition programs for both union and nonunion employees that provided various opportunities for early retirement and other types of added benefits as incentives to help mitigate the effects of downsizing. 
For a listing of attrition programs offered by these companies since 2004, see appendix III. As illustrated in figure 1, the funded status of GM and Chrysler pension plans has been declining since 2008. This is due, in part, to the economic downturn, which has brought significant financial stress to many sectors of the economy, including the auto industry. The significant decline in the stock market decreased the value of certain assets (such as equities) and increased the value of others (such as bonds), while low interest rates tended to increase liabilities. Fluctuations in liabilities may also be caused by changes to actuarial assumptions or other types of gains and losses. However, in the case of GM and Chrysler, certain other factors are at play as well. For example, a reduction in the number of workers is one key factor affecting the funded status of both companies’ plans. Large numbers of workers have left employment as product lines are eliminated and plants are shut down. When workers are forced to leave their jobs before becoming eligible to retire, the liabilities for their expected future benefits will usually be less than previously recorded. However, for those workers who are eligible to retire early and choose to do so under the enhanced provisions of one of the numerous attrition programs, the liabilities for their expected future benefits will usually be greater than previously recorded. In other words, more workers will retire early and with more benefits than previously anticipated in the company’s valuation of future benefit obligations. GM began its downsizing even before its TARP-related restructuring efforts reduced the number of its North American brands from eight to four. According to a GM news release, approximately 66,000 U.S. hourly workers left the company under a special attrition program between 2006 and 2009. 
Often the lump-sum payments and buyouts offered by these programs were paid from company assets, but when these benefits are paid from pension assets, there can be an impact on the plan’s financial status. GM noted that the attrition programs implemented between 2006 and 2009 contributed to an increase in estimated plan obligations during this period and—along with other factors, such as discount rate changes—played a role in the recent increase in GM’s pension liabilities (see fig. 2). Similarly, Chrysler’s downsizing efforts also predate TARP. For example, its decision to eliminate four models within its three primary brands dates back to November 2007, and the company has implemented various attrition programs to accomplish this. Due in part to these programs, over the past few years, Chrysler’s pension liabilities have fluctuated while plan assets have been declining (see fig. 3). For example, Chrysler’s UAW plan reported a $900 million increase in liabilities from 2007 to 2008, and the plan’s 2008 valuation report noted that the cost of special termination benefits during 2008 was nearly $390 million. Total liabilities for the Chrysler Pension Plan increased by a smaller margin overall from 2007 to 2008, but the plan’s 2008 valuation report noted that nearly $195 million in additional costs were being recorded due to special early retirements, added service costs, and curtailment loss. 
When Delphi froze its hourly plan on November 30, 2008, as agreed, GM began providing covered employees with up to 7 years of credited service in the GM hourly plan while they continued to work at Delphi. Under this negotiated benefit guarantee, GM also agreed that upon plan termination, once PBGC determined the benefit to be paid subject to its guarantee limits, GM would pay eligible covered employees the difference to “top up” the benefit to the level provided under Delphi’s hourly plan. Following the termination of Delphi’s hourly plan in July 2009, GM estimated that the cost of implementing this benefit guarantee for all covered unions would be approximately $1.0 billion. In addition to the benefit guarantee for Delphi employees still in the Delphi hourly plan, in the fall of 2008, GM’s hourly plan assumed responsibility for $2.7 billion in liabilities and $0.6 billion in assets from Delphi’s plan, thereby increasing the GM plan’s funding deficit by $2.1 billion. When Chrysler was sold by Daimler in 2007, the transaction included an agreement with Daimler to help protect the funded status of Chrysler’s pension plans. As part of this transaction, PBGC negotiated an agreement whereby Daimler provided a $1 billion termination guarantee and Chrysler made $200 million in additional pension contributions. Subsequently, in April 2009, this agreement was replaced by a new arrangement requiring Daimler to begin making annual contributions, even though the plans had not terminated. Under this arrangement, Daimler agreed to make payments totaling $600 million to Chrysler’s pension plans over a 3-year period, with $200 million due in June 2009, 2010, and 2011. In addition, if the Chrysler pension plans were to terminate before August 2012 and be trusteed by PBGC, Daimler is to pay an additional $200 million to the PBGC insurance program. 
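The arithmetic behind a liability-and-asset transfer such as the 2008 Delphi transfer can be illustrated with a minimal sketch. The function name and the rounding are ours for exposition only; this is not a valuation method used by GM, Delphi, or PBGC.

```python
def deficit_increase(liabilities_assumed, assets_assumed):
    """Increase in the receiving plan's funding deficit (in $ billions):
    the excess of the liabilities taken on over the assets taken on."""
    return round(liabilities_assumed - assets_assumed, 1)

# Figures from the 2008 Delphi transfer: $2.7B in liabilities against
# $0.6B in assets widened the GM hourly plan's deficit by $2.1B.
print(deficit_increase(2.7, 0.6))  # 2.1
```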
Although projections of plan funding are inherently sensitive to underlying assumptions, GM and Chrysler currently estimate that they may need to make large contributions to their pension plans within the next 5 years in order to meet minimum funding requirements. They also may need to manage the funded status of their plans in order to avoid certain plan benefit restrictions and potential additional liabilities that may occur if the plans are determined to be “at risk.” While useful as indicators of the financial pressures that could lie ahead, the funding projections provided by GM and Chrysler are subject to much uncertainty because of factors that could result in changes in the size or timing of needed contributions to meet future years’ funding requirements. For example, projections are particularly sensitive to the future economic environment, especially with respect to future interest rates and asset returns. Also, GM or Chrysler could make additional voluntary contributions to their plans, or funding rules could be affected by changes in legislation. To strengthen pension funding, the Pension Protection Act of 2006 (PPA) made sweeping changes to plan funding requirements, effective for plan years beginning in 2008. For example, the act included provisions that raised the funding targets for defined benefit plans, reduced the period for “smoothing” assets and liabilities, and restricted sponsors’ ability to substitute credit balances for cash contributions. At the same time, as we have reported previously, the act did not fully close potential plan funding gaps, and it provided funding relief to plan sponsors in troubled industries. In addition, in the face of a weakened economy, the Worker, Retiree, and Employer Recovery Act of 2008 provided plan sponsors with further relief from the changes, as did IRS guidance in 2009 concerning interest rates that could be used to value plan liabilities in some cases. 
Legislative proposals that would make additional changes to funding requirements are currently being considered. Nevertheless, according to GM’s projections utilizing valuation methods defined under PPA, large cash contributions may be needed to meet its funding obligations to its U.S. pension plans beginning in 2013 (see fig. 4). GM officials told us that cash contributions are not expected to be needed for the next few years because the company has a relatively large “credit balance” based on contributions made in prior years that can be used to offset cash contributions that would otherwise be required until that time. As of October 1, 2008, GM had about $36 billion of credit balance in its hourly plan and about $10 billion in its salaried plan. However, once these credit balances are exhausted, GM projects that the contributions needed to meet its defined benefit plan funding requirements will total about $12.3 billion for the years 2013 and 2014, and additional contributions may be required thereafter. In its 2008 year-end report, GM noted that due to significant declines in financial markets and deterioration in the value of its plans’ assets, as well as the coverage of additional retirees, including Delphi employees, it may need to make significant contributions to its U.S. plans in 2013 and beyond. Similarly, Chrysler’s management expects that contributions to meet minimum funding requirements may begin to increase significantly in 2013, but are projected to be relatively minimal until then (see fig. 5). Chrysler, like GM, intends to use credit balances to offset the contribution requirements for some of its plans. As of end-of-year 2009, Chrysler had credit balances of about $3.5 billion for its UAW Pension Plan and about $1.9 billion across the other eight plans for which it provided funding information. In addition, Chrysler has $600 million in payments from Daimler to help meet its funding requirements over the next few years. 
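The credit-balance mechanism described above can be sketched in simplified form. The function below simply nets a credit balance against a minimum required contribution; it is a deliberate simplification of the statutory rules, which attach conditions to when a credit balance may be used, and the dollar figures in the example are hypothetical.

```python
def cash_contribution_due(minimum_required, credit_balance):
    """Net cash a sponsor must contribute after drawing on its credit
    balance; returns (cash due, remaining credit balance), same units.
    An illustrative simplification, not the statutory calculation."""
    used = min(minimum_required, credit_balance)
    return minimum_required - used, credit_balance - used

# Hypothetical example in $ billions: a $5B minimum requirement drawn
# against a $36B credit balance requires no cash contribution.
due, remaining = cash_contribution_due(5.0, 36.0)
print(due, remaining)  # 0.0 31.0
```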
Nevertheless, Chrysler’s funding projections reveal that about $3.4 billion in contributions may be needed to meet its funding requirements over the 2009 to 2015 period. In addition, both GM and Chrysler may need to manage the funded status of their plans in order to avoid incurring an “at-risk” status or triggering certain benefit restrictions. If a plan’s funding level falls below certain specified thresholds, then it must use special “at-risk” actuarial assumptions to determine its minimum funding requirements and, in most cases, increase its contributions. For example, the most recent annual funding notice for the GM hourly plan reveals that the plan is in at-risk status for plan year 2008. Also, if a plan’s funding level falls below certain specified thresholds, then certain restrictions may be placed on the benefits provided by the plan, such as lump-sum withdrawals and plant shutdown benefits (see table 2). Automaker restructuring, the credit market crisis, and the global recession have created significant economic stress across the auto supply industry. Federal efforts to aid the supply sector through a program that provided GM and Chrysler with funding to guarantee supplier payments benefited the automakers’ top-level direct suppliers, but did little to support component and raw material suppliers. The restructuring of GM and Chrysler amid this difficult economic environment has had a ripple effect throughout the auto supply sector, likely contributing to the recent wave of supplier bankruptcies and pension plan terminations. The auto supply sector is highly dependent on the success of the automakers that it supplies. For years, the auto supply sector has felt the impact of the problems facing the domestic auto market, including declining vehicle sales and deep production cuts—resulting in overcapacity within the industry. 
In 2004, the Department of Commerce reported that U.S. suppliers could no longer rely on increased auto sales automatically translating into increased orders for parts and components, because U.S. automobile manufacturers had shifted from providing a ready market for many domestic suppliers to operating on a global basis. The result of this shift was that automotive parts suppliers had to find niches in the global supply chains of U.S. auto companies or their foreign competitors to succeed. Many auto suppliers broadened their sales base to remain competitive. With the domestic share of the market in decline, these suppliers diversified their business models to include just-in-time manufacturing capacity or sold their products to multiple automakers in North America, Europe, and Asia. For example, in 2004, shortly before its bankruptcy filing, the U.S. auto parts supplier Delphi Corporation employed more than 185,000 workers in 38 countries, making it one of the largest suppliers in the world. Still, according to a 2009 industry report, just 7 of the 29 U.S.-based suppliers listed among the top 100 global suppliers sold the majority of their products in North America. Suppliers serving the large U.S. automakers also have considerable overlap, with as many as 80 percent supplying parts to one or more automakers. For example, Chrysler reported that 96 percent of its top 100 suppliers also served either GM or Ford. Similarly, 27 of GM’s top 39 suppliers also served as major suppliers for Chrysler. While this crossover allowed suppliers to spread their risk among domestic automakers, the global economic downturn affected many suppliers and left those that sold primarily to GM and Chrysler particularly vulnerable when the automakers filed for bankruptcy. 
The recent global credit crisis and the rapid decline in auto sales left many of the nation’s auto parts suppliers under significant stress with limited access to credit and facing growing uncertainty about their future business prospects. For example, GM’s and Chrysler’s decision to slow production by temporarily shutting down some U.S. operations in late 2008 led to interruptions in suppliers’ operations and cash flow. As a result, many suppliers were left with excess inventory, were not paid for products they had shipped to automakers, and lacked the liquidity needed to settle their debts with their raw material and component suppliers. Concerns over suppliers’ ability to continue operations and, among other things, collect their receivables and pay their bills when due led some to receive a “going concern” qualification from their auditors. Lenders restricted credit and cash flow to suppliers, limiting their liquidity when it was needed most. With limited cash flow, the suppliers experienced increasing pressure from their raw material and component suppliers. According to Chrysler, 43 percent of its suppliers had received requests from their suppliers for some form of payment term compression. Chrysler recognized the liquidity shortfall in the supplier network as a significant threat to its successful restructuring, and identified supplier insolvencies and supply chain disruptions as key risks to the critical assumptions in its restructuring plan. Another industry report indicated that at least 500 suppliers in North America (or 30 percent of the estimated 1,700 direct suppliers in the U.S.) may be at high risk of insolvency due to the effect of reduced volumes and the lack of credit availability. This credit crunch also affected bankrupt companies, which found it increasingly difficult to secure financing to restructure. 
In an effort to help stabilize the auto supply base, in March 2009, also under TARP, Treasury established the Auto Supplier Support Program, which initially dedicated up to $5 billion in government-backed guarantees to GM and Chrysler for supplier payments in order to give suppliers the confidence they needed to keep shipping parts, paying their employees, and continuing operations. Treasury had rejected appeals from the auto supply sector for direct aid to assist a broader portion of the supplier industry because, according to Treasury officials, it had become clear that the vast network of suppliers had to engage in a substantial restructuring and capacity reduction to achieve long-term viability. The program was to ensure that GM and Chrysler received the parts and components they needed to manufacture vehicles and that suppliers had access to credit from lenders. Under the program, any supplier that shipped directly to GM or Chrysler on qualifying commercial terms could be eligible to participate. Treasury left it up to the automakers to determine which suppliers qualified for the assistance. According to GM, 74 percent of its 1,300 suppliers were eligible for the program, but only 28 percent of its suppliers (38 percent of its eligible suppliers) received funds under the program. Nearly half of the $947.8 million in program funds that GM disbursed went to 31 of its top 40 suppliers. Shortly after the program began, Treasury reduced the amount of funding available under this program to $3.5 billion, at the request of the automakers. According to Treasury officials, the automakers made this request because conditions had changed: they no longer needed to maintain their prebankruptcy supply capacity, credit markets had opened up, and suppliers’ access to capital had improved. 
The program, as administered, helped a portion of the industry survive the downturn in production and vehicle sales, but did little to improve supplier access to traditional sources of capital, according to a leading auto supply industry group. The group noted that the program supported suppliers by making funds available to purchase receivables for parts already shipped by participating suppliers, but that many troubled suppliers who had no outstanding debts to the automakers were excluded. According to Treasury officials, the program was not designed to address liquidity for troubled suppliers who were unable to move their inventory and had no receivables, including from GM and Chrysler, due to the extended shutdowns at the manufacturing plants. However, the group also noted that the suppliers who participated in the program were generally satisfied with the outcome, and that the supply sector as a whole believed that without the government’s action, the effect of automakers’ restructuring would have been catastrophic for suppliers. Bankruptcy reorganizations and liquidations occur frequently in the volatile automotive supply sector, but the number of bankruptcies has recently increased. Some suppliers have gone bankrupt multiple times in a decade, while other suppliers have remained in bankruptcy proceedings for years before successfully emerging as a new entity. For example, the “new Delphi” (Delphi Automotive, LLP) emerged in 2009 after the former Delphi had been in bankruptcy proceedings for 4 years. Auto suppliers experienced a rise in the number of bankruptcies, liquidations, and pension plan terminations in 2008 and 2009. In November 2009, a survey by the Original Equipment Suppliers Association (Association)—a leading auto supply industry group—found that a majority of suppliers anticipated a 20 percent decline in their revenue and operating profits on a year-to-year basis. The Association also reported that at least 43 U.S.-based auto suppliers had filed for Chapter 11 bankruptcy protection between January and December 2009. 
Moreover, it was reported that an additional 200 U.S. suppliers had begun the liquidation process by selling off their assets to other suppliers or private equity companies. Chrysler reported that the proportion of its suppliers that were financially troubled had more than doubled, from 10 percent in October 2008 to 22 percent in February 2009, with the troubled suppliers accounting for $6.6 billion of the company’s annual business. In addition, in the summer of 2009, a consultant group estimated that as many as 30 percent of North American suppliers were at high risk of failure. According to Treasury officials, many of Chrysler’s troubled suppliers had difficulty accessing credit because of their concentrated exposure to Chrysler. In the summer of 2009, the auto supply sector was also expected to shrink significantly through mergers and consolidation in order to survive. According to the Association’s survey of its membership in June 2009, auto suppliers were operating at 46.4 percent capacity. In its restructuring plan, Chrysler stated that industry conditions required substantial and coordinated restructuring of the supply base, and that automakers must concentrate their business in “surviving” suppliers. GM projected a 30 percent reduction in the number of suppliers, stating that such compression would allow GM to build and manage a competitive supply base. Several industry consultants noted that the path to long-term viability would require suppliers to reduce their number by 30 to 40 percent and secure more business from Asian and European transplant automakers. However, by early 2010, there were signs that the economic conditions for suppliers may have begun to stabilize. The Association’s January 2010 and March 2010 surveys of its membership reported increased optimism across the sector, especially among larger companies. 
Many U.S.-based auto suppliers sponsor defined benefit plans that are insured by PBGC. Each supplier failure could result in PBGC having to assume responsibility for the company’s pension plans, and PBGC officials told us that they are monitoring about 35 large auto suppliers. Even before last year’s restructuring of GM and Chrysler, suppliers (like many other employers) were experiencing significant underfunding of their defined benefit plans. Table 3 shows 18 auto suppliers we identified that reported a combined $14.9 billion in unfunded pension liabilities in 2008. In 2009, several of GM’s and Chrysler’s suppliers filed for bankruptcy, and in some cases, PBGC intervened and assumed trusteeship of the companies’ defined benefit plans. For example, in July 2009, PBGC terminated and assumed responsibility for the pension plans of 70,000 workers and retirees of the former Delphi Corporation, citing Delphi’s inability to afford to maintain the plans. More specifically, according to PBGC officials, the key factors that led to this action were Delphi’s failure to fund its pensions during bankruptcy, and the company’s imminent sale and liquidation of its assets as it left bankruptcy protection. Other suppliers avoided bankruptcy, but still felt the effects of the slumping auto industry. For example, American Axle and Manufacturing Holdings, Inc., an auto parts supplier that narrowly averted bankruptcy in 2009, estimated that the GM and Chrysler factory shutdowns had cost the company $100.6 million in sales and $29.3 million in operating income. While some recent reports have indicated that the outlook for the automakers and suppliers may be improving, the ability of suppliers to fund their defined benefit plans in the future will rest, in part, on the continued viability of the automakers. 
Moreover, any revival in the auto supply sector may come too late for workers who have already had their pension plans terminated and their benefits reduced to the PBGC benefit guarantee levels. When an underfunded defined benefit plan is terminated, the PBGC bears the costs of any unfunded liabilities up to the guaranteed benefit amounts defined by ERISA, while plan participants bear the loss of benefits beyond these guaranteed amounts that would go unpaid. According to Treasury officials, there is no indication that any of GM’s or Chrysler’s defined benefit plans will be terminated. Nevertheless, to hypothetically examine the potential impact if their plans were to be terminated, we explored how PBGC and plan participants would have been affected had the plans been terminated when these companies filed for bankruptcy in 2009, and the factors at play that could change that picture if the plans were to be terminated 5 years later. Following the termination of an underfunded defined benefit plan, PBGC generally incurs losses that affect its deficit, as well as its resources. With respect to its deficit, the amount of loss to the single-employer fund is equal to the value of the unfunded guaranteed benefits required to be paid under ERISA. Although this is generally considerably less than the total value of unfunded liabilities in a large auto sector pension plan, the loss can still be substantial. With respect to its resources, PBGC must assume responsibility for administering the terminated plan, including continuing benefit payments to retirees, determining the assets and liabilities of the plan as of the date of termination, calculating the guaranteed and nonguaranteed benefit amounts owed each participant in the plan, and keeping participants informed. When plans are large and complex, this can be an enormous task, requiring years to complete. Each year, PBGC assesses its exposure to losses from underfunded pension plans sponsored by financially weak companies. 
Its estimates of exposure are based on companies with credit ratings below investment grade or that meet one or more of the criteria for financial distress. PBGC classifies the plans sponsored by these companies as “reasonably possible” terminations. At the end of fiscal year 2009, PBGC estimated that its exposure from reasonably possible terminations was approximately $168 billion, up from $47 billion a year earlier. A significant part of this increase was due to the dramatic increase in exposure related to manufacturing, which PBGC attributed primarily to changes in the auto industry, as well as primary and fabricated metals (see fig. 6). In May 2009, PBGC reported that unfunded pension liabilities across the auto industry as a whole totaled about $77 billion as of January 31, 2009, and accounted for about $42 billion of PBGC’s total exposure of $168 billion. This means that, should all the auto industry’s underfunded plans insured by PBGC be terminated and trusteed, PBGC would be required to cover about $42 billion of the benefit amounts promised, adding to its deficit. Between the end of fiscal years 2008 and 2009, the deficit in PBGC’s single-employer insurance program doubled in size from $10.7 billion to $21.1 billion. Should all the underfunded auto industry plans fail, PBGC’s January 2009 estimate indicated that its end of fiscal year 2009 deficit could triple in size. An increase of this magnitude would have implications not just for PBGC’s accumulated deficit, but for its overall funding going forward, as the auto industry is responsible for contributing a significant portion of PBGC’s premiums each year. According to PBGC’s most recent data book, the motor vehicle equipment industry accounted for about 1.2 percent of all insured plans under the single-employer insurance program in 2007, but 6.1 percent of all insured participants and 7.3 percent of all premiums. 
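The “could triple” estimate above is straightforward arithmetic, shown here as a quick check using the figures cited in the text (in $ billions); the variable names are ours.

```python
deficit_fy2009 = 21.1   # single-employer program deficit, end of FY2009
auto_exposure = 42.0    # PBGC's January 2009 estimate of auto-industry
                        # unfunded guaranteed benefits

combined = deficit_fy2009 + auto_exposure
print(round(combined, 1))                    # 63.1
print(round(combined / deficit_fy2009, 1))   # 3.0 -- roughly triple
```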
With respect to PBGC’s exposure for GM’s and Chrysler’s pension plans in particular, PBGC calculated its potential exposure prior to when the new companies assumed sponsorship of the plans. Before the change in sponsorship, PBGC estimated that its exposure for GM’s unfunded guaranteed benefits would be about $9.0 billion, and that its exposure for Chrysler’s unfunded guaranteed benefits would be about $5.5 billion (see table 4). Even without the change in sponsorship, actual losses to PBGC could be substantially different, as estimates of exposure are inherently difficult to calculate. For example, the significant volatility in plan underfunding and sponsor creditworthiness over time makes long-term estimates of PBGC’s expected claims difficult. Moreover, there is a time lag in making these estimates. Estimates of exposure are generally based on company reports filed as of December 31 of the previous year. Thus, the dramatic increase in PBGC’s aggregate reasonably possible exposure between fiscal years 2008 and 2009 depicted in figure 6 was primarily due to the deterioration of credit quality and poor asset returns that occurred during calendar year 2008. Subsequent changes in economic conditions (such as the steady rise in equity returns since March 2009) were not yet reflected in these estimates. In addition, actual losses from terminated plans reflect only PBGC’s liability for unfunded guaranteed benefits, but the guarantee limits are not factored into the estimates because their extent and effect are difficult to determine prior to actual termination. However, PBGC’s exposure for unfunded guaranteed benefits in the auto supply sector has already begun to materialize. 
Over the past year, the plans of several large suppliers were terminated and trusteed by PBGC, and PBGC estimates that the unfunded guaranteed benefits that it will be required to pay to participants in the plans of these large suppliers will exceed $6.6 billion (see table 5). The estimate for the pension plans of the former Delphi Corporation alone is over $6.2 billion. To help protect against further exposure, according to PBGC’s 2009 annual report, the agency was continuing to monitor the auto industry and negotiate settlements for additional pension protections in several auto-related corporate downsizing cases. For example, in the case of Visteon Corporation, a large automotive supplier, PBGC negotiated an agreement in January 2009 that required Visteon to provide over $55 million in additional protections to workers at closed facilities by making cash contributions to the plan, providing a letter of credit to PBGC, and obtaining a guaranty from certain affiliates of certain contingent pension obligations. Similarly, in the case of Cooper Tire & Rubber Company, PBGC negotiated a deal in August 2009 that required the plan sponsor to strengthen the plan by $62 million, in connection with a plant closing in Albany, Georgia. According to PBGC, such protections can help prevent plan termination or, in the event that the plan does terminate, reduce the losses to the insurance program and participants. If PBGC were to become trustee of GM’s and Chrysler’s pension plans, the impact on its resources would be unprecedented. As illustrated in figure 7, the number of participants and trust fund assets that PBGC is responsible for managing would increase dramatically. 
Moreover, in addition to their sheer size, these plans have many of the characteristics that contribute to complexity and delays in processing, such as a history of mergers, complicated benefit formulas, movement of participants and assets across plans, and large numbers of participants subject to one or more of the legal limits on guaranteed benefits. Among plans terminated and trusteed by PBGC, the average number of participants per plan is just under 1,000, but most of GM’s and Chrysler’s plans far exceed this average. For example, as of the end of September 2008, GM’s hourly plan had over 500,000 participants, and its salaried plan had nearly 200,000. Based on counts as of the beginning of 2008 (the most recent available), Chrysler’s UAW Plan had about 135,000 participants, and the Chrysler Pension Plan had about 44,000 participants. Only two of Chrysler’s ten plans had fewer than 1,000 participants. Taken together, the number of participants in these two companies’ pension plans is equal to about 40 percent of all the participants in all the plans terminated and trusteed by PBGC since the agency was established in 1974. Even more striking, taken together, the amount of assets in these two companies’ pension plans exceeds—by a considerable margin—the total amount of assets that PBGC is currently managing for all the plans it has trusteed combined (see fig. 7). As delineated in a previous report, such characteristics contribute to complexity and delay in processing. For example, both GM and Chrysler have long histories of acquisitions, mergers, and divestitures, stretching over the past century (see appendix V). To determine the potential impact on any current or future retirees or beneficiaries of the plan, documentation concerning each change must be obtained, along with data about any affected employees. 
An employee’s movement from one plan to another also can cause complexity in benefit calculations. Even within a plan, tiers can be created that treat some employees differently and make benefit calculations more complicated. For example, at both GM and Chrysler, different formulas were created for employees based on such things as the date employees began participating in their plans or whether or not they contributed to their plans. Delays also result when PBGC must adjust participants’ benefits to comply with legal requirements. PBGC guarantees participants’ benefits only up to certain limits, specified under ERISA and related regulations. Among GM’s and Chrysler’s plans, certain provisions and characteristics of participants suggest that many would likely be subject to one or more of these limits should the plans be terminated, as discussed further in the next section. Recent changes in the law added new provisions concerning the treatment of certain events, such as plant shutdowns and attrition programs (referred to as “unpredictable contingent events”). PBGC has begun to grapple with some of these complexities following the termination of the Delphi plans, as many of the benefits provided by the Delphi plans reflect negotiations with UAW and are similar to benefits provided by UAW plans across the auto sector. In its 2009 annual report, PBGC noted that it has been taking steps to prepare for the possible trusteeship of large auto industry plans by defining the changes to its infrastructure that would be needed to handle the increase in workload. The types of changes examined as part of this effort included expanded contracts, additional staff, and increased capacity in its information technology system. When ERISA’s guarantees do not cover all pension benefits promised by an underfunded plan that is terminated, those participants whose benefits are reduced share in the losses from the plan’s termination. 
In many cases involving terminated and trusteed plans, participants’ full benefit amounts are guaranteed and their benefits are not reduced as a result of the termination. But in cases involving complex plans with generous benefit structures such as GM’s and Chrysler’s, large numbers of participants are likely to have benefits subject to the guarantee limits and, depending on the extent of plan underfunding at termination, these participants would be at risk of having their benefits reduced as a result. When PBGC calculated its exposure across the auto sector as a whole in January 2009—prior to the shift in sponsorship of GM’s and Chrysler’s plans to the new companies—PBGC estimated that about $35 billion in unfunded liabilities would be nonguaranteed benefits; that is, plan participants would bear losses for about $35 billion in benefits not funded by the company and not guaranteed by PBGC if all the at-risk underfunded plans across the sector were terminated. Of this $35 billion, about half ($18 billion) was attributable to GM’s plans, and another $5 billion was attributable to Chrysler’s plans. Participants most often affected by the application of guaranteed benefit limits are high earners whose benefits exceed the maximum limit, those who take early retirement, and those whose benefits increased due to recent plan amendments. We were unable to obtain precise data on the number of GM and Chrysler plan participants whose benefits might be reduced due to these limits; however, GM and Chrysler pension plans provide several options for early retirement, with supplemental benefits to those who retire before age 62 as a bridge to Social Security benefits. Under one type of guarantee limit (the accrued-at-normal limit), any supplements being provided to retirees as of the date of plan termination, and any supplements to be provided to future retirees, would not be guaranteed. 
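The accrued-at-normal limit described above can be sketched in a few lines. This is a simplified reading of the rule (the actual regulations contain additional conditions), and the retiree’s dollar amounts are hypothetical:

```python
def guaranteed_monthly_benefit(base_benefit, supplement, accrued_at_normal):
    """Accrued-at-normal limit (simplified): the guaranteed monthly payment
    is capped at the straight-life benefit accrued at normal retirement age,
    so a temporary early-retirement supplement above that cap is not
    guaranteed."""
    return min(base_benefit + supplement, accrued_at_normal)

# Hypothetical early retiree: $1,500/month base benefit plus an $800/month
# Social Security bridge supplement, with $1,500/month accrued at normal
# retirement age -- the supplement falls outside the guarantee.
print(guaranteed_monthly_benefit(1500, 800, 1500))  # 1500
```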
According to PBGC officials, a significant number of GM and Chrysler participants could be vulnerable to having their benefits reduced due to this limit should the pension plans be terminated. In addition, retirees whose benefits reflect increases in the 5 years prior to the date of plan termination could be subject to another type of guarantee limit (the phase-in limit). For example, if GM’s and Chrysler’s plans had been terminated in 2009, this limit would have affected the increases in benefits provided in the 2007 UAW contracts negotiated with both GM and Chrysler, causing only a part of those increases to be guaranteed. Benefit increases offered as enhancements under recent attrition programs would also be subject to the phase-in limit. Although many participants would likely lose some portion of their nonguaranteed benefits if the automakers’ plans were terminated, not all would be at equal risk. This is because when a pension plan is terminated and trusteed by PBGC, ERISA specifies that the remaining assets of the plan and any funds recovered for the plan from company assets be allocated to participant benefits according to a certain priority order (see appendix VI). Due to this allocation process, if GM and Chrysler plans were terminated, participants who were retired (or eligible to retire) for at least 3 years would be most likely to have some or all of their nonguaranteed benefits paid, while those participants who retired early—especially those who retired under one of the special attrition programs—would be most at risk for having their benefits reduced. The exposure to loss from plan termination would shift over time, but it is unclear whether PBGC or plan participants would be better off as a result. 
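The phase-in limit can be illustrated with a short sketch. Under the general ERISA rule, a benefit increase adopted within 5 years of termination is guaranteed only in proportion to the number of full years it has been in effect, at the greater of 20 percent of the increase or $20 per month per year. The $100 increase below is hypothetical, and this simplified function ignores the rule’s finer points:

```python
def phased_in_increase(monthly_increase, full_years_in_effect):
    """Phase-in limit (simplified): guarantee the greater of 20% of the
    increase or $20/month for each full year the increase has been in
    effect, capped at the full increase (fully phased in after 5 years)."""
    per_year = max(0.20 * monthly_increase, 20.0)
    return min(per_year * min(full_years_in_effect, 5), monthly_increase)

# Hypothetical $100/month increase negotiated 2 full years before a 2009
# termination: only $40/month of it would be guaranteed.
print(phased_in_increase(100, 2))  # 40.0
print(phased_in_increase(100, 5))  # 100.0 (fully phased in)
```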
Hypothetically, if plans were to terminate 5 years into the future—in 2014 instead of 2009—overall losses could either increase or decrease, and how those losses would be shared between PBGC and plan participants would likely shift as well. For example, plan assets could grow or diminish over time, depending on investment returns and employer contributions. Plan liabilities could also grow or diminish over time, depending on interest rates, ages of participants, and whether benefits are revised in future years. In addition, more participants could acquire vested benefits over time, increasing liabilities, while more benefits would have been paid out over time, decreasing liabilities. How the losses due to unfunded benefits would be shared between PBGC (for guaranteed benefits) and plan participants (for nonguaranteed benefits) could also shift over time. For example, participants’ monthly amount of guaranteed benefits would increase over time for three main reasons: (1) more workers would be eligible to retire with more generous benefits, based on years of service; (2) the maximum limits are updated each year and thus would increase, and people would grow older, so the cutbacks due to this limit would grow smaller; and (3) the benefit reductions due to the phase-in limit would be phased out. This increase in the monthly amount of guaranteed benefits would tend to shift costs from participants to PBGC. Meanwhile, over time, more participants will have been retired (or eligible to retire) for 3 years or more, and thus have benefits eligible for higher priority status in the asset allocation process. In addition to shifting the distribution of benefits to be paid among different groups of participants, this could also cause more of the plan’s remaining assets to be allocated to guaranteed benefits within this priority category, with less available to cover nonguaranteed benefits, resulting in a shift in costs from PBGC to plan participants. 
Taking all these factors into account, it is unclear whether the passage of time would increase or decrease the overall cost of unfunded guaranteed benefits to be paid by PBGC compared with the loss of unfunded nonguaranteed benefits to be borne by plan participants. Clearly, improvements in the financial well-being of the companies and their pension plans would serve the best interests of both PBGC and plan participants. As a result of GM’s and Chrysler’s restructuring, the federal government has assumed new roles vis-à-vis the automakers as part-owner and lender, in addition to its traditional role as pension regulator. On behalf of the U.S. taxpayer, Treasury has an interest, as a shareholder, in the financial well-being of the companies, as well as the viability of their pension plans. These interests may diverge at times. Although Treasury has established policies designed to separate these interests, the perception of a conflict could arise, for example, should choices need to be made regarding the allocation of funds from the companies to their pension plans. Under normal circumstances, transparency and disclosures to the public related to agency actions can often mitigate risks related to conflicts of interest. But, in this case, because this involves private companies and business sensitive information, Treasury is less able to rely on transparency and disclosure in its dealings with the automakers to mitigate any potential conflicts of interest. Nevertheless, as we have previously reported, what Treasury’s goals are for its investment in Chrysler and GM, among other things, is important information for Congress and the public to have. 
Although Treasury provides public information on TARP activities, including AIFP, through its legally mandated monthly reports to Congress, transaction reports, and other publications, these do not provide information on the indicators Treasury may use in assessing the goals for its auto investments and the status of the automakers’ pensions. Identifying these indicators for Congress, and sharing as much of this information as possible, while still respecting the sensitivity of certain business information, could help Congress and the public better understand whether the investment in the auto companies has been successful and help mitigate potential or perceived conflicts of interest. Recognizing the potential for interested parties to perceive conflicts, Treasury has taken several other steps to mitigate its risk. First, to guide its oversight of the investments going forward and limit its involvement in the day-to-day operations of the companies, Treasury developed four core principles: (1) acting as a reluctant shareholder, for example, by not owning equity stakes in companies any longer than necessary; (2) not interfering in day-to-day management decisions; (3) ensuring a strong board of directors; and (4) exercising limited voting rights. According to Treasury officials, use of these core principles defines the operating boundaries of the federal role within its ownership context by limiting the reach and ability of the government to exert its powerful influence on the business and operational matters of these companies. Officials noted that the core principle of not interfering in day-to-day decisions has been particularly helpful in dealing with political pressures related to business operations. For example, officials said that Treasury’s auto team received about 300 congressional letters in 2009 regarding day-to-day management issues involving GM and Chrysler. 
Several of these letters asked about company decisions and strategies, or called on Treasury to exert influence on the companies’ business decisions. Some letters lobbied either in favor of or against a certain practice or activity. Others were passed along on behalf of particular constituent concerns. Treasury officials said that, because of their core principle, most of the time they can simply reply to such letters by reiterating their policy of not getting involved with the companies’ business decisions, and as a result, they have been able to avoid having to respond to these pressures. Second, to implement these core principles, Treasury established a protective barrier between the Treasury officials (beneath the Secretary level) who make policy-related decisions with respect to investments in the automakers and the Treasury officials who are responsible for regulating pensions or overseeing the operations of PBGC. In theory, this barrier prevents Treasury in its role as owner from interacting with Treasury in its role as pension regulator or overseer of PBGC. Treasury officials stated that, in the management of its investment in GM and Chrysler, the Treasury auto team does not communicate with the IRS or PBGC. Given the importance of balancing its competing interests as regulator and part-owner, and mitigating the appearance of conflicts between these interests, it is essential that Treasury ensure that it has an adequate number of staff with the appropriate skills and expertise to carry out its various tasks. Because of earlier reductions in the number of Treasury staff working on the AIFP and Treasury’s stated plans to disband the team focused exclusively on managing Treasury’s stake in the auto industry, we recently recommended that Treasury ensure it has the expertise needed to adequately monitor and divest the government’s investments in Chrysler and GM. 
We believe that ensuring sufficient staffing continues to be essential, particularly in light of the circumstances discussed here. Subsequent to our making this recommendation, Treasury officials said they hired two additional analysts dedicated solely to monitoring Treasury’s investments in Chrysler and GM, and planned to hire one more. The steps taken to mitigate any risks likely to result should conflicts of interest arise—adoption of the core principles and establishment of a protective barrier—may help, but the tensions inherent in Treasury’s multiple roles remain. This can be illustrated by the conflicting pressures that would likely be brought to bear in two critical and interrelated contexts: (1) how to respond to a decline in pension funding; and (2) how to decide when to sell the government’s shares of stock. Treasury officials told us they expect both GM and Chrysler to return to profitability. If this is the case, and the companies are able to make the required contributions to their pension plans as they become due, then Treasury’s multiple roles are less likely to result in any perceived conflicts. However, if the funding of any of GM’s or Chrysler’s defined benefit plans declines below certain funding levels set out in statute, the company may request a waiver—that is, request permission from IRS (within Treasury) to reduce its required contributions to its plans over an extended period. Despite Treasury’s protective barrier and the autonomy of IRS to grant or refuse such a waiver request apart from any influence from other units within Treasury, some may still perceive a possible tension between Treasury’s interest in the value of its shareholder investment and Treasury’s interest, through its oversight of PBGC, in ensuring the viability of the pension plans. 
In addition, Treasury has been clear that it wants to divest its shares as soon as practicable, but it must weigh a variety of factors when making the decision about when and how this should happen. Treasury officials said that on the basis of their analysis of the companies’ future profitability, they believe that both GM and Chrysler will be able to attract sufficient investor interest for Treasury to sell its equity. However, the timing that appears best from a shareholder perspective—that is, the timing that would maximize the return on the taxpayer’s investment—could be at odds with the best interests of plan participants and beneficiaries. For example, Treasury could decide to sell its equity stake at a time when it would maximize its return on investment, but when the companies’ pension plans were still at risk. Finally, in the event that the companies do not return to profitability in a reasonable time frame, Treasury officials said that they will consider all commercial options for disposing of Treasury’s equity, including forcing the companies into liquidation, which would likely mean that the companies’ pension plans would be terminated and decisions would need to be made about the allocation of remaining company assets. In such circumstances, although there is a protective barrier preventing Treasury in its role as shareholder from interacting with Treasury in its role overseeing the PBGC, it may be difficult for the agency to make certain decisions without some perceiving a tension between these two separate roles. Treasury’s substantial investment and other assistance, as well as loans from the Canadian government and concessions from nearly every stakeholder, including the unions, have made it possible for Chrysler and GM to stabilize and survive years of declining market share and the deepest recession since the Great Depression. 
However, because of the ongoing challenges facing the auto industry—including the still-recovering economy and weak demand for new vehicles—the ultimate impact that the assistance will have on the companies’ profitability and long-term viability remains uncertain. This, too, is the case for the companies’ pensions. The companies’ ability to make the large contributions that would be required based on current projections is mostly dependent on their profitability. Treasury officials who oversee TARP expect both automakers to return to profitability. Ultimately, much of the automaker recovery is dependent not only on how well the automakers turn their companies around but also on how well the overall economy and employment levels improve. The suppliers’ future is even more complex. GM and Chrysler are expected to continue to reduce the number of suppliers that they use going forward. Suppliers have diversified their client base to include many other domestic and international automakers to minimize the impact of such cuts, but this has caused their viability to be more dependent on a global economic recovery, which has been slow. As a result, supplier bankruptcies and pension plan terminations may continue for the near future. In light of these conditions, the risks to PBGC and participants in auto sector pension plans remain significant. PBGC estimated its exposure for unfunded guaranteed benefits across the sector to be about $42 billion as of January 31, 2009, and the exposure for plan participants for unfunded nonguaranteed benefits to be about $35 billion. The federal government and its institutions, the automakers, and the unions have all made a concerted effort to ensure that GM and Chrysler do not fail. But, should the automakers not return to profitability, interests may no longer be aligned. 
Treasury officials said that they will consider all commercial options for disposing of Treasury’s equity, including liquidation; this would likely mean terminating the companies’ pension plans, and allocating remaining company assets. In such circumstances, it would be difficult for Treasury to make any decisions that would trade off the value of its investment against the expense of the pension funds, potentially exposing the government either to loss of its TARP investment or to significant worsening of PBGC’s financial condition. This is not a choice the government wants to face, but this risk and its attendant challenges remain real. We recently recommended that Treasury should regularly communicate to Congress about TARP activities, including the financial health of GM and Chrysler. This would include information on the companies’ pensions as an integral part of the companies’ financial health. Treasury already provides some information on its investments in the automakers through its monthly reports to Congress. In response to our previous recommendations, Treasury said that it intended to develop an approach for reporting on its investments in the auto industry that strikes an appropriate balance between transparency and the need to avoid compromising the competitive positions of the companies, and that it was implementing a communication strategy to provide key congressional stakeholders more current information about its TARP activities. These reports could provide a vehicle to report publicly available information on the financial status of the automakers’ pensions. Such disclosure could help mitigate the potential or perceived tensions that could arise with the federal government’s multiple roles with respect to the automakers and, when the time comes, could shed light on how Treasury’s decision to divest will impact the companies’ pension plans. 
We obtained written comments on a draft of this report from the Department of the Treasury (see appendix VIII) and from PBGC (see appendix IX). Treasury generally agreed with our findings, but reiterated the importance of striking an appropriate balance in its public reporting between its goal of transparency and the need to avoid compromising the competitive positions of the companies or its ability to recover funds for taxpayers. Treasury noted that it already provides “a wealth of information” about AIFP on its Web site, and also provides periodic updates to oversight bodies, including GAO. It further noted that it will provide additional reports on its investments in Chrysler and GM as circumstances warrant, but that it will not communicate confidential business information due to the potential to negatively affect the value of the investments. Treasury concluded that, given its role as a shareholder, it would be inappropriate for it to report separately on the assets and liabilities in the automakers’ pension plans to Congress and the public. We understand the importance of protecting the automakers’ proprietary interests. However, as we pointed out in our report, Treasury’s role is multifaceted, serving not only as a shareholder and creditor for Chrysler and GM, but also as a regulator of pensions. As a creditor of these companies, Treasury should know and disclose the pension commitments, which represent liabilities for these companies. These liabilities must be taken into account when evaluating the financial status of these companies. GM and Chrysler are already required to disclose certain information about the status of their pensions in publicly available reports. 
By including this publicly available information on the status of the automakers’ pension plans in its reports to Congress, Treasury could provide a more complete picture of the companies’ financial health and help mitigate any perceived tensions between the various roles that the Treasury currently plays as shareholder, creditor, and pension regulator without compromising the companies’ competitive positions. Both Treasury and PBGC provided technical comments, which are incorporated into the report where appropriate. In addition, we received technical comments on certain segments of the draft report from GM, Chrysler, and Delphi, and have incorporated their comments where appropriate, as well. We are sending copies of this report to other interested congressional committees and members, the Acting Director of PBGC, the Secretary of Labor, the Secretary of the Treasury, and other interested parties. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Barbara Bovbjerg at (202) 512-7215 ([email protected]) or A. Nicole Clowers at (202) 512-2843 ([email protected]). Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X. Both as the former Delphi, prior to bankruptcy, and now as the “new Delphi,” postbankruptcy, the Delphi Corporation has been a leading global supplier of mobile electronics and transportation systems, including powertrain, safety, thermal, controls and security systems, electrical/electronic architecture, and in-car entertainment technologies. Delphi evolved as part of General Motors (GM) until it was spun off as a separate entity in 1999. 
At the time it filed for Chapter 11 bankruptcy in 2005, the company employed more than 185,000 workers in 38 countries, making it one of the largest suppliers in the world. The former Delphi Corporation sponsored six defined benefit plans for its U.S.-based workers: the Delphi Hourly-Rate Employees Pension Plan; the Delphi Retirement Program For Salaried Employees; the Packard-Hughes Interconnect Bargaining Retirement Plan; the Packard-Hughes Interconnect Non-Bargaining Retirement Plan; the ASEC Manufacturing Retirement Program; and the Delphi Mechatronic Systems Retirement Program. Following Delphi’s spin-off from GM in 1999, GM agreed with its unions, including the International Union, United Automobile, Aerospace and Agricultural Implement Workers of America (UAW), to offer pension protections for certain employees in the event that Delphi’s pension plans were frozen or terminated. Specifically, under the agreement, GM agreed with three unions to provide certain former GM employees who retired from Delphi with pension benefits that would otherwise not be paid by Delphi or by the Pension Benefit Guaranty Corporation (PBGC) upon plan termination. Salaried and certain other union-represented employees did not receive similar contractual commitments from GM with respect to their pensions or other postemployment benefits, and they are suffering the full impact of their Delphi plans having been frozen and terminated. In addition, GM agreed to provide transfer rights for certain Delphi hourly UAW-represented employees in the United States. Specifically, it provided these employees with “flowback” opportunities to transfer to GM as appropriate job openings became available at GM. GM employees in the U.S. had similar opportunities to transfer to Delphi. The original flowback agreement provided that, when an employee transferred, the employee would be eligible for pension benefits that reflected the transferring employee’s combined years of credited service. 
The parties did not transfer pension assets or liabilities in order to accomplish this. Rather, pension responsibility between Delphi and GM was allocated on a pro-rata basis based upon the employee’s credited service at each company. After Delphi and its U.S. subsidiaries filed for bankruptcy in 2005, there were extensive efforts involving negotiations between Delphi, GM, and other stakeholders to keep the pension plans ongoing. On September 30, 2008, the company froze its salaried plan, the ASEC Manufacturing Retirement Program, the Delphi Mechatronic Systems Retirement Program, and the Packard-Hughes Interconnect Non-Bargaining Retirement Plan. The company also reached agreement with its labor unions allowing it to freeze the accrual of traditional benefits under its hourly plan, effective as of November 30, 2008. Delphi received the consent of its labor unions and approval from the court to transfer certain assets and liabilities of Delphi’s hourly plan to GM’s hourly plan. The first transfer involved liabilities of approximately $2.6 billion and assets of approximately $486 million (about 90 percent of the estimated $540 million of assets initially scheduled to be transferred). It was anticipated that the remaining assets would be transferred by March 29, 2009, upon finalizing the related valuations. In exchange for the first transfer, Delphi’s reorganization plan released GM from all claims that could be brought by its creditors with respect to, among other things, the spin-off of Delphi, any collective bargaining agreements to which the former Delphi was a party, and any obligations to former Delphi employees. Although the first transfer had the effect that no contributions were due under the hourly plan for the plan year ended September 30, 2008, Delphi still had a funding deficiency of $56 million for the salaried plan and an approximate $13 million funding deficiency for its other pension plans for the plan year ending September 30, 2008. 
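The pro-rata flowback arrangement described above amounts to splitting responsibility for a transferred employee’s benefit in proportion to credited service at each company. A sketch with hypothetical numbers (an illustration of the proportional arithmetic, not the actual contract terms):

```python
def pro_rata_split(total_monthly_benefit, delphi_service_years, gm_service_years):
    """Split pension responsibility between Delphi and GM in proportion
    to the employee's credited service at each company."""
    total_service = delphi_service_years + gm_service_years
    delphi_share = total_monthly_benefit * delphi_service_years / total_service
    return delphi_share, total_monthly_benefit - delphi_share

# Hypothetical flowback employee: $2,000/month combined benefit,
# 10 years of credited service at Delphi and 30 years at GM.
print(pro_rata_split(2000, 10, 30))  # (500.0, 1500.0)
```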
Delphi applied to the Internal Revenue Service (IRS) for a waiver of the obligation to make the minimum funding contribution to the salaried plan by June 15, 2009, and requested permission, instead, to pay the amount due in installments over the following 5 years. However, Delphi abandoned the waiver request when it became clear that it could not afford to maintain the salaried plan and that GM was not going to assume it. As Delphi explained: “ . . . due to the impact of the global economic recession, including reduced global automotive production, capital markets volatility that has adversely affected our pension asset return expectations, a declining interest rate environment, or other reasons, our funding requirements have substantially increased since September 30, 2008. Should we be unable to obtain funding from some other source to resolve these pension funding obligations, either Delphi or the Pension Benefit Guaranty Corporation (the “PBGC”) may initiate plan terminations.” Delphi’s financial difficulties continued, and when the second transfer of pension assets and liabilities to GM was not implemented on July 31, 2009, PBGC terminated all six of Delphi’s U.S. qualified defined benefit plans. PBGC assumed responsibility for the plans on August 10, 2009. According to PBGC, this step was necessary because Delphi had stated that it could not afford to maintain its pension plans and GM, which itself had reorganized in bankruptcy earlier in the year, had stated that it was unable to afford the additional financial burden of the Delphi pensions. PBGC stated that the Delphi pension plans were $7 billion underfunded when the plans were terminated. PBGC estimates that it will make up about $6 billion of that shortfall using PBGC funds. Following PBGC’s takeover of the plans, on October 6, 2009, in accordance with Delphi’s plan of reorganization, the former company sold its U.S. 
and foreign operations to a new entity, Delphi Automotive LLP, with the exception of four UAW sites in the United States and its steering business, which were sold to GM. PBGC has acknowledged that the calculation of benefits for former Delphi plan participants will be a difficult, lengthy process due to the plans’ complex benefit structures and the availability of documentation for all the mergers and acquisitions that have taken place throughout the life of the plans. On its Web site, PBGC stated that it could take 6 to 9 months from Delphi’s date of trusteeship before it adjusted benefits to estimated PBGC benefit amounts. Moreover, PBGC noted that it could take several years to fully review the plan and finally determine all benefit amounts. To help protect the retirement income of U.S. workers with private sector defined benefit plans, PBGC guarantees participant benefits up to certain limits specified under the Employee Retirement Income Security Act of 1974 (ERISA) and related regulations. These limits include the phase-in limit, the accrued-at-normal limit, and the maximum limit, as illustrated below in figure 8. Appendix III: Recent Attrition Programs at GM and Chrysler are based on actual numbers; data for 2009 are based on projected numbers, across all ten U.S. ualified defined benefit plans, as appropriate. Lump sum payments during 200 paid with pension plan assets; payments before 200 and after 200 paid with company assets. , the retirement age was 5 instead of 55 for certain salaried nonunion employees. Production ceased at the end of August 2009. The New United Motor Manufacturing Incorporated facility (known as “Nummi”) jointly operated by GM and Toyota in Fremont, CA, to close. Production of the last Pontiac model will cease by the end of December 2010. None identified to date. In February 2010, GM announced that the sale of Hummer to Sichuan Tengzhong Heavy Industrial Machinery Co., Ltd. 
could not be completed and there would be an orderly wind-down of Hummer operations. Approximately 850 units of the H3 model are currently being produced for a fleet customer. H3 production will cease at the end of June 2010. All other Hummer production ceased at the end of September 2009. Buyer: none identified to date.

Production ceased at the end of July 2009. Buyer: none identified to date.

Saturn: Following Penske Automotive Group’s decision to terminate discussions to acquire Saturn in September 2009, GM announced that it would be winding down the Saturn brand and dealership network. Production ceased at the end of December 2009. Buyer: none identified to date.

Saab: Purchased by Spyker Cars, NV, on February 23, 2010. The previously announced wind-down of Saab operations has ended. Saab and Spyker will operate under the Spyker (AMS:SPYKR) umbrella, and Spyker will assume responsibility for Saab operations.

Facilities: The total number of assembly, powertrain, and stamping facilities in the United States is to be reduced from 47 in 2008 to 34 by the end of 2010 and 33 by 2012.
- Powertrain castings plant in Massena, NY, closed in May 2009.
- Stamping plant in Grand Rapids, MI, closed in May 2009.
- Assembly plant in Wilmington, DE, closed in July 2009.
- Assembly plant in Pontiac, MI, closed in September 2009.
- Stamping plant in Mansfield, OH, closed in January 2010.
- Powertrain engine plant in Livonia, MI, to close by July 2010.
- Powertrain plants: Flint North components plant and Willow Run Site, MI, and Parma, OH, components plant to close by August 2010; powertrain components plant in Fredericksburg, VA, to close by August 2010.
- Stamping plant in Indianapolis, IN, to close by December 2011.
- Stamping plant and assembly plant in Shreveport, LA, to close by June 2012.
- Three parts distribution centers closed: Boston, MA; Columbus, OH; and Jacksonville, FL, on December 31, 2009.

Chrysler: Dodge Magnum and the Chrysler Pacifica, Crossfire, and PT Cruiser convertible.
Announced in November 2007 that these four models were to be eliminated from the product portfolio through 2008. Subsequently announced that the PT Cruiser would remain in production. Production at several North American assembly and powertrain plants to be cut, which, combined with other actions, was expected to reduce the number of hourly jobs by 8,500 to 10,000 people through 2008. (See the updated May 2009 list of plant closings below.) Announced in June 2009 that production would end effective July 10, 2009, at the St. Louis Assembly Plant North in Fenton, MO; see also below.

Plants scheduled for closing, as of May 2009:
- St. Louis Assembly Plant South in Fenton, MO, closed in October 2008.
- Assembly plant in Newark, DE, closed in December 2008.
- St. Louis Assembly Plant North in Fenton, MO, was to close by the end of September 2009, with production to be moved to the Warren Truck Assembly plant.
- Conner Avenue Assembly Plant in Detroit, MI, was to close in December 2009.
- Stamping plant in Twinsburg, OH, was to close in March 2010, with existing volume to be transferred to the Warren Stamping and Sterling Stamping plants.
- Assembly plant in Sterling Heights, MI; engine plant in Kenosha, WI; and axle plant in Detroit, MI, to close at the end of December 2010.

GM:
- 1908: Acquired Oldsmobile and Reliance Motor Truck Company.
- 1909: Acquired Cadillac; Oakland Motor Car Company; Rapid Motor Vehicle Company (later renamed GMC Truck); and Champion (later renamed AC Spark Plug Company).
- 1918: Acquired McLaughlin Motor Company (later renamed General Motors of Canada) and United Motor Corporation.
- 1919: Acquired Fisher Body; Dayton Wright Company; Guardian Frigerator (later renamed Frigidaire); and Saginaw Malleable Iron Company (renamed Saginaw Products Company).
- 1925: Acquired Vauxhall Motors, Ltd., based in Luton, England.
- 1929: Acquired Adam Opel Corporation, located in Rüsselsheim, Germany; and Allison Engineering Company.
- 1930: Acquired Electro-Motive Engineering Corporation.
- 1931: Acquired Holden’s Motor Body Builders Limited; merged with GM’s Australia Proprietary, Limited, to form Holden’s Limited, located in Melbourne, Australia.
- 1933: Acquired a controlling interest in North American Aviation; merged with GM’s General Aviation division.
- 1953: Acquired Euclid, Inc.
- 1968: Sold most of Euclid; renamed remaining facilities the Terex Division.
- 1973: Merged Allison Engineering with Detroit Diesel.
- 1981: Sold Terex Division.
- 1984: Acquired Electronic Data Systems Corporation.
- 1985: Acquired Hughes Aircraft Company; merged with Delco Electronics to form a new subsidiary called Hughes Electronics.
- 1988: Spin-off of Detroit Diesel.
- 1989: Purchased 50 percent equity in Saab Automobile AB of Sweden; later purchased the remaining 50 percent to become sole owner in 2000.
- 1993: Sold Allison Gas Turbine.
- 1996: Sold Electronic Data Systems Corporation.
- 1997: Sold Hughes Aircraft to Raytheon.
- 1999: Spin-off of Delphi; acquired exclusive rights to the Hummer brand name from AM General Corporation.
- 2002: Acquired the bulk of Korean automaker Daewoo Motor’s automotive assets and created a new company called GM Daewoo Auto & Technology.
- 2003: Sold Hughes Electronics.
- 2005: Sold Electro-Motive Diesel.
- 2006: Divested majority ownership in its financing unit, General Motors Acceptance Corporation (now known as GMAC).
- 2007: Sold Allison Transmission.
- 2009: Acquired five U.S.-based components plants from Delphi.

Chrysler:
- 1925: Founded June 6, 1925.
- 1928: Acquired Dodge.
- 1957: Acquired Ensamblaje Venezolana, soon renamed Chrysler de Venezuela S.A.
- 1959: Acquired Chrysler South Africa Ltd.
- 1963: Acquired Chrysler Hellas S.A., Greece.
- 1965: Acquired the outboard engine business of West Bend Company of Hartford, Wisconsin, and the Lone Star Boat Company of Plano, Texas, forming the Chrysler Boat Corporation.
- 1967: Acquired Redisco, Inc., from American Motors Corporation and integrated it with Chrysler Credit to form Chrysler Financial Corporation. Also acquired 77 percent of Barreiros Diesel S.A. (Spain) and increased interest in Chrysler do Brasil (Brazil) to 92 percent.
- 1970: Control of Rootes Group equity reached 73 percent; the company was renamed Chrysler United Kingdom Ltd.
- 1976: Sold the Airtemp Division to Fedders Corporation.
- 1978: Sold the Chrysler Europe Division.
- 1980: Sold the Marine Division.
- 1981: Sold the Defense Division to General Dynamics.
- 1984: Reorganized into a holding company that included Chrysler Motors, Chrysler Financial, Gulfstream Aerospace, and Chrysler Technologies.
- 1987: Acquired American Motors Corporation (and Jeep) for $800 million.
- 1998: Merged with Daimler-Benz AG; operated as “Chrysler Group,” a business unit of DaimlerChrysler AG.
- 2007: Just over 80 percent of Chrysler and its related financial services business sold to Cerberus Capital Management for $7.4 billion.
- 2008: Spin-off of Chrysler Financial Corporation.

When a pension plan is terminated and trusteed by PBGC, ERISA specifies that the remaining assets of the plan and any funds recovered for the plan during the bankruptcy proceedings be allocated to participant benefits according to six priority categories (see table 10). Funds recovered from bankruptcy proceedings are also allocated using these priority categories, but unlike plan assets, recoveries are required to be shared between participants’ unfunded nonguaranteed benefits and PBGC’s costs for unfunded guaranteed benefits. As a result, recoveries are often more advantageous for participants than residual plan assets. PBGC allocates the participants’ portion of the recoveries beginning with the highest priority category in which there are unfunded nonguaranteed benefits, and then to each lower priority category, in succession. PBGC prepared example benefit calculations to illustrate how termination of the automaker pension plans might impact participant benefits, depending on the participant’s situation (see table 11).
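The sequential, highest-category-first allocation of plan assets can be sketched as a simple waterfall. This is a minimal illustration only, assuming hypothetical category labels and dollar amounts; it is not PBGC's actual valuation methodology.

```python
# Sketch of ERISA's sequential asset allocation at plan termination:
# assets fill the highest priority category first, then each lower
# category in turn. Category names and amounts are illustrative only.

def allocate_assets(plan_assets, category_liabilities):
    """Allocate remaining plan assets to priority categories in order.

    category_liabilities: list of (category, benefit dollars owed),
    ordered from priority category 1 (highest) to 6 (lowest).
    Returns a dict mapping category -> dollars funded.
    """
    funded = {}
    remaining = plan_assets
    for category, owed in category_liabilities:
        paid = min(owed, remaining)
        funded[category] = paid
        remaining -= paid
    return funded

# Hypothetical plan: $50 million in assets against $80 million owed.
liabilities = [("PC1", 5_000_000), ("PC2", 10_000_000),
               ("PC3", 30_000_000), ("PC4", 20_000_000),
               ("PC5", 10_000_000), ("PC6", 5_000_000)]
funded = allocate_assets(50_000_000, liabilities)
# Assets cover categories 1-3 in full and category 4 only partially.
```

Recoveries would be layered on top of such an allocation, with the participants' share applied beginning at the highest category containing unfunded nonguaranteed benefits.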
The calculations assume that plan assets and recoveries are not sufficient to fund nonguaranteed benefits beyond a portion of those benefits in priority category 3 (that is, of those retired or eligible to retire for at least 3 years), and they focus on those who would lose the most under such situations. Although an early retiree eligible for priority 3 status would lose the least, all early retirees under age 62 as of the date of plan termination would lose a sizeable portion of their benefits until age 62 because their supplements are not guaranteed. The person who retired early under a special attrition program or plant shutdown benefit would lose even more, as the enhanced benefits under the special program would also not be guaranteed, reducing the person’s lifetime benefit by more than half. Finally, the person not yet eligible to retire would lose the most. Compared to the benefits promised under the plan, he would not be able to retire for 5 more years and his payment would be less than a quarter of the amount promised. Over time, in general, more employees will be eligible to retire and qualify for priority 3 status, and the amount of retirees’ monthly guaranteed benefits will increase.

In addition to the contacts named above, Kimberley M. Granger and Raymond Sendejas, Assistant Directors; Charles J. Ford, Jonathan McMurray, Margie K. Shields, Sarah A. Farkas, Heather Halliwell, and Joseph A. Applebaum made significant contributions to this report. James Bennett, Jessica A. Botsford, Orice Williams Brown, Susannah L. Compton, Shannon K. Groff, Cheryl M. Harris, Susan J. Irving, Charles A. Jeszeck, Gene G. Kuehneman, Christopher D. Morehouse, Michael P. Morris, Robert Owens, Roger J. Thomas, and Craig H. Winslow also made important contributions.

Related GAO Products

Troubled Asset Relief Program: The U.S. Government Role as Shareholder in AIG, Citigroup, Chrysler, and General Motors and Preliminary Views on its Investment Management Activities. GAO-10-325T.
Washington, D.C.: December 16, 2009.

Troubled Asset Relief Program: Continued Stewardship Needed as Treasury Develops Strategies for Monitoring and Divesting Financial Interests in Chrysler and GM. GAO-10-151. Washington, D.C.: November 2, 2009.

Pension Benefit Guaranty Corporation: Workers and Retirees Experience Delays and Uncertainty when Underfunded Plans Are Terminated. GAO-10-181T. Washington, D.C.: October 29, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-1048T. Washington, D.C.: September 24, 2009.

Pension Benefit Guaranty Corporation: More Strategic Approach Needed for Processing Complex Plans Prone to Delays and Overpayments. GAO-09-716. Washington, D.C.: August 17, 2009.

Troubled Asset Relief Program: June 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-658. Washington, D.C.: June 17, 2009.

Pension Benefit Guaranty Corporation: Financial Challenges Highlight Need for Improved Governance and Management. GAO-09-702T. Washington, D.C.: May 20, 2009.

Auto Industry: Summary of Government Efforts and Automakers’ Restructuring to Date. GAO-09-553. Washington, D.C.: April 23, 2009.

Troubled Asset Relief Program: March 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-504. Washington, D.C.: March 31, 2009.

Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-296. Washington, D.C.: January 30, 2009.

Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-242T. Washington, D.C.: December 4, 2008.

Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-161. Washington, D.C.: December 2, 2008.

Pension Benefit Guaranty Corporation: Improvements Needed to Address Financial and Management Challenges. GAO-08-1162T. Washington, D.C.: September 24, 2008.

Defined Benefit Pensions: Plan Freezes Affect Millions of Participants and May Pose Retirement Income Challenges. GAO-08-817. Washington, D.C.: July 21, 2008.

PBGC Assets: Implementation of New Investment Policy Will Need Stronger Board Oversight. GAO-08-667. Washington, D.C.: July 17, 2008.

Pension Benefit Guaranty Corporation Single-Employer Insurance Program: Long-Term Vulnerabilities Warrant “High Risk” Designation. GAO-03-1050SP. Washington, D.C.: July 23, 2003.
Over $81 billion has been committed under the Troubled Asset Relief Program (TARP) to improve the domestic auto industry's competitiveness and long-term viability. The bulk of this assistance has gone to General Motors (GM) and Chrysler, which sponsor some of the largest defined benefit pension plans insured by the federal Pension Benefit Guaranty Corporation (PBGC). As part of GAO's statutorily mandated oversight of TARP, this report examines: (1) the impact of restructuring on GM's and Chrysler's pension plans; (2) the impact of restructuring on auto supply sector pension plans; (3) the impacts on PBGC and plan participants should auto industry pension plans be terminated; and (4) how the federal government is dealing with the potential tensions of its multiple roles as pension regulator, shareholder, and creditor. To conduct this study, GAO interviewed officials at GM, Chrysler, a labor union, a supplier association, the Departments of the Treasury and Labor, and PBGC; and reviewed relevant statutes, reports, and documents concerning the automakers' restructuring and pension plan funding. Treasury and PBGC generally agreed with the report's findings. Their technical comments, and the technical comments provided by GM, Chrysler, and Delphi, were incorporated as appropriate. The new GM and the new Chrysler that were established during each company's bankruptcy process in the summer of 2009 assumed sponsorship of all of the old companies' U.S. defined benefit plans. Although the pension plans have been maintained, their future remains uncertain. According to current company projections, large contributions may be needed to comply with federal pension funding requirements within the next 5 years. Officials at the Department of the Treasury, which oversees TARP, expect both GM and Chrysler to return to profitability. If this is the case, then the companies will likely be able to make the required payments and prevent their pension plans from being terminated.
However, if GM and Chrysler were not able to return to profitability and their pension plans were terminated, PBGC would be hit hard both financially and administratively. In early 2009, prior to the new companies assuming sponsorship, PBGC estimated its exposure to potential losses for GM's and Chrysler's plans to be about $14.5 billion. Meanwhile, automaker downsizing and the credit market crisis have created significant stress for suppliers and their pensions. During 2009, there was a rise in the number of supplier bankruptcies, liquidations, and pension plan terminations. In July, the pension plans of the nation's largest auto parts supplier, Delphi Corporation, were terminated, with expected losses to PBGC of over $6.2 billion. Across the auto sector as a whole, in January 2009, PBGC estimated that unfunded pension liabilities totaled about $77 billion, with PBGC's exposure for potential losses due to unfunded benefits of about $42 billion, leaving plan participants to bear the potential loss of the $35 billion difference through reduced benefits. Moreover, until Treasury either sells or liquidates the equity it acquired in each of the companies in exchange for the TARP assistance, its role as shareholder creates potential tensions with its roles as pension regulator and as overseer of PBGC, the pension insurer. In particular, tensions could arise if decisions must be made between allocating funds to company assets (thereby protecting shareholders, including taxpayers) or to pension fund assets (thereby protecting plan participants). As GAO reported previously, better communication with Congress and others about TARP interests could help mitigate such tensions.
CMS, within the Department of Health and Human Services (HHS), provides operational direction and policy guidance for the nationwide administration of the Medicare program. It contracts with private organizations—called carriers and fiscal intermediaries—to process and pay claims from Medicare providers and perform related administrative functions. Twenty-three carriers nationwide make claims payments for physician services, which are covered under part B of Medicare. In addition, carriers are responsible for implementing controls to safeguard program dollars and providing information services to beneficiaries and providers. To ensure appropriate payment, they conduct claims reviews that determine, for example, whether the services physicians have claimed are covered by Medicare, are reasonable and necessary, and have been billed with the proper codes. Carriers employ a variety of review mechanisms. Automated checks, applied to all claims, are designed to detect missing information, services that do not correspond to a beneficiary’s diagnosis, or other obvious errors. They may also be used to determine if a claim meets other specific requirements, including national or local coverage policies (such as allowing only one office visit for an eye examination per beneficiary per year unless medical necessity is documented). Manual reviews by carrier staff are used when the review of a claim cannot be automated to determine if sufficient information has been included to support the claim. In the most thorough type of manual claims review, a carrier’s clinically trained personnel perform a medical review, which involves an examination of the claim along with the patient’s medical record, submitted by the physician, to determine compliance with all billing requirements. Typically, carriers conduct medical reviews on claims before they are paid, by suspending payment pending further examination of the claim. 
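A frequency-based automated check of the kind described above (for example, allowing one routine eye examination per beneficiary per year unless medical necessity is documented) can be sketched as follows. The claim fields, service label, and decision values are hypothetical, not an actual carrier system.

```python
from datetime import date

# Hypothetical frequency edit: allow one routine eye exam per
# beneficiary per calendar year unless medical necessity is documented.

def frequency_edit(claim, prior_claims):
    """Return 'pay' or 'suspend' for a routine eye exam claim."""
    same_year = [c for c in prior_claims
                 if c["beneficiary"] == claim["beneficiary"]
                 and c["service"] == claim["service"]
                 and c["date"].year == claim["date"].year]
    # A repeat service in the same year without documented necessity
    # is suspended pending manual/medical review.
    if same_year and not claim.get("medical_necessity_documented"):
        return "suspend"
    return "pay"

history = [{"beneficiary": "B1", "service": "eye_exam",
            "date": date(2001, 3, 2)}]
new_claim = {"beneficiary": "B1", "service": "eye_exam",
             "date": date(2001, 9, 14)}
ok_claim = {"beneficiary": "B2", "service": "eye_exam",
            "date": date(2001, 9, 14)}
# B1's second exam in 2001 is suspended; B2's first exam is paid.
```

In practice such edits run automatically against all claims, with suspended claims routed to the manual review described above.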
Prepayment medical reviews help to ensure that a carrier is making appropriate payment decisions while the claims are processed, rather than later trying to collect payments made in error. To target such reviews, carriers develop “edits”—specific criteria used to identify services that the carrier determines to have a high probability of being billed in error. Carriers develop these edits based on data analyses that include comparisons of local and national billing patterns to identify services billed locally at substantially higher rates than the national norm. Carriers may also develop edits for prepayment medical review based on other factors, such as CMS directives or individual physicians or group practices the carrier has flagged for review based on their billing histories. Before putting edits into effect, CMS expects the carriers to conduct targeted medical reviews on a small sample of claims in order to validate that the billing problem identified by the carrier’s data analysis or other sources actually exists. In addition to prepayment medical reviews, carriers conduct some medical reviews after claims are paid. Postpayment reviews determine if claims were paid in error and the amounts that may need to be returned to the Medicare program. They focus on the claims of individual physicians or group practices that have atypical billing patterns as determined by data analysis. Such analyses may include comparisons of paid claims for particular services to identify physicians who routinely billed at rates higher than their peers. Carriers may also select claims for postpayment review based on other factors, such as information derived from prepayment reviews, referrals from other carrier units, and complaints from beneficiaries. In rare cases, postpayment reviews may result in referrals to carrier fraud units.
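The local-versus-national comparison that drives edit development can be sketched as a simple ratio screen. The service codes, billing rates, and the 1.5x threshold below are illustrative assumptions, not CMS criteria.

```python
# Flag services whose local billing rate substantially exceeds the
# national norm. The threshold is an illustrative assumption only.

def flag_services(local_rates, national_rates, ratio_threshold=1.5):
    """Return service codes where local rate / national rate >= threshold.

    Rates are claims per 1,000 beneficiaries, keyed by service code.
    """
    flagged = []
    for service, local in local_rates.items():
        national = national_rates.get(service)
        if national and local / national >= ratio_threshold:
            flagged.append(service)
    return sorted(flagged)

# Hypothetical rates: only 99215 (90 vs. 40) exceeds 1.5x the norm.
local = {"99213": 480.0, "99215": 90.0, "G0101": 22.0}
national = {"99213": 450.0, "99215": 40.0, "G0101": 20.0}
```

A flagged service would then be validated with a probe review before an edit is put into effect, as the section describes.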
Each year, as part of their budget negotiations with CMS, carriers develop medical review strategies that include workload goals for conducting medical reviews. CMS provides each carrier with an overall budget for claims review. The carriers then submit for CMS approval their workload goals for specific activities, such as the number of prepayment and postpayment medical reviews they plan to conduct, along with proposed budgets and staff allocations across these activities. In addition, the carriers submit budget proposals for provider education and training related to issues identified in medical review. CMS requires the carriers to reassess the allocation of these resources among review and educational activities during the course of the year and, with CMS approval, to shift resources as appropriate to deal with changing circumstances. Data from the three carriers in our study show that more than 90 percent of physician practices—including individual physicians, groups, and clinics—did not have any of their claims selected for medical review in fiscal year 2001, and, for those that did, relatively few claims were subject to review. A small proportion of physician practices served by the three carriers had any claims medically reviewed during fiscal year 2001. Table 1 shows that about 10 percent of the solo and group practices that filed claims with WPS had any prepayment medical reviews. This proportion was even lower at HealthNow NY and NHIC California, with rates of about 4 and 7 percent, respectively. The share of physician practices with postpayment reviews by any of these carriers was much smaller; approximately one tenth of 1 percent of practices had claims selected for medical review after payment had been made. Further, for most of the physician practices having any claims subject to medical review in fiscal year 2001, the carriers examined relatively few claims.
As shown in table 2, over 80 percent of the practices at each carrier whose claims received a prepayment review had 10 or fewer claims examined, and about half had only 1 or 2 claims reviewed. For the small number of physician practices whose claims were subject to postpayment review in fiscal year 2001, the three carriers typically examined more claims per practice. At NHIC California, the median physician practice had 33 claims reviewed postpayment; at WPS, 49; and at HealthNow NY, 31. With the issuance of the PCA initiative, CMS modified the approach that carriers use to select physicians’ claims for medical review, determine repayments due, and prevent future billing errors. PCA directs carriers to (1) use their analyses of physician billing patterns to better focus their medical review efforts toward claims with the greatest risk of inappropriate payments, and (2) provide targeted education regarding how to correct billing errors. Information from our three carriers indicates that, as a result of PCA, they virtually eliminated their use of extrapolation in fiscal year 2001—a corrective action that involves projecting a potential overpayment from a statistical sample. A recent CMS survey also showed reduced use of extrapolation by other carriers. After PCA was implemented, the highest repayment amounts each of our three carriers assessed against physicians were substantially lower than in the previous year. The carriers have also developed medical review strategies that include increased education for individual physicians in an effort to change billing behavior and, thus, prevent incorrect payments. PCA seeks to more effectively select physician claims for medical review. The initiative aims to further the agency’s program integrity goals of making sure that claims are paid correctly and billing errors are reduced while carriers maintain a level of medical review consistent with their workload agreements with CMS.
In targeting physician claims, PCA requires that carriers subject physicians only to the amount of medical review necessary to address the level and type of billing error identified. If claims data analysis shows a potential billing problem for a particular service, carriers must first conduct a “probe review”—requesting and examining medical records from a physician for a limited sample of claims—to validate suspicions of improper billing or payment. For example, a carrier may initiate a postpayment probe review after discovering that a physician billed, per patient, substantially more services than his or her peers. If the carrier determines that the documentation in the medical records does not support the type or level of services that was billed, the carrier calculates an error rate—the dollar amounts paid in error relative to the dollar amount of services reviewed. The error rate, the dollar value of the errors, and the physician’s past billing history are among the factors the carrier may consider in assessing the level of the billing errors and determining the appropriate response. Under PCA, CMS instructs carriers to categorize the severity of billing errors found in probe samples into three levels of concern—minor, moderate, or major. Minor concerns may include cases with a low error rate, small amounts improperly paid, and no physician history of billing problems. Moderate concerns include cases that have a low error rate but substantial amounts improperly paid. Major concerns are cases with a very high error rate, or even a moderate error rate if the carrier had previously provided education to the physician concerning the same type of billing errors. Although no numerical thresholds were established in the instructions to carriers, CMS provided vignettes illustrating the various levels of concern. 
In an example of a major concern, 50 percent of the claims in a probe sample were denied, representing 50 percent of the dollar amount of the claims reviewed. PCA allows carriers flexibility in determining the most appropriate corrective action corresponding to the level of concern identified. At a minimum, the carrier will communicate directly with the provider to correct improper billing practices. For probe reviews that are conducted postpayment—the stage at which probe reviews are most commonly done at the three carriers we visited—carriers must also take steps to recover payment on claims identified as having errors. Further options for corrective action include:
- for minor concerns, conducting further claims analysis at a later date to ensure the problem was corrected;
- for moderate concerns, initiating prepayment medical review for a percentage of the physician’s claims until the physician demonstrates compliance with billing procedures; and
- for major concerns, initiating prepayment medical review for a large share of claims or further postpayment review to estimate and recover potential overpayments by projecting an error rate for the universe of comparable claims—a method of estimation called “extrapolation.”
Under PCA, because the corrective action is scaled to the level of errors identified, the potential financial impact of medical review on some physicians has decreased. Although our three carriers did not frequently use extrapolation in 2000, before PCA, a physician could experience a postpayment medical review that involved extrapolation regardless of the level of errors detected. As shown in table 3, after PCA’s implementation, the highest amount any physician practice was required to repay substantially declined at the three carriers. The largest overpayment assessed across the carriers ranged from about $6,000 to $79,000 in fiscal year 2001, compared with about $95,000 to $372,000 in the previous year.
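Because CMS established no numerical thresholds, any concrete implementation of the minor/moderate/major classification must choose its own cutoffs. The sketch below uses purely illustrative values, keyed to the factors the directive names: the error rate, the dollars paid in error, and the physician's education history.

```python
# Sketch of PCA's three levels of concern. CMS set no numerical
# thresholds; every cutoff below is an illustrative assumption only.

def concern_level(error_rate, dollars_in_error, prior_education):
    """Classify a probe-review result as 'minor', 'moderate', or 'major'."""
    if error_rate >= 0.50:
        # Very high error rate, as in the CMS vignette where half the
        # sampled claim dollars were denied.
        return "major"
    if error_rate >= 0.20 and prior_education:
        # Moderate error rate, but the carrier already educated this
        # physician on the same type of billing error.
        return "major"
    if dollars_in_error >= 10_000:
        # Low error rate but substantial dollars improperly paid.
        return "moderate"
    # Low error rate, small dollars, no history of billing problems.
    return "minor"
```

The carrier would then pick a corrective action scaled to the returned level, from follow-up claims analysis up to prepayment review or extrapolation.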
At the same time, changes in the median overpayment amounts varied across our three carriers, with a dramatic decline at NHIC California. (Recovery of overpayments from physicians is discussed in app. II.) Several factors may account for the lower overpayment amounts assessed physician practices in fiscal year 2001. Under PCA, probe samples are designed to include a small number of claims per physician, so any overpayments discovered through the probe review process will likely be limited. Whereas the typical postpayment medical review conducted before PCA might involve several hundred claims, a probe review generally samples 20 to 40 claims selected from an individual physician for the time period and the type of service in question. If the carrier classifies the physician’s billing problem as a minor or moderate level of concern, the physician is responsible for returning only the amount paid in error found in the probe sample. In these cases, there would not be an extrapolation as may have occurred in the past. The circumstances in which carriers determine an overpayment by extrapolating from a statistical sample have narrowed. Before PCA was implemented, carriers were encouraged to extrapolate an overpayment amount whenever a postpayment sample of claims was drawn. However, even then, our three carriers used extrapolation in only 38 instances in fiscal year 2000. Now CMS has directed carriers to reserve the use of extrapolation for those cases where a major level of concern has been identified. In addition, before it can proceed with an extrapolation, the carrier has to draw a new, statistically valid random sample from which to project the assessed overpayment. Furthermore, the amount to be recovered based on an extrapolation is smaller than it typically would have been in years past: instead of projecting the average overpayment found in the sample, the carrier projects a lower-bound estimate that discounts the average for the statistical uncertainty inherent in sampling.
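The lower-bound projection can be sketched as follows. This is a simplified illustration; the sample data, the confidence level, and the universe size are hypothetical, and an actual carrier would follow CMS's detailed sampling instructions rather than this sketch.

```python
import math
import statistics

# Sketch of extrapolating an overpayment from a statistically valid
# random sample, projecting the lower bound of a confidence interval
# rather than the sample mean. All numbers are illustrative.

def extrapolated_overpayment(sample_overpayments, universe_size, z=1.645):
    """Return (point estimate, reduced lower-bound recovery amount)."""
    n = len(sample_overpayments)
    mean = statistics.mean(sample_overpayments)
    sem = statistics.stdev(sample_overpayments) / math.sqrt(n)
    point_estimate = mean * universe_size
    # Discount the mean by z standard errors before projecting, so the
    # demanded recovery reflects sampling uncertainty.
    lower_bound = max(0.0, mean - z * sem) * universe_size
    return point_estimate, lower_bound

# Hypothetical per-claim overpayments (dollars) from a 20-claim sample,
# projected to a universe of 2,000 comparable claims.
sample = [0, 0, 40, 25, 0, 60, 0, 35, 0, 50,
          0, 20, 0, 45, 0, 55, 0, 30, 0, 10]
point, recovery = extrapolated_overpayment(sample, universe_size=2_000)
# The recovery based on the lower bound is smaller than the point estimate.
```

Narrowing the probe to fewer service types, as the next paragraph describes, shrinks the universe of comparable claims and thus the projected amount as well.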
In the event that extrapolation is used, the requirement to start with probe samples may also reduce the physician’s financial risk. Because a probe sample is fairly small, carrier officials stated that they may examine only one or two types of services, compared with the four to six types reviewed previously. This means that if the probe review results lead to an extrapolation based on a larger statistically valid random sample, only claims for the small number of service types will be included in that sample and the results will be projected to a smaller universe of claims. Consequently, the total amount assessed would tend to be smaller than previously extrapolated amounts. In the first year of PCA implementation, our three carriers virtually eliminated their use of extrapolation to determine overpayments. For example, NHIC California officials stated that before PCA it was not uncommon to use extrapolation in determining overpayments based on samples involving a relatively large number of claims. But now, such extrapolation is to be used infrequently. If a physician failed to correct inappropriate billing practices following a probe sample and targeted education, the carrier would probably subject some or all of the physician’s subsequent Medicare billing to prepayment review before it would consider selecting a larger postpayment sample suitable for extrapolation. As shown in table 4, in fiscal year 2000, NHIC California conducted 31 postpayment reviews that involved extrapolation, with a median overpayment assessment of about $32,000, but had no cases involving extrapolation in fiscal year 2001. Similarly, HealthNow NY had none in fiscal year 2001, and WPS reported no cases of extrapolation other than a small number of consent settlement cases. A recent CMS survey indicates that most carriers limit their use of extrapolation.
In October 2001, CMS surveyed carriers to determine, in part, the number of cases that involved extrapolation during the last 3 fiscal years. Of the 18 carriers that responded to the survey, only 3—serving Ohio, West Virginia, Massachusetts, and Florida—had more than 9 cases involving extrapolation in fiscal year 2001. A key focus of PCA is its emphasis on carrier feedback to physicians in the medical review process. Educating physicians and their staffs about billing rules is intended to increase correct billing, which reduces both inaccurate payments and the number of questionable claims for which physicians may be required to forward copies of patient medical records. When a carrier identifies a physician’s billing problem, PCA requires the carrier to provide data to the physician about how his or her billing pattern varies from other physicians in the same specialty or locality. For issues that affect a large number of providers, CMS recommends that carriers work with specialty and state medical societies to provide education and training on proper billing procedures. In response to PCA, two of the three carriers planned substantial increases in their spending for education and feedback to physicians on medical review issues as part of their overall medical review strategies for fiscal year 2002. As shown in table 5, the three carriers had budget increases of various sizes for provider education and training related to medical review. As part of their strategies to increase physician education, the three carriers reported that they were making greater use of phone calls and individualized letters to physicians’ offices to notify them about billing errors. Carriers record their contacts using physician tracking systems to check on the education that has been provided to the physician, which can include letters, materials, phone calls, or face-to-face visits. 
Whereas in the past it was common for carriers to simply point physicians toward the applicable Medicare rules, under PCA they have assisted physicians in interpreting the rules and applying them to specific billing situations. The carrier’s medical review staff has addressed problems of questionable billing patterns by contacting physicians by phone to provide specific information pertaining to billing rules. For physicians whose claims are undergoing postpayment review, the carrier sends a letter at the completion of the medical review that provides a description of the billing problems found, including, as needed, information on the relevant national and local medical policies. The letter also identifies a contact person at the carrier, should the physician want additional information about billing or documentation issues. For example, WPS officials acknowledged that they previously had little or no follow-up with physician practices whose claims were denied or reduced after medical review to make sure they understood how to bill correctly. In fiscal year 2001, WPS began providing additional education—some efforts addressing all Medicare physicians and some targeted to providers in specific specialties or service locations. To identify the groups that would most benefit from targeted education, the carrier developed benchmark data on billing errors using aggregate claims data on utilization, denial rates, and other billing patterns. For example, the carrier developed education campaigns targeted to mental health practitioners, such as psychologists, clinical social workers, and psychiatrists. In fiscal year 2001, WPS also began to conduct on-site education and group meetings and contact specialty associations to disseminate further information. 
In addition to concerns about having their claims selected for medical review, some physicians have expressed dissatisfaction with the accuracy of the carrier medical review decisions concerning the medical necessity, coding, and documentation of physician services billed to Medicare. To assess the appropriateness of clinical judgments made by carriers’ medical review staff, we sponsored an independent evaluation by the private firm that monitors claims payment error rates as a Medicare program safeguard contractor. The firm found that our three carriers made highly accurate medical review decisions. In addition, the level of accuracy was highly consistent across the three carriers. Slight variation in the degree of accuracy was evident when the claims reviewed were classified by the type of payment decision: to pay the claim in full, to pay a reduced amount, or to fully deny payment. The independent review was conducted on samples of 100 physician claims from each carrier selected randomly from all claims undergoing either prepayment or postpayment medical review in March 2001. Nurse reviewers examined the carrier’s initial review decision to see if it was supported by the available medical record documentation and carrier policies in effect when the carrier made its payment decision. These reviewers then discussed with the carrier’s staff each claim where they had come to a different conclusion, and in all but one instance, the carrier and contractor achieved a consensus as to whether the original carrier decision was in error. The acting deputy director of CMS’s Program Integrity Group, a physician, decided the accuracy of the one case that remained in dispute. For the vast majority of claims, the independent reviews validated the carriers’ decisions. As shown in table 6, the independent reviewers agreed with carriers’ original assessments in 280 of the 293 cases examined, or about 96 percent of the time. 
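The agreement and error rates reported in this section are simple proportions, which a short arithmetic sketch can recompute directly from the figures in the text:

```python
# Recomputing the accuracy figures reported in this section.
agreed, total = 280, 293
overall_accuracy = agreed / total
print(f"{overall_accuracy:.1%}")        # 95.6%, i.e., "about 96 percent"

# Error rates by the type of initial carrier payment decision:
full_denial_error_rate = 1 / 64         # decisions to fully deny a claim
reduction_error_rate = 5 / 59           # decisions to pay a reduced amount
print(f"{full_denial_error_rate:.1%}")  # 1.6%
print(f"{reduction_error_rate:.1%}")    # 8.5%
```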
The small share of inaccurate decisions made by the carriers resulted in both overpayments and underpayments to physicians. There was slight variation in the accuracy of carrier medical review decisions for different types of payment determinations that resulted from the carriers’ initial review. The independent reviewers found that carrier decisions to completely deny payment were the most accurate. In our sample, only 1 of the 64 carrier decisions (1.6 percent) to fully deny a claim was determined to be a medical review error. Carrier decisions to reduce payment amounts were slightly less accurate. The independent reviewers (with subsequent concurrence by the carriers) found errors in 5 of 59 claims (8.5 percent) that the carriers had initially decided to pay at a reduced amount. In one instance, the independent reviewers determined that the carrier should have denied the claim altogether; for the other 4 claims, they judged that the carrier should have made a smaller reduction or paid the claim in full. Three of the five instances in which the independent reviewers questioned the carrier’s decision to reduce the amount paid involved claims for physicians’ evaluation and management (E&M) services—commonly known as physician visits or consultations. The coding system used for billing much of physician care has five separate levels of evaluation and management service intensity, each linked to a distinct payment amount. In order to assess the appropriateness of a claim’s billing level, reviewers have to find specific information in the submitted clinical documentation on, among other factors, the breadth of the medical history taken, the scope of the physical examination conducted, and the complexity of the decisions made by the physician. 
According to CMS officials, one reason medical review decisions for these claims are likely to raise questions is that the different levels along these key dimensions are not clearly defined, such as distinguishing between “straightforward” and “low” complexity in medical decision making. Such reviews are also complicated by CMS’s instruction to the carriers that they may use either the guidelines for billing evaluation and management services issued in 1995 or the ones issued in 1997, depending on which set is most advantageous to the physician. Another factor contributing to the difficulty in medically reviewing E&M claims is the broad variability in style and content found in the medical records. Carrier officials noted that some physicians meticulously document exactly what they have observed and done while others tend to be less complete and careful. Reviewers are likely to vary in what they infer from the less complete records, which, in turn, can lead to different conclusions as to whether a case is of low, medium, or high complexity. Although the carriers in our study were highly accurate in making payment determinations, they can improve their process for selecting claims for medical review that are most likely to contain billing errors. Our data show that, in fiscal year 2001, there was variation in the performance of edits—criteria used to target specific services for review—that our three carriers employed to identify medically unnecessary, or incorrectly coded, physician services. Carriers have difficulty establishing edits that routinely select claims with the greatest probability of errors because they have to rely, to some degree, on incomplete data. Also, CMS’s oversight of the carriers does not include incentives to develop and use more refined edits. 
CMS has limited its involvement in this area to collecting data from the carriers on the results of reviews selected by the edits and setting general expectations for the carriers to assess the effectiveness of the edits that they use. Carriers receive no feedback on the edit effectiveness data that they have reported to the agency and little guidance as to how they could maximize the effectiveness of their procedures to select physician claims for medical review. To help reduce local billing problems, carriers usually decide on their own which claims to select for medical review. They generally develop edits by (1) analyzing claims data to identify services or providers where local billing rates are substantially higher than national averages, and (2) selecting a small probe sample of such claims for medical review to substantiate the existence of a billing problem. Other edits are designed to ensure that physicians adhere to local medical review policies—rules that describe when and under what circumstances certain services may be covered. Claims identified by the edits are suspended, that is, temporarily held back from final processing, and the physicians involved are contacted to request the relevant medical records. Once those records arrive, claims examiners determine whether the claim should be paid in full, reduced, or denied. Of the total number of prepayment edits related to physician services used at each carrier (36 edits at WPS in each of its two largest states; 18 at NHIC’s Northern California office, and 7 at HealthNow NY), 27 identified the large majority of claims undergoing medical review in fiscal year 2001. Specifically, 10 or fewer edits at each of the carriers suspended more than three-fourths of the claims medically reviewed prior to payment. In order to assess the relative effectiveness of those edits, we drew on data that the carriers recorded on the results of reviews initiated by each edit in effect during that period. 
These data included information on the proportion of suspended claims that were reduced or denied as a consequence of medical review, and the average dollar reduction for those claims that were not paid in full. Edits would be considered better targeted if they have (1) a higher rate of claims denied or reduced, or (2) a larger average amount of dollars withheld from payment for an inappropriately billed service. The strongest case could be made for edits that did well on both dimensions, and the weakest case would apply for those edits that ranked low on both denial rate and average amount withheld. Figure 1 shows the results of this analysis for the 27 prepayment edits that accounted for the largest number of claims suspended by each of our three carriers. The four bars indicate the number of edits achieving different levels of denial (or reduction) rates. The grouping with the largest number of edits, 11, represents the lowest level of effective targeting, between 5 and 19 percent. Two thirds of the edits, 18, have denial rates under 40 percent. By contrast, 6 edits have denial rates of between 60 and 82 percent. The segments within the bars indicate the average dollar amount reduced or denied when either occurs. Only 3 of the 11 major edits in the lowest denial rate group generated relatively large program savings—an average of $200 or more—for those claims that were reduced or denied. An equal number, and larger proportion, of edits in the highest denial rate group also produced savings exceeding $200 per claim. The wide variation among these 27 major edits across both the dimensions of denial rate and average dollar amount denied or reduced suggests that there is room for improvement. CMS requires the carriers to periodically evaluate the effectiveness of the edits they use to ensure that each has a reasonable denial rate and dollar return. 
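The two-dimensional comparison just described can be expressed as a small classification rule. The thresholds used here (a 40 percent denial rate and $200 in average savings) echo the groupings discussed in the text but are otherwise illustrative, as are the sample inputs.

```python
# Sketch of the two-dimensional edit-effectiveness comparison: each edit
# is scored on its denial/reduction rate and on the average dollars
# withheld per denied or reduced claim. Thresholds and data are
# illustrative, not CMS or carrier standards.

def classify_edit(denial_rate, avg_dollars_withheld,
                  rate_threshold=0.40, dollar_threshold=200.0):
    strong_rate = denial_rate >= rate_threshold
    strong_dollars = avg_dollars_withheld >= dollar_threshold
    if strong_rate and strong_dollars:
        return "well targeted"
    if strong_rate or strong_dollars:
        return "mixed"
    return "poorly targeted"

print(classify_edit(0.75, 310.0))  # well targeted
print(classify_edit(0.12, 85.0))   # poorly targeted
```

An edit that ranks high on only one dimension (for example, a low denial rate but large average savings per denied claim) lands in the middle category, which is the ambiguity that makes evaluation guidance from CMS useful.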
However, CMS has not provided guidelines to the carriers as to how such evaluation should be conducted, or what minimum level of performance they should strive for with respect to denial rates, average dollar reductions, or other measures of efficiency. Moreover, officials at the three carriers indicated that they did not receive feedback from CMS regarding the performance of their edits, even though the carriers submit quarterly reports to the agency on the performance of their most active edits. CMS’s involvement in this area was generally limited to ensuring that carriers had their own process in place for evaluating prepayment edits. The three carriers tend to consider similar variables in evaluating edit effectiveness, but vary considerably in the procedures that they follow to make that assessment. In general, all three carriers consider factors such as the number of claims suspended, the denial rate, dollar savings, and the overall magnitude of the potential billing problem. With respect to process, HealthNow NY did not have any explicit procedure to evaluate edits until the end of fiscal year 2001. At that point it adopted a detailed scoring system with numeric thresholds that determine when to discontinue using an edit. The other carriers continue to rely less on quantitative measures and more on the professional judgment of medical review staff in evaluating prepayment edits. Several factors contribute to the continued use of poorly targeted edits. Some of the carriers contend that their data on the relative effectiveness of their edits are incomplete and therefore unreliable. For example, NHIC California officials noted that they often lack good information on the ultimate outcome of reviews, taking account of reversals that occur when initial carrier decisions are appealed. Not only does the appeal process take a long time if followed to its full extent, but it can also be difficult to determine why certain claim denials were overturned. 
Another reason why carriers maintain low-performing prepayment edits is that there are few incentives—and some disincentives—for them to change. In particular, carriers have agreed with CMS to conduct a certain number of reviews that are evenly distributed throughout the course of the year. Before a carrier discontinues use of an edit, it must have another one in place that will garner at least as many claims for medical review to meet workload targets, or else negotiate a change in its medical review strategy with CMS officials to reallocate those review resources to other activities. Putting new edits in place often requires carriers to adjust the selection criteria over time in order to obtain a manageable number of claims selected for review. Carrier officials also noted that there is no systematic dissemination of carriers’ best practices—those worthy of consideration by all carriers—regarding the success of individual edits or methods to evaluate edit efficiency. An official at HealthNow NY told us that they informally share information about their experiences with particular prepayment edits with other carriers operating in the same region. Carrier officials reported that this is not common practice at WPS or NHIC California. In a 1996 report on selected prepayment edits, we recommended that HCFA, now CMS, disseminate information to carriers on highly productive edits. However, the agency currently does not identify and publicize in any systematic manner those edits that generate high denial rates or the selection criteria used to develop them. Since 1996, the overall level of payment errors for the Medicare program has been tracked nationwide in annual audit reports issued by the HHS Office of Inspector General (OIG). In the most recent audit, covering fiscal year 2001, the OIG found that $12.1 billion, or about 6.3 percent of the $191.8 billion in processed fee-for-service payments, was improperly paid to Medicare providers. 
These OIG reports of aggregate Medicare payment errors have spurred CMS to improve its efforts to safeguard Medicare payments by assessing not only an error rate nationwide but also for the individual carriers. In February 2000, HCFA announced the development of a new tool to assess individual carrier performance called the Comprehensive Error Rate Testing (CERT) Program. CERT is designed to measure, for all claims, the accuracy of payment decisions made by each carrier. The CERT benchmark will allow CMS to hold the carriers accountable for the accuracy of payment decisions for all claims processed, not just those selected for review. Thus, the results will reflect not only the carrier’s performance, but also the billing practices of the providers in their region. According to CMS officials, CERT information on all the carriers processing physician claims is expected to become available in November 2002. At that point, both CMS and the carriers can begin to use that information for program oversight and management, and will then see if the expectations for CERT are met in practice. Under the CERT program, CMS will use an independent contractor to select a random sample of approximately 200 claims for each carrier from among all those submitted each month for processing. For this sample, the carrier will provide the CERT contractor with information on the payment decisions made and all applicable medical documentation used in any medical reviews of the sample claims. The CERT contractor will request comparable documentation from physicians whose claims in the sample were not medically reviewed by the carrier. The CERT teams of clinician reviewers will examine the documentation and apply the applicable national and local medical policies to arrive at their own payment decisions for all of the sampled claims. 
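In spirit, a CERT-style carrier error rate is a dollar-weighted comparison between what was paid and what an independent re-review concludes should have been paid. The sketch below is a simplification with hypothetical claim amounts; actual CERT methodology is more involved.

```python
# Simplified, dollar-weighted payment error rate in the spirit of CERT:
# an independent reviewer re-adjudicates a random sample of a carrier's
# claims, and mispaid dollars are compared with dollars paid.
# Claim amounts are hypothetical.

def payment_error_rate(claims):
    """claims: list of (amount_paid, amount_allowed_after_independent_review)."""
    paid = sum(p for p, _ in claims)
    mispaid = sum(abs(p - a) for p, a in claims)
    return mispaid / paid

# Four sampled claims: two paid correctly, one overpaid, one fully
# denied on re-review.
sample = [(100.0, 100.0), (250.0, 200.0), (80.0, 80.0), (60.0, 0.0)]
print(f"{payment_error_rate(sample):.1%}")  # 22.4% for this tiny sample

# The OIG's aggregate fiscal year 2001 figure works out the same way:
print(f"{12.1 / 191.8:.1%}")                # 6.3%
```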
With the development of carrier-level error rates, CMS expects to monitor payment accuracy trends for the individual carriers and focus its oversight on those carriers with relatively high, or worsening, rates of error. Moreover, on a national basis, CERT will calculate error rates for different provider types. For example, it will indicate how often physicians bill incorrectly and receive either too much or too little payment compared to such nonphysician providers as ambulance companies and clinical labs. The structure of subgroup analyses designed to help carriers better target their medical reviews remains open to discussion among CMS officials. CERT will complement but not replace CMS tracking systems designed to monitor carrier performance using data periodically reported to CMS by the carriers concerning medical review costs, the reduction in provider payments resulting from medical reviews, and workload. CMS has relied on these data to ensure that carriers sustain the level of effort specified in agreements with CMS—particularly the number of medical reviews conducted. CMS is currently working to consolidate and streamline these various reports into a Program Integrity Management Reporting (PIMR) system. CMS’s intention is for PIMR to collect, from each carrier, data such as the number of claims medically reviewed, the number denied, the number of denials reversed on appeal, and the associated dollar amounts saved or recouped. Currently, this information is not maintained in a common format and is difficult to compile. The first management reports based on PIMR are expected by the end of fiscal year 2002. In addition to CERT and the carrier-reported data, CMS oversight of physician medical review will continue to rely on contractor performance evaluations (CPEs)—assessments based on site visits conducted by a small team of CMS regional and headquarters staff. 
For carrier medical review activities, these CMS evaluations occur at irregular intervals, depending on the carrier’s volume of claims and the level of risk of finding substantial problems at the carrier. CMS’s evaluation emphasizes an assessment of the carrier’s compliance with Medicare rules and procedures in areas related to medical review—such as data analysis to support the selection of edits, the development of local coverage rules, and tracking contacts with physicians. The evaluation also involves examining a small number of claims to determine the accuracy of the carrier’s review decisions. Critics have previously alleged that CPE assessments lacked consistency and objectivity. In response, CMS has attempted to ensure greater uniformity across carriers in the way these evaluations are conducted by recruiting CPE team members from the agency as a whole, not the local regional office, and by using nationally based CPE protocols. While CMS has modified its medical review procedures, it is too soon to determine whether the PCA approach will enhance the agency’s efforts to perform its program integrity responsibilities. Carrier staff conduct medical reviews to maintain program surveillance and make physicians aware of any billing practices that are not in keeping with payment rules. In this regard, CMS’s PCA policy emphasizes feedback and educational contacts with individual physicians. Evaluating the efficacy of this policy will require a systematic examination of carriers’ performance data. When CERT data become available, CMS may be in a better position to assess PCA’s impact on reducing billing errors and preventing inappropriate payments. CMS officials reviewed a draft of this report and generally agreed with its findings. 
In particular, the agency noted that our discussion of the effectiveness of carrier edits confirmed the need for CMS to “become more active in assisting contractors in this area.” The agency also provided a number of technical corrections and clarifications that we incorporated into the text as appropriate. These comments are reprinted in Appendix III. We are sending copies of this report to the Administrator of CMS and we will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (312) 220-7600 or Rosamond Katz at (202) 512-7148. Other contributors to this report were Hannah Fein, Jenny Grover, Joel Hamilton, and Eric Peterson. We assessed the claims review accuracy of the three carriers in our study—National Heritage Insurance Company in California, Wisconsin Physicians Service Insurance Corp., and HealthNow NY—by validating initial medical review decisions involving physician claims. We contracted with DynCorp—the Medicare contractor already selected by CMS to administer its Comprehensive Error Rate Testing (CERT) program—to use the same review procedures developed for CERT in assessing a sample of medical review decisions made by the three carriers. We requested that each carrier identify the universe of physician claims subjected to prepayment and postpayment review during March 2001, limiting the universe to those claims submitted by M.D.s and D.O.s. From that universe, DynCorp randomly selected 100 claims for review. Then, DynCorp obtained the medical record information for those claims from the carrier, and reviewed each payment decision for accuracy. The number of carrier decisions examined by DynCorp staff exceeded the number of claims because, in several instances, carriers had reviewed multiple lines on a claim. 
The results of this assessment of carrier medical review decisions can only be generalized to the universe of claims from which the samples were drawn: claims from M.D.s or D.O.s that underwent medical review in March 2001 by one of our three carriers. In reviewing payment accuracy, DynCorp staff was tasked with determining if the carrier’s initial review decision was supported by the medical record and carrier policies in place at the time the payment decision was made. Specifically, DynCorp assessed whether documentation in the medical records supported the procedure codes and level of service that was billed. Where their determination differed from that of the carrier, DynCorp staff discussed those claims with the carrier’s medical review staff. In all but one case, the parties came to agreement on whether payment decisions were accurate. In the one case where agreement could not be reached, the acting deputy director of CMS’s Program Integrity Group—a physician—provided a second opinion that confirmed the carrier’s decision. The results obtained from DynCorp’s review of physician claims undergoing medical review were consistent across the three carriers. The accuracy of decisions across all the sampled medical reviews for each carrier exceeded 94 percent. (See table 7.) In those cases where medical review errors were identified, NHIC California and WPS decisions resulted in a mix of underpayments and overpayments. However, HealthNow NY’s review errors were concentrated in decisions to pay claims in full that should have been denied or reduced. Because a relatively small proportion of medical reviews are conducted after claims payment, our samples from the three carriers included just 19 claims where a postpayment review was performed. The accuracy of carrier determinations for both prepayment and postpayment medical reviews was consistent, at about 95 percent. (See table 8.) 
Carriers attempt to collect any overpayments due the Medicare program as soon as possible after the completion of postpayment reviews. The carrier notifies physician practices that they have three options for returning an overpayment: (1) pay the entire overpayment amount within 30 days, (2) apply for an extended repayment plan, or (3) allow the carrier to offset the overpayment amount against future claims. Initially, the carrier sends a letter informing the physician practice of the medical review results and the specific dollar amount that the practice must return to Medicare. The letter provides an explanation of the procedures for repaying an overpayment, which includes a statement of Medicare’s right to recover overpayments and charge interest on debts not repaid within 30 days, as well as the practice’s right to request an extended repayment plan if the overpayment cannot be paid in that time. The letter also advises the physician practice of the right to submit a rebuttal statement prior to any recoupment by the carrier and to appeal the review decision to, in the first instance, the carrier’s separate appeals unit. In addition, the letter notifies the practice of any additional reviews that the carrier has planned. Regardless of whether the physician practice appeals the review decision, repayment is due within 30 days of the date of the letter, unless an extension is approved. Carriers will consider extended repayment plans for those physician practices that cannot make a lump sum payment by the due date. To qualify for an extension, the overpayment amount must be $1,000 or more and a practice must prove that returning an overpayment within the required time period would cause a financial hardship. Accordingly, a physician practice must offer specific documentation to support the request, including a financial statement with information on monthly income and expenses, investments, property owned, loans payable, and other assets and liabilities. 
In addition, if the requested repayment extension is for 12 months or longer, the physician practice must submit at least two letters from separate institutions indicating that they denied a loan request for the amount of the repayment. Requests for payment extensions that exceed 12 months must be referred to CMS regional staff for approval. If a physician practice does not return payment within 30 days or establish a repayment extension plan, the carrier must offset the amount owed against pending or future claims. The carrier has some discretion as to the exact date that offsetting begins, taking into consideration any statements or evidence from the physician practice as to the reasons why offsetting should not occur. In fiscal year 2001, HealthNow NY offset amounts owed by 72 of 95 physician practices that did not pay their overpayment amounts within 30 days. Most of the practices that did not have amounts offset returned their overpayments within 40 days. Any offset payments are applied against the accrued interest first, and then the principal. As shown in table 9, the three carriers in our study reported that most physician practices assessed an overpayment in fiscal year 2000 or 2001 repaid Medicare within 6 months of the carrier’s notice. The three carriers also reported few requests from physician practices for extended repayment plans. As shown in table 10, none of the carriers had more than four requests during fiscal year 2001, and no extension exceeded 1 year.
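The offset rules described above (interest accrues on debts not repaid within 30 days, and offset payments are applied to accrued interest before principal) can be sketched as follows; the dollar amounts are hypothetical.

```python
# Sketch of the offset rule described in this section: offsets against
# pending or future claims pay down accrued interest first, then
# principal. Dollar amounts are hypothetical.

def apply_offset(principal, accrued_interest, offset_amount):
    """Return (remaining_principal, remaining_interest) after one offset."""
    interest_paid = min(offset_amount, accrued_interest)
    principal_paid = min(offset_amount - interest_paid, principal)
    return principal - principal_paid, accrued_interest - interest_paid

# A $5,000 overpayment with $120 of accrued interest, offset by a
# $1,000 claim payment withheld by the carrier:
principal, interest = apply_offset(5_000.0, 120.0, 1_000.0)
print(principal, interest)  # 4120.0 0.0
```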
In 1990, GAO designated the Medicare program as being at high risk for waste, fraud, and abuse. More than a decade later, Medicare remains on GAO's high-risk list. This report examines Medicare's claims review process, which is designed to detect improper billing or payments. GAO found that most physicians who bill Medicare are largely unaffected by carriers' medical reviews, with 90 percent of physician claims going unreviewed in fiscal year 2001. At the three carriers GAO studied, implementation of the progressive corrective action initiative has reduced medical reviews of claims and has increased carrier education for individual physicians. The carriers in the study generally made appropriate payment determinations in examining physician claims selected for a medical review. By targeting claims that are more likely to have errors, carriers could improve the efficiency of their own operations and reduce administrative demands on the small proportion of physician practices with claims selected for review. The Centers for Medicare and Medicaid Services (CMS) is refocusing its oversight of carrier performance in processing and reviewing claims. The agency intends to hold carriers accountable for the overall level of payment errors in all the claims they process, not just the ones they review. Consistent with this approach, CMS is developing a program in which an independent contractor determines the accuracy of claims processed and paid by each carrier using quantitative performance measures.
PTSD can develop following exposure to life-threatening events, natural disasters, terrorist incidents, serious accidents, or violent personal assaults like rape. PTSD is the most prevalent mental disorder arising from combat. People who experience stressful events often relive the experience through nightmares and flashbacks, have difficulty sleeping, and feel detached or estranged. These symptoms may occur within the first 4 days after exposure to the stressful event or be delayed for months or years. Symptoms that appear within the first 4 days after exposure to a stressful event are generally diagnosed as acute stress reaction or combat stress. If the symptoms of acute stress reaction or combat stress continue for more than 1 month, PTSD is diagnosed. PTSD services are provided in VA medical facilities and VA community settings. VA medical facilities offer PTSD services as well as other services, which range from complex specialty care, such as cardiac or spinal cord injury, to primary care. VA’s community settings include more than 800 community-based outpatient clinics and 206 Vet Centers. Community-based outpatient clinics are an extension of VA’s medical facilities and mainly provide primary care services. Vet Centers offer PTSD and family counseling, employment services, and a range of social services to assist veterans in readjusting from wartime military service to civilian life. Vet Centers also function as community points of access for many returning veterans, providing them with information and referrals to VA medical facilities. Vet Centers were established as entities separate from VA medical facilities to serve Vietnam veterans, who were reluctant to access health care provided in a federal building. As a result, Vet Centers are not located on the campuses of VA medical facilities. VA has specialized PTSD programs that are staffed by clinicians who have concentrated their clinical work in the area of PTSD treatment. 
VA specialized PTSD programs are located in 97 VA medical facilities and provide services on an inpatient and outpatient basis. VA PTSD services include individual counseling, support groups, and drug therapy and can be provided in non-specialized clinics, such as general mental health clinics. Veterans who served in any conflict after November 11, 1998, are eligible for VA health care services for any illness, including PTSD services, for 2 years from the date of separation from military service, even if the condition is not determined to be attributable to military service. This 2-year eligibility includes those Reserve and National Guard members who have left active duty and returned to their units. After 2 years, these veterans will be subject to the same eligibility rules as other veterans, who generally have to prove that a medical problem is connected to their military service or have relatively low incomes. In July 2004, VA reported that, so far, 32,684, or 15 percent, of veterans who have returned from service in Iraq or Afghanistan, including Reserve and National Guard members, have accessed VA for various health care needs. DOD and VA have formed a Seamless Transition Task Force with the goal of meeting the needs of servicemembers returning from Iraq and Afghanistan who will eventually become veterans and may seek health care from VA. To achieve this goal, DOD and VA plan to improve the sharing of information, including individual health information, between the two departments in order to enhance VA’s outreach efforts to identify and serve returning servicemembers, including Reserve and National Guard members, in need of VA health care services. Since April 2003, VA has required that every returning servicemember from the Iraq and Afghanistan conflicts who needs health care services receive priority consideration for VA health care appointments. 
DOD uses two approaches to identify servicemembers who may be at risk of developing PTSD: the combat stress control program and the post-deployment health assessment questionnaire. DOD’s combat stress control program identifies servicemembers at risk for PTSD by training all servicemembers to identify the early onset of combat stress, which if left untreated, could lead to PTSD. DOD uses the post-deployment health assessment questionnaire to screen servicemembers for physical ailments and mental health issues commonly associated with deployments, including PTSD. The questionnaire contains four screening questions that were developed jointly by DOD and VA mental health experts to identify servicemembers at risk for PTSD. DOD’s combat stress control program identifies servicemembers at risk for PTSD by training all servicemembers to identify the early onset of combat stress symptoms, which if left untreated, could lead to PTSD. The program is based on the principle of promptly identifying servicemembers with symptoms of combat stress in a combat theater, with the goal of treating and returning them to duty. This principle is consistent with the views of PTSD experts, who believe that early identification and treatment of combat stress symptoms may reduce the risk of PTSD. To assist servicemembers in the combat theater, teams of DOD mental health professionals travel to units to reinforce the servicemembers’ knowledge of combat stress symptoms and to help identify those who may be at risk for combat stress or PTSD. The teams may include psychiatrists, psychologists, social workers, nurses, mental health technicians, and chaplains. DOD requires that the effectiveness of the combat stress control program be monitored on an annual basis. DOD generally uses the post-deployment health assessment questionnaire, DD 2796, to identify servicemembers at risk for PTSD following deployment outside of the United States. (See app. II for a copy of the DD 2796.) 
DOD requires certain servicemembers deployed to locations outside of the United States to complete a DD 2796 within 30 days before leaving a deployment location or within 5 days after returning to the United States. This applies to all servicemembers returning from a combat theater, including Reserve and National Guard members. The DD 2796 is a questionnaire used to determine the presence of any physical ailments and mental health issues commonly associated with deployments, any special medications taken during deployment, and possible environmental or occupational exposures. The DD 2796 includes the following four screening questions that VA and DOD mental health experts developed to identify servicemembers at risk for PTSD: Have you ever had any experience that was so frightening, horrible, or upsetting that, in the past month, you (1) have had any nightmares about it or thought about it when you did not want to; (2) tried hard not to think about it or went out of your way to avoid situations that remind you of it; (3) were constantly on guard, watchful, or easily startled; or (4) felt numb or detached from others, activities, or your surroundings? Once completed, the DD 2796 must be initially reviewed by a DOD health care provider, which could range from a physician to a medic or corpsman. Figure 1 illustrates DOD’s process for completion and review of the DD 2796. The form is then reviewed, completed, and signed by a health care provider, who can be a physician, physician assistant, nurse practitioner, or an independent duty medical technician or corpsman. This health care provider reviews the completed DD 2796 to identify any “yes” responses to the screening questions—including questions related to PTSD—that may indicate a need for further medical evaluation. 
The review is to take place in a face-to-face interview with the servicemember and be conducted either on an individual basis, as we observed at the Army’s Fort Lewis in Washington, or in a group setting, as we found at the Marine Corps’ Camp Lejeune in North Carolina. If a servicemember answers “yes” to a PTSD question, the health care provider is instructed to gather additional information from the servicemember and use clinical judgment to determine if the servicemember should be referred for further medical evaluation to a physician, physician’s assistant, nurse, or an independent duty medical technician. To document completion of the DD 2796, DOD requires that the questionnaire be placed in the servicemember’s permanent medical record and a copy sent to the Army Medical Surveillance Activity, which maintains a database of all servicemembers’ completed health assessment questionnaires. The National Defense Authorization Act for Fiscal Year 1998 required DOD to establish a quality assurance program to ensure, among other things, that post-deployment mental health assessments are completed for servicemembers who are deployed outside of the United States. Completion of the DD 2796 is tracked as part of this quality assurance program. DOD delegated responsibility for developing procedures for the required quality assurance program to each of its uniformed services. The uniformed services have given unit commanders the responsibility to ensure completion of the DD 2796 by all servicemembers under their command. To ensure the DD 2796 is completed, one DOD official we interviewed told us that servicemembers would not be granted leave to go home until the DD 2796 was completed. Another official told us that Reserve and National Guard members would not be given their active duty discharge paperwork until the DD 2796 was completed. VA does not have all the information it needs to determine whether it can meet an increase in demand for VA PTSD services. 
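The screening review described above reduces to a simple rule: any "yes" answer to the four PTSD questions flags the questionnaire for provider follow-up. The sketch below is a hypothetical illustration, not DOD's actual software; per the report, a "yes" prompts the provider to gather more information and apply clinical judgment rather than trigger an automatic referral.

```python
# Hypothetical sketch of the DD 2796 PTSD screening review described above.
# Any "yes" answer to the four screening questions flags the servicemember
# for follow-up; the referral decision itself rests on clinical judgment.

PTSD_SCREEN_QUESTIONS = (
    "nightmares about it or unwanted thoughts about it",
    "tried hard not to think about it or avoided reminders of it",
    "constantly on guard, watchful, or easily startled",
    "felt numb or detached from others, activities, or surroundings",
)

def flag_for_follow_up(answers):
    """answers: four booleans, one per screening question; True means 'yes'."""
    if len(answers) != len(PTSD_SCREEN_QUESTIONS):
        raise ValueError("expected one answer per screening question")
    return any(answers)
```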
VA does not have a count of the total number of veterans currently receiving PTSD services at its medical facilities and Vet Centers. Without this information, VA cannot estimate the number of veterans its medical facilities and Vet Centers could treat for PTSD. VA could use demographic information it receives from DOD to broadly estimate the number of servicemembers who may access VA health care, including PTSD services. By assuming that 15 percent or more of returning servicemembers will develop PTSD, VA could use the demographic information to broadly estimate demand for PTSD services. However, predicting which veterans will seek VA care and at which facilities is inherently uncertain, particularly given that the symptoms of PTSD may not appear for years. VA does not have a count of the total number of veterans currently receiving PTSD services at its medical facilities and Vet Centers. Without this information, VA cannot estimate the number of additional veterans its facilities could treat for PTSD. On August 27, 2004, a Northeast Program Evaluation Center (NEPEC) official told us that a count of the total number of veterans with a diagnosis of PTSD who receive VA services at medical facilities could be obtained from VA’s existing database. However, this database does not include Vet Centers’ information because this information is kept separate from the medical facilities’ data. VA publishes two reports that contain information on some of the veterans receiving PTSD services at its medical facilities. Neither report includes all veterans receiving PTSD services at VA medical facilities and Vet Centers. VA’s annual capacity report, which is required by law, provides data on VA’s most vulnerable populations, such as veterans with spinal cord injuries, blind veterans, and seriously mentally ill veterans with PTSD. The NEPEC annual report mainly provides data on veterans with a primary diagnosis of PTSD. 
VA has not developed a methodology that would allow it to count the number of veterans receiving PTSD services at its medical facilities and Vet Centers. The PTSD data used in VA’s annual capacity report and the data used in NEPEC’s annual report are drawn from different—though not mutually exclusive—subgroups of veterans receiving PTSD services at VA’s medical facilities. VA developed criteria that allow it to determine which veterans should be included in each subgroup. VA’s criteria, which differ in each report, are based on the type and frequency of mental health services provided to veterans with PTSD at its medical facilities. (See Figure 2 for the veterans included in each of VA’s annual reports.) Veterans who are receiving VA PTSD services may be counted in both reports, only counted in the NEPEC report, or not included in either report. For example, a veteran who is seriously mentally ill and has a primary diagnosis of PTSD is counted in both reports. On the other hand, a veteran who has a primary diagnosis of PTSD but is not defined as seriously mentally ill is counted in the NEPEC report but not in the capacity report. Finally, a veteran who is receiving PTSD services only at a Vet Center is not counted in either report. Furthermore, both the VA OIG and VA’s Committee on Care of Veterans with Serious Mental Illness have found inaccuracies in the data used in VA’s annual capacity report. For example, OIG found inconsistencies in the PTSD program data reported by some VA medical facilities. OIG found that some medical facilities reported having active PTSD programs, although the facilities reported having no staff assigned to these programs. 
Additionally, the Committee on Care of Veterans with Serious Mental Illness, commenting on VA’s fiscal year 2002 capacity report, stated the data VA continues to use for reporting information on specialized programs are inaccurate and recommended changes in future reporting. (The Committee on Care of Severely Chronically Mentally Ill Veterans, established within VA under 38 U.S.C. § 7321 to assess VA’s capability to meet the rehabilitation and treatment needs of such veterans, is generally referred to as the Committee on Care of Veterans with Serious Mental Illness. See Department of Veterans Affairs, Capacity Report Fiscal Year 2002 (Washington, D.C.: May 2003).) VA agreed with OIG that the data were inaccurate and is continuing to make changes to improve the accuracy of the data in its annual capacity report. VA’s fiscal year 2003 capacity report to Congress is currently undergoing review by OIG, which informed us that VA has not incorporated all of the changes necessary for OIG to certify that the report is accurate. OIG further stated that it will continue to oversee this process. VA receives demographic information from DOD, including servicemembers’ home addresses, that could help VA predict the facilities or Vet Centers that could experience an increase in demand for care. By assuming that 15 percent or more of returning servicemembers will eventually develop PTSD, based on the predictions of mental health experts, VA could use the demographic information to broadly estimate the number of returning servicemembers who may need VA PTSD services and the VA facilities located closest to servicemembers’ homes. However, predicting which veterans will seek VA care and at which facilities is inherently uncertain, particularly given that the symptoms of PTSD may not appear for years. VA headquarters received demographic information from DOD in September 2003; however, during our review we found that VA had not shared this information with its facilities. 
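The broad estimate described above is straightforward arithmetic: apply the experts' 15 percent assumption to counts of returning servicemembers grouped by home location. The following is a hypothetical sketch of that calculation; the zip codes and counts are invented for illustration and the rounding choice is an assumption.

```python
from collections import Counter

# Hypothetical sketch of the broad estimate described above: apply the
# experts' 15 percent PTSD-rate assumption to returning servicemembers
# grouped by home zip code (the kind of demographic data DOD provides).
PTSD_RATE = 0.15

def estimate_ptsd_demand(home_zips):
    """home_zips: one home zip code per returning servicemember."""
    counts = Counter(home_zips)
    return {z: round(n * PTSD_RATE) for z, n in counts.items()}
```

As the report notes, such an estimate is inherently uncertain: it says nothing about which veterans will actually seek VA care, at which facilities, or when, since PTSD symptoms may not appear for years.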
On July 21, 2004, VA provided this information to its medical facilities for planning future services for veterans returning from the Iraq and Afghanistan conflicts. However, VA did not provide the demographic information to Vet Centers. Officials at seven VA medical facilities told us that while the demographic information VA receives from DOD has limitations, it is the best national data currently available and would help them plan for new veterans seeking VA PTSD services. Officials at six of the seven VA medical facilities we visited explained that while they are now able to keep up with the current number of veterans seeking PTSD services, they may not be able to meet an increase in demand for these services. In addition, some of the officials expressed concern about their ability to meet an increase in demand for VA PTSD services from servicemembers returning from Iraq and Afghanistan based on DOD’s demographic information. Officials are concerned because facilities have been directed by VA to give veterans of the Iraq and Afghanistan conflicts priority appointments for health care services, including PTSD services. As a result, VA medical facility officials estimate that follow-up appointments for veterans currently receiving care for PTSD may be delayed. VA officials estimate the delay may be up to 90 days. Veterans of the Iraq and Afghanistan conflicts will not be given priority appointments over veterans who have a service-connected disability and are currently receiving services. While the VA OIG continues to oversee VA’s efforts to improve the accuracy of data in the capacity reports, VA does not have a report that counts all veterans receiving VA PTSD services. Although VA can use DOD’s demographic information to broadly estimate demand for VA PTSD services, VA does not know the number of veterans it now treats for PTSD at its medical facilities and Vet Centers. 
As a result, VA will be unable to estimate its capacity for treating additional veterans who choose to seek VA’s PTSD services, and therefore, unable to plan for an increase in demand for these services. To help VA estimate the number of additional veterans it could treat for PTSD and to plan for the future demand for VA PTSD services from additional veterans seeking these services, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to determine the total number of veterans receiving VA PTSD services and provide facility-specific information to VA medical facilities and Vet Centers. In commenting on a draft of this report, VA concurred with our recommendation and acknowledged that more coordinated efforts are needed to improve its existing PTSD data. VA stated that it plans to aggregate, at the national level, the number of veterans receiving PTSD services at VA medical facilities and Vet Centers. We believe VA should provide these data to both its medical facilities and Vet Centers so they have the information needed to plan for future demand for PTSD services. In addition, VA provided two points of clarification. First, VA stated that it is in the process of developing a mental health strategic plan that will project demand by major diagnoses and identify where projected demand may exceed resource availability. VA stated that future revisions to the mental health strategic plan would include Vet Center data. Second, VA stated that it would seek additional information from DOD on servicemembers who have served in Iraq and Afghanistan to improve its provision of health care services to these new veterans. VA’s written comments are reprinted in appendix III. DOD concurred with the findings and conclusions in this report and provided technical comments, which we incorporated as appropriate. DOD’s written comments are reprinted in appendix IV. 
As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. We will then send copies of this report to the Secretary of Veterans Affairs and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge at the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7101. Another contact and key contributors are listed in appendix V. To determine the approaches DOD uses to identify servicemembers who are at risk for PTSD, we reviewed directives on screening servicemembers deployed to locations outside of the United States, interviewed DOD officials, and visited a military installation for each of DOD’s uniformed services. At each of the military installations, we discussed with officials the steps taken by each of the uniformed services to implement DOD’s approaches, particularly the steps involved in completing the post-deployment health assessment questionnaire, DD 2796, as it relates to PTSD. The extent to which the uniformed services implemented DOD’s approaches was reported in other GAO reports. The uniformed services included in our review were the Army, Marine Corps, Air Force, and Navy. We did not include the Coast Guard in this review because few Coast Guard servicemembers are involved in the Iraq and Afghanistan conflicts. The military installations visited were: Fort Lewis Army Base and Madigan Army Medical Center in Washington, Seymour Johnson Air Force Base in North Carolina, Camp Lejeune Marine Base and the Naval Hospital Camp Lejeune in North Carolina, and the Naval Medical Center San Diego in California. We also asked DOD officials whether they provide information to VA that could help VA plan how to meet the demand for VA PTSD services from servicemembers returning from the Iraq and Afghanistan conflicts. 
To determine whether VA has the information it needs to determine whether it can meet an increase in demand for PTSD services, we interviewed PTSD experts from the National Center for PTSD established within VA and members of the Under Secretary for Health’s Special Committee on PTSD. We also visited three divisions of the National Center for PTSD: the Executive Division in White River Junction, Vermont; the Education Division in Palo Alto, California; and NEPEC in West Haven, Connecticut, to review the Center’s reports on specialized PTSD programs. We also reviewed VA’s fiscal year 2001 and 2002 annual reports on VA’s capacity to provide services to special populations, including veterans with PTSD, and NEPEC’s annual reports on specialized PTSD programs to determine the criteria VA uses to count the number of veterans receiving VA PTSD services. We reviewed the findings of VA’s Committee on Care of Veterans with Serious Mental Illness and the VA OIG, which have reported on the accuracy of VA’s annual capacity report to Congress on the number of veterans receiving specialized services, including PTSD services. We interviewed officials from each of these groups to clarify their findings. We did not include data from the annual capacity reports because the OIG reported that the data were not sufficiently reliable. We also interviewed the director of NEPEC to discuss the information included in NEPEC’s annual reports. To determine whether VA facilities have the information needed to determine whether they can meet an increase in demand for PTSD services, we interviewed officials at 7 VA medical facilities and 15 Vet Centers located near the medical facilities to discuss the number of veterans currently receiving VA PTSD services and the impact that an increase in demand would have on these services. We also discussed DOD’s demographic information with four of the seven medical facilities we visited. 
We contacted VA medical facilities located in Palo Alto and San Diego in California; Durham and Fayetteville in North Carolina; White River Junction, Vermont; West Haven, Connecticut; and Seattle, Washington. We also contacted Vet Centers located in Vista, San Diego, and San Jose in California; Raleigh, Charlotte, Greenville, Greensboro, and Fayetteville in North Carolina; South Burlington and White River Junction in Vermont; Hartford, Norwich, and New Haven in Connecticut; and Seattle and Tacoma in Washington. Our work was conducted from May through September 2004 in accordance with generally accepted government auditing standards. 
[Appendix II reproduces DD Form 2796 (April 2003), the Post-Deployment Health Assessment. The form includes a Privacy Act statement; fields for deployment dates, locations, and occupational specialty; questions on health changes, vaccinations, medications taken during deployment, combat exposure, and possible environmental or occupational exposures; and the four PTSD screening questions quoted earlier in this report.] 
In addition to the contact named above, Mary Ann Curran, Linda Diggs, Martha Fisher, Krister Friday, and Marion Slachta made key contributions to this report. 
Defense Health Care: DOD Needs to Improve Force Health Protection and Surveillance Processes. GAO-04-158T, Washington, D.C.: October 16, 2003. 
Defense Health Care: Quality Assurance Process Needed to Improve Force Health Protection and Surveillance. GAO-03-1041, Washington, D.C.: September 19, 2003. 
Disabled Veterans’ Care: Better Data and More Accountability Needed to Adequately Assess Care. GAO/HEHS-00-57, Washington, D.C.: April 21, 2000. 
Post-traumatic stress disorder (PTSD) is caused by an extremely stressful event and can develop after the threat of death or serious injury, as in military combat. Experts predict that about 15 percent of servicemembers serving in Iraq and Afghanistan will develop PTSD. Efforts by VA to inform new veterans, including Reserve and National Guard members, about the expanded availability of VA health care services could result in an increased demand for VA PTSD services. GAO identified the approaches DOD uses to identify servicemembers at risk for PTSD and examined whether VA has the information it needs to determine whether it can meet an increase in demand for PTSD services. GAO visited military bases and VA facilities, reviewed relevant documents, and interviewed DOD and VA officials to determine how DOD identifies servicemembers at risk for PTSD, and what information VA has to estimate demand for VA PTSD services. DOD uses two approaches to identify servicemembers at risk for PTSD: the combat stress control program and the post-deployment health assessment questionnaire. The combat stress control program trains servicemembers to recognize the early onset of combat stress, which can lead to PTSD. Symptoms of combat stress and PTSD include insomnia, nightmares, and difficulties coping with relationships. To assist servicemembers in the combat theater, teams of DOD mental health professionals travel to units to reinforce the servicemembers' knowledge of combat stress symptoms and to help identify those who may be at risk for combat stress and PTSD. DOD also uses the post-deployment health assessment questionnaire to identify physical ailments and mental health issues commonly associated with deployments, including PTSD. 
The questionnaire includes the following four screening questions that VA and DOD mental health experts developed to identify servicemembers at risk for PTSD: Have you ever had any experience that was so frightening, horrible, or upsetting that, in the past month, you (1) have had any nightmares about it or thought about it when you did not want to; (2) tried hard not to think about it or went out of your way to avoid situations that remind you of it; (3) were constantly on guard, watchful, or easily startled; and/or (4) felt numb or detached from others, activities, or your surroundings? VA lacks the information it needs to determine whether it can meet an increase in demand for VA PTSD services. VA does not have a count of the total number of veterans currently receiving PTSD services at its medical facilities and Vet Centers--community-based VA facilities that offer trauma and readjustment counseling. Without this information, VA cannot estimate the number of new veterans its medical facilities and Vet Centers could treat for PTSD. VA has two reports on the number of veterans it currently treats, with each report counting different subsets of veterans receiving PTSD services. Veterans who are receiving VA PTSD services may be counted in both reports, one of the reports, or not included in either report. VA does receive demographic information from DOD, which includes home addresses of servicemembers that could help VA predict which medical facilities or Vet Centers servicemembers may access for health care. By assuming that 15 percent or more of servicemembers who have left active duty status will develop PTSD, VA could use the home zip codes of servicemembers to broadly estimate the number of servicemembers who may need VA PTSD services and identify the VA facilities located closest to their homes. 
However, predicting which veterans will seek VA care and at which facilities is inherently uncertain, particularly given that the symptoms of PTSD may not appear for years.
Immunizations are widely considered one of the leading public health achievements of the 20th century. Mandatory immunization programs have eradicated polio and smallpox in the United States and reduced the number of deaths from several childhood diseases, such as measles, to near zero. A consistent supply of many different vaccines is needed to support this effort. CDC currently recommends routine immunizations against 11 childhood diseases: diphtheria, tetanus, pertussis (whooping cough), Haemophilus influenzae type b (most commonly meningitis), hepatitis B, measles, mumps, rubella (German measles), invasive pneumococcal disease, polio, and varicella (chicken pox). By combining antigens (the component of a vaccine that triggers an immune response), a single injection of a combination vaccine can protect against multiple diseases. The federal government, primarily through agencies of the Department of Health and Human Services (HHS), has a role both as a purchaser of vaccines and as a regulator of the industry. The federal government is the largest purchaser of vaccines in the country. CDC negotiates large purchase contracts with manufacturers and makes the vaccines available to public immunization programs under the Vaccines for Children (VFC) program. Under VFC, vaccines are provided for certain children, including those who are eligible for Medicaid or uninsured. Participating public and private health care providers obtain vaccines through VFC at no charge. A second program, established under section 317 of the Public Health Service Act, provides project grants for preventive health services, including immunizations. Currently, CDC supports 64 state, local, and territorial immunization programs (for simplicity, we refer to them as state immunization programs). In total, about 50 percent of all the childhood vaccines administered in the United States each year are obtained by public immunization programs through CDC contracts. 
The federal government is also responsible for ensuring the safety of the nation’s vaccine supply. FDA regulates the production of vaccines. It licenses all vaccines sold in the United States, requiring clinical trials to demonstrate that vaccines are safe and effective, and reviews the manufacturing process to ensure that vaccines are made consistently in compliance with current good manufacturing practices. Once vaccines are licensed, FDA also conducts periodic inspections of production facilities to ensure that manufacturers maintain compliance with FDA manufacturing requirements. States also have an important role in immunization efforts. Policies for immunization requirements, including minimum school and day care entry requirements, are made almost exclusively at the state level, although cities occasionally impose additional requirements. Each state has also established an immunization infrastructure to monitor infectious disease outbreaks, administer federal immunization grants, manage centralized supplies of vaccine, and otherwise promote immunization policies. Recent vaccine shortages have necessitated temporary modifications to the recommended immunization schedule and have caused states to scale back immunization requirements. In our survey of 64 state immunization programs, administered through the Association for State and Territorial Health Officials (ASTHO), all 52 responding programs indicated that they had experienced shortages of two or more vaccines and had taken some form of action to deal with the shortages. Vaccine shortages experienced at the state level have, in turn, prompted cutbacks in immunization requirements for admission to day care or school. Thirty-five states reported putting into effect new, less stringent immunization requirements that allow children who have received fewer than the recommended number of vaccinations to attend school. 
In general, these states have reduced the immunization requirements for day care and/or school entry or have temporarily suspended enforcement of those requirements until vaccine supplies are replenished. For example, the Minnesota Department of Health suspended the school and postsecondary immunization laws for Td vaccine for the second year in a row, with the suspension extending through the 2002-2003 school year. Other states, including South Carolina and Washington, reported allowing children to attend day care or school even if they were not in compliance with immunization requirements, under the condition that they be recalled for vaccinations when supplies became available. While it is too early to measure the effect of deferred vaccinations on immunization rates, a number of states reported that vaccine shortages and missed make-up vaccinations may take a toll on coverage and, therefore, increase the potential for infectious disease outbreaks. The full impact of vaccine shortages is difficult to measure for several reasons. For example, none of the national immunization coverage surveys measures vaccination coverage of children under the age of 18 months—the age cohort receiving the majority of vaccinations. While immunization experts generally agree that the residual effects of historically high immunization rates afford temporary protection for underimmunized children, missed immunizations could make susceptible children vulnerable to disease outbreaks. For example, a CDC analysis of a 1998 outbreak of measles in an Anchorage, Alaska, school showed that only 51 percent of the 2,186 children exposed had received the requisite two doses of measles vaccine. No single reason explains the rash of recent vaccine shortages; rather, multiple factors coincided that affected both the supply of and demand for vaccines. We identified four key factors, as follows. Production Problems - Manufacturing production problems contributed to the shortage of certain vaccines. 
In some cases, production slowdowns or interruptions occurred when planned maintenance activities took longer than expected; in other cases, production was affected as manufacturers addressed problems identified in FDA inspections. Changes over the last several years in FDA inspection practices may have resulted in the identification of more or different instances of manufacturers’ noncompliance with FDA manufacturing requirements. For example, prior to these changes, biologics inspections tended to focus primarily on scientific or technical issues and less on compliance with good manufacturing practices and documentation issues. FDA did take some steps to inform manufacturers about its inspection program changes; however, some manufacturers reported problems related to how well the changes were communicated. FDA issued a compliance program guidance manual, intended for FDA staff, detailing the new protocol for conducting inspections. Although the information in the manual could have given manufacturers a better understanding of the scope of the inspections, it was not made widely available and was provided only upon request. Removal of Thimerosal - Calls for the removal of the preservative thimerosal from childhood vaccines illustrate the effect that policy changes can have on the supply of vaccine. As a precautionary measure, in July 1999, the American Academy of Pediatrics (AAP) and the U.S. Public Health Service (PHS) issued a joint statement advising that thimerosal in vaccines be eliminated or reduced as soon as possible. While thimerosal was present in several vaccines, removing it from some vaccines was more complex than for others. For example, one manufacturer of the diphtheria-tetanus-acellular pertussis vaccine (DTaP) had to switch its packaging from multidose to single-dose vials due to the removal of the preservative. This process reduced the manufacturer’s output of vaccine by 25 percent, according to the manufacturer. 
Manufacturer’s Decision to Discontinue Production - Another major factor in the shortage of DTaP, and also Td, was the decision of one manufacturer to discontinue production of all products containing tetanus toxoid. With little advance warning, the company announced in January 2001 that it had ceased production of these vaccines. According to the manufacturer, prior to its decision, it produced approximately one-quarter of all Td and 25 to 30 percent of all DTaP distributed in the United States, so the company’s departure from these markets was significant. In the previous year, another manufacturer that supplied a relatively small portion of DTaP also had stopped producing this vaccine. Together these decisions decreased the number of major manufacturers of DTaP from four to two and of Td from two to one. Unanticipated Demand - The addition of new vaccines to the recommended immunization schedule can also result in shortages if the demand for vaccine outstrips the predicted need and production levels. This was the case with a newly licensed vaccine, pneumococcal conjugate vaccine (PCV), which protects against invasive pneumococcal diseases in young children. PCV was licensed by FDA in February 2000 and formally added to the recommended schedule in January 2001. Company officials said an extensive education campaign prior to its availability resulted in record-breaking initial demand for the vaccine. CDC reported shortages of PCV existed through most of 2001, and the manufacturer was only able to provide about half the needed doses during the first 5 months of 2002. Ongoing manufacturing problems limit production, exacerbating the shortage. While the recent shortages have been largely resolved, the vaccine supply remains vulnerable to any number of disruptions that could occur in the future—including those that contributed to recent shortages and other potential problems, such as a catastrophic plant fire. 
One key reason is that the nature of vaccine manufacturing prevents the quick production of more vaccine when disruptions occur. Manufacturing a vaccine is a complex, highly controlled process, involving living biological organisms, that can take several months to over a year. Another underlying problem is the limited number of manufacturers—five of the eight recommended childhood vaccines have only one major manufacturer each. Consequently, if there are interruptions in supply or if a manufacturer ceases production, there may be few or no alternative sources of vaccine. One development that may help add to the supply of existing vaccines is the introduction of new vaccines. For example, a new formulation of DTaP recently received FDA approval and has helped ease the DTaP shortage. We identified 11 vaccines in development that could help meet the current recommended immunization schedule. These vaccines, some of which are already licensed for use in other countries, are in various stages of development, but all must undergo a rather lengthy process of clinical testing and FDA review. While FDA has mechanisms available to shorten the review process, they are not used for most vaccines under development. FDA policies generally restrict the use of its expedited review processes to vaccines that offer protection against diseases for which there are no existing vaccines. Because childhood vaccines under development often involve new forms or combinations of existing vaccines, they typically do not qualify for expedited FDA review. Federal efforts to strengthen the nation’s vaccine supply have taken on greater urgency with the recent incidents of shortages. As part of its mandate to study and recommend ways to encourage the availability of safe and effective vaccines, the National Vaccine Advisory Committee formed a work group to explore the issues surrounding vaccine shortages and identify strategies for further consideration by HHS. 
In its preliminary report, the work group identified several strategies that hold promise, such as streamlining the regulatory process, providing financial incentives for vaccine development, and strengthening manufacturers’ liability protection, but it concluded that these strategies needed further study. The work group did express support for expanding CDC vaccine stockpiles. In response to the work group’s finding that streamlining the regulatory process needed further study, FDA recently announced that it is examining regulations governing manufacturing processes for both drugs and vaccine products to determine if reform is needed. However, FDA officials told us it is too early to define the scope and time frame for this reexamination. Regarding financial incentives for vaccine development, the Institute of Medicine is currently conducting a study of vaccine pricing and financing strategies that may address this issue. In regard to liability protections, the work group did make recommendations to strengthen the Vaccine Injury Compensation Program (VICP). VICP is a federal program authorized in 1986 to reduce vaccine manufacturers’ liability by compensating individuals for childhood-vaccine-related injuries from a VICP trust fund. The program was established, in part, to help stem the exodus of manufacturers from the vaccine business due to liability concerns. Manufacturers, however, reported a recent resurgence of childhood-vaccine-related lawsuits—including class action lawsuits related to past use of thimerosal—in which plaintiffs allege that their claims are not subject to VICP. While the work group acknowledged that recent vaccine shortages do not appear to be related to VICP liability issues, it indicated that strengthening VICP would encourage manufacturers to enter, or remain in, the vaccine production business. Legislation has been introduced for the purpose of clarifying and modifying VICP. 
Also consistent with the work group’s recommendations, CDC is considering whether additional vaccine stockpiles will help stabilize the nation’s vaccine supply. In 1993, with the establishment of the VFC program, CDC was required to purchase sufficient quantities of pediatric vaccines not only to meet normal usage, but also to provide an additional 6-month supply to meet unanticipated needs. Further, to ensure funding, CDC was authorized to make such purchases in advance of appropriations. Despite this requirement, to date, CDC has established partial stockpiles for only two—measles-mumps-rubella (MMR) and inactivated polio vaccine (IPV)—of the eight recommended childhood vaccines. Even if CDC decides to stockpile additional vaccines, the limited supply and manufacturing capacity will restrict CDC’s ability to build certain stockpiles in the near term. CDC estimates it could take 4 to 5 years to build stockpiles for all the currently recommended childhood vaccines—at a cost of $705 million. Past experience also demonstrates the difficulty of rapidly building stockpiles. Neither the current IPV nor MMR stockpiles have ever achieved target levels because of limited manufacturing capacity. In addition to these challenges, CDC will also need to address issues regarding its authority, strategy, and information needed to use stockpiled vaccines. Authority - It is uncertain whether stockpiled vaccines purchased with VFC funds can be used for non-VFC-eligible children. While the 1993 legislation required the Secretary of HHS to negotiate for a 6-month stockpile of vaccines to meet unanticipated needs, the legislation did not state that the supply of stockpiled vaccines may be made available for children not otherwise eligible through the VFC program. CDC officials said that the VFC legislation is unclear as to whether stockpiled vaccines can be used for all children. 
Strategy - Expanding the number of CDC vaccine stockpiles will require a substantial planning effort—an effort that is not yet complete. For example, CDC has not made key decisions about vaccine stockpiles to ensure their ready release, including the quantity of each vaccine to stockpile, the form of storage, and storage locations. Also, to keep use of a stockpile from disrupting supply to other purchasers, procedures would need to be developed to ensure that stockpiles represent quantities in addition to a manufacturer’s normal inventory levels.
Vaccine shortages began to appear in November 2000, when supplies of the tetanus and diphtheria booster fell short. By October 2001, the Centers for Disease Control and Prevention (CDC) reported shortages of five vaccines that protect against eight childhood diseases. In addition to diphtheria and tetanus vaccines, vaccines to protect against pertussis, invasive pneumococcal disease, measles, mumps, rubella, and varicella were in short supply. In July 2002, updated CDC data indicated supplies were returning to normal for most vaccines. However, the shortage of vaccine to protect against invasive pneumococcal disease was expected to continue through at least late 2002. Shortages have prompted federal authorities to recommend deferring some vaccinations and have caused most states to reduce or suspend immunization requirements for school and day care programs so that children who have not received all mandatory immunizations can enroll. States are concerned that children who defer vaccinations may fail to receive them at a later date, which could reduce the share of the population protected and increase the potential for disease to spread; however, data are not currently available to measure these effects. Many factors, including production problems and unanticipated demand for new vaccines, contributed to recent shortages. Although problems leading to the shortages have largely been resolved, the potential exists for shortages to recur. Federal agencies and advisory committees are exploring ways to help stabilize the nation's vaccine supply, but few long-term solutions have emerged. Although CDC is considering expanding vaccine stockpiles to provide a cushion in the event of a supply disruption, limited supply and manufacturing capacity will restrict CDC's ability to build them.
The PMA initiative to improve financial performance is aimed at ensuring that federal financial systems produce accurate and timely information to support operating, budget, and policy decisions. It focuses on key issues such as data reliability, clean financial statement audit opinions, and effective internal control and financial management systems. Our work in these areas over a number of years demonstrates the importance of the improvement efforts that are underway. The Congress enacted a number of statutory reforms during the 1990s in the area of financial management. Although progress has been made under the PMA, the federal government is a long way from successfully implementing these reforms. Reliable information, including cost data, is critical for effective performance measurement to support program management decisions in areas ranging from program efficiency and effectiveness to sourcing and contract management. For effective management, this information must not only be timely and reliable, but also both useful and used. Under this PMA initiative, agencies are expected to implement integrated financial and performance management systems that routinely produce information that is (1) timely—to measure and affect performance immediately, (2) useful—to make more informed operational and investing decisions, and (3) reliable—to ensure consistent and comparable trend analysis over time and to facilitate better performance measurement and decision making. Producing timely, useful, and reliable information is critical for achieving the goals that the Congress established in the Chief Financial Officers (CFO) Act of 1990 and other federal financial management reform legislation. 
The executive branch management scorecard for the financial performance area not only recognizes the importance of achieving an unqualified or “clean” opinion from auditors on financial statements, but also focuses on the fundamental and systemic issues that must be addressed in order to routinely generate timely, accurate, and useful financial information and provide sound internal control and effective compliance systems, which represents the end goal of the CFO Act. For fiscal year 2004, OMB accelerated agencies’ financial statement reporting date to November 15, 2004, as compared with January 30, 2004, for fiscal year 2003. Twenty-two of twenty-three CFO Act agencies were able to issue their fiscal year 2004 financial statements by the accelerated reporting date, a significant improvement in the timeliness of these statements. Eighteen of these agencies were able to attain unqualified audit opinions on their financial statements. At the same time, the growing number of CFO Act agencies that restated certain of their financial statements for fiscal year 2003 to correct errors emerged as an issue of concern that merits close scrutiny. Eleven of the twenty-three CFO Act agencies fell into this category in fiscal year 2004, as compared with at least five CFO Act agencies that had restatements of prior year financial statements in fiscal year 2003. Frequent restatements to correct errors can undermine public trust and confidence in both the entity and all responsible parties. The scorecard also measures whether agencies have any material internal control weaknesses or material noncompliance with laws and regulations, and whether agencies meet Federal Financial Management Improvement Act (FFMIA) of 1996 requirements. 
As stated in the PMA, without sound internal controls and accurate and timely financial information, it will not be possible to accomplish the President’s agenda to secure the best performance and highest measure of accountability for the American people. Reinforcing the PMA’s emphasis on effective internal controls, OMB revised Circular A-123, Management’s Responsibility for Internal Control, in December 2004. These revisions recognize that effective internal control is critical to improving federal agencies’ effectiveness and accountability and to achieving the goals that the Congress established in 1950 and reaffirmed in 1982 with passage of the Federal Managers’ Financial Integrity Act (FMFIA). The Circular correctly recognizes that instead of considering internal control as an isolated management tool, agencies should integrate their efforts to meet the FMFIA requirements with other efforts to improve effectiveness and accountability. Internal control should be an integral part of the entire cycle of planning, budgeting, management, accounting, and auditing. It should support the effectiveness and the integrity of every step of the process and provide continual feedback to management. We support OMB’s efforts to revitalize FMFIA, particularly the principles-based approach in the revised Circular A-123 for establishing and reporting on internal control that should increase accountability. This approach provides a floor for expected behavior, rather than a ceiling, and by its nature, calls for greater judgment on the part of those applying the principles. Accordingly, clear articulation of objectives, the criteria for measuring whether the objectives have been successfully achieved, and the rigor with which these criteria are applied will be critical. 
Providing agencies with supplemental guidance and implementation tools, which OMB and the CFO Council are developing, is particularly important in light of the varying levels of maturity in internal control across government as well as the divergence in implementation of a principles-based approach that is typically found across entities with varying capabilities. A challenge of great complexity that many agencies face is ensuring that underlying financial management processes, procedures, and information systems are in place for effective program management. Agencies need to take steps to (1) continuously improve internal controls and underlying financial and management information systems to ensure that managers and other decision makers have reliable, timely, and useful financial information to ensure accountability; (2) measure, control, and manage costs; (3) manage for results; and (4) make timely and fully informed decisions about allocating limited resources. Meeting FFMIA requirements presents long-standing, significant challenges that will only be met through time, investment, and sustained emphasis on correcting deficiencies in federal financial management systems. The widespread systems problems facing the federal government need sustained management commitment at the highest levels of government to ensure that these needed modernizations come to fruition. PMA provides the visibility needed for sustaining these efforts. Much work remains to be done across government to improve financial performance, as shown by the December 2004 scorecards. Of the 23 CFO Act agencies that OMB scored, 15 were rated red for financial performance. 
This is not surprising, considering the well-recognized need to transform financial management and other business processes at agencies such as the Department of Defense (DOD), the results of our analyses under FFMIA, the various financial management operations we have designated as high risk, and known long-standing material weaknesses. Seven agencies improved their scores to green from the initial baseline evaluation for financial performance, which was as of September 30, 2001; however, several agencies’ scores declined, reflecting increased challenges. Overhauling financial management represents a challenge that goes far beyond financial accounting to the very fiber of an agency’s business operations and management culture, particularly at agencies with long-standing problems, such as DOD. For the new Department of Homeland Security (DHS), establishing sound financial management is a critical success factor. In the area of financial performance, the federal government is a long way from successfully implementing needed financial management reforms. Widespread financial management system weaknesses, poor recordkeeping and documentation, weak internal controls, and the lack of information have prevented the federal government from having the cost information it needs to effectively and efficiently manage operations through measuring the full cost and financial performance of programs and accurately reporting a large portion of its assets, liabilities, and costs. The government’s ability to adequately safeguard significant assets has been impaired by these conditions. Across government, there is a range of financial management improvement initiatives under way that, if effectively implemented, will improve the quality of the government’s financial management and reporting. Federal agencies have started to make progress in their efforts to modernize their financial management systems and improve financial management performance as called for in PMA. 
However, until these challenges are adequately addressed, they will continue to present a number of adverse implications for the federal government and the taxpayers. At the same time, the need for timely, reliable, and useful financial and performance information is greater than ever. Our nation’s large and growing long-term fiscal imbalance, which is driven largely by known demographic trends and rising health care costs, coupled with new homeland security and defense commitments and the recent downward trend in revenue as a share of gross domestic product, serves to sharpen the need to fundamentally review and re-examine the base of federal entitlement, discretionary, and other spending and tax policies. Clearly, tough choices will be required to address the resulting structural imbalance. Improper payments are a longstanding, widespread, and significant problem in the federal government. The Congress enacted the Improper Payments Information Act (IPIA) of 2002 to address this issue of improper payments. The separate improper payments PMA program initiative began in the first quarter of fiscal year 2005. Previously, agency efforts related to improper payments were tracked along with other financial management activities as part of the Improved Financial Performance initiative. The objective of establishing a separate initiative for improper payments was to ensure that agency managers are held accountable for meeting the goals of the IPIA and are therefore dedicating the necessary attention and resources to meeting IPIA requirements. Across the federal government, improper payments occur in a variety of programs and activities, including those related to health care, contract management, federal financial assistance, and tax refunds. 
Improper payments include inadvertent errors, such as duplicate payments and miscalculations, payments for unsupported or inadequately supported claims, payments for services not rendered, payments to ineligible beneficiaries, and payments resulting from fraud and abuse by program participants and/or federal employees. Many improper payments occur in federal programs that are administered by entities other than the federal government, such as states, municipalities, and intermediaries such as insurance companies. Generally, improper payments result from a lack of or an inadequate system of internal control, but some result from program design issues. Federal agencies’ estimates of improper payments based on available information for fiscal year 2004 exceeded $45 billion. This estimate could increase significantly over the next several years as agencies become more effective at estimating and reporting improper payment amounts for programs and activities that are susceptible to significant improper payments. Of the 15 agencies identified for this PMA initiative, no agencies were rated green and 10 were rated red in the first scores for this initiative as of December 31, 2004. These results are consistent with our previous work both agencywide and in specific program areas. For example, our preliminary reviews of 29 federal agencies’ fiscal year 2004 PARs suggest that a number of agencies were not well positioned to meet the reporting requirements of IPIA. Additionally, improper payments for specific programs have been identified as a high-risk area. For example, the Centers for Medicare & Medicaid Services has made improvements in assessing the level of improper payments, collecting overpayments from providers, and building the foundation for modernizing its information technology. Nevertheless, much work remains to be done given the magnitude of its challenges in safeguarding program payments. 
This includes more effectively overseeing Medicare’s claims administration contractors, managing the agency’s information technology initiatives, and strengthening financial management processes across multiple contractors and agency units. In light of these challenges and the program’s size and fiscal significance, Medicare remains on our list of high-risk programs. For Medicaid, an estimate of improper payments was not reported for fiscal year 2004. Our prior work has demonstrated that attacking improper payments requires a strategy appropriate to the organization and its particular risks. We have found that entities using successful strategies to help address their improper payments shared a common focus of improving the internal control system—the first line of defense in safeguarding assets and preventing and detecting errors and fraud. As discussed in the Comptroller General’s Standards for Internal Control in the Federal Government, the components of any control system are: control environment—creating a culture of accountability; risk assessment—performing analyses of program operations to determine if risks exist; control activities—taking actions to address identified risk areas; information and communications—using and sharing relevant, reliable, and timely information; and monitoring—tracking improvement initiatives and identifying additional actions needed to further improve program efficiency and effectiveness. Effective implementation of the IPIA will be an important step towards addressing the longstanding, significant issue of improper payments. OMB has an important role, and we support its efforts to call attention to this issue. Fiscal year 2004 represents the first year that federal agencies were required to report the improper payment information required by the IPIA in their Performance and Accountability Reports (PAR). 
IPIA raised improper payments to a new level of importance by requiring federal agencies to annually review all programs and activities and identify those that may be susceptible to significant improper payments. Federal agencies are required to estimate the annual amount of improper payments for those programs and activities identified as susceptible to significant improper payments. The law further requires federal agencies to report to the Congress the improper payment estimates and information on the actions the agency is taking to reduce the improper payments. OMB implementation guidance required that estimates and, if applicable, the corrective action report, be included in federal agencies’ PARs beginning with fiscal year 2004. OMB’s guidance addresses the specific reporting requirements called for in the act and lays out the general steps agencies are to perform to meet those requirements. The guidance defines key terms used in the law, such as programs and activities, and offers criteria that clarify the meaning of the term significant improper payments. It requires that agencies use statistical sampling when estimating improper payments and sets statistical sampling confidence and precision levels for estimation purposes. It also requires that agencies report the results of their improper payment activities in their annual PAR. The ultimate success of the legislation and the PMA initiative hinges on each agency’s diligence and commitment in identifying, estimating, determining the causes of, taking corrective actions, and measuring progress in reducing all improper payments. Designating this area as a separate program initiative under the PMA will bring visibility to this problem, which we hope will lead to action and further progress. The PMA recognizes that people are an important organizational asset to an agency. Under the PMA, agencies are to implement a comprehensive human capital plan that aligns with agency mission and goals. 
Considerable progress has been made in strategic human capital management since we designated it as high risk in 2001. For example, OMB recently reported that agencies are making improvements in addressing key human capital challenges. Nevertheless, ample opportunities exist for agencies to improve their strategic human capital management to achieve results and respond to current and emerging challenges. Specifically, agencies continue to face challenges in four key areas:
Leadership: Agencies need sustained leadership to provide the focused attention essential to completing multiyear transformations.
Strategic Human Capital Planning: Agencies need effective strategic workforce plans to identify and focus their human capital investments on the long-term issues that best contribute to results.
Acquiring, Developing, and Retaining Talent: Agencies need to continue to create effective hiring processes and use flexibilities and incentives to retain critical talent and reshape their workforces.
Results-Oriented Organizational Cultures: Agencies need to reform their performance management systems so that pay and awards are linked to performance and organizational results.
Going forward, federal agencies need to develop and effectively implement the human capital approaches that best meet their needs, resources, context, and authorities. 
While these approaches will depend on each organization’s specific situation, leading public sector organizations build an infrastructure that, at a minimum, includes (1) a human capital planning process that integrates the agency’s human capital policies, strategies, and programs with its program goals, mission, and desired outcomes; (2) the capabilities to effectively develop and implement a new human capital system; and importantly, (3) the existence of a modern, effective, and credible performance management system that includes adequate safeguards (such as reviews and appeal processes) to ensure fair, effective, non-discriminatory, and credible implementation of the new system. Our observations follow. Conducting strategic human capital planning: Such planning aligns human capital programs with programmatic goals and develops strategies to acquire, develop, and retain staff to achieve these goals. As part of the PMA, agencies are to implement a workforce planning system to identify and address gaps in mission critical occupations and competencies and develop succession strategies. Agencies are experiencing significant challenges to deploying the right skills, in the right places, at the right time in the wake of extensive downsizing during the early 1990s that was done largely without sufficient consideration of the strategic consequences. Agencies are also facing a growing number of employees who are eligible for retirement and are finding it difficult to fill certain mission-critical jobs, a situation that could significantly drain agencies’ institutional knowledge. For example, the achievement of DOD’s mission is dependent in large part on the skills and expertise of its civilian workforce. We recently reported that DOD’s future strategic workforce plans may not result in workforces that possess the critical skills and competencies needed. 
Among other things, DOD and the components do not know what competencies their staff needs to do their work now and in the future and what type of recruitment, retention, and training and professional development workforce strategies should be developed and implemented to meet future organizational goals. It is questionable whether DOD’s implementation of its new personnel reforms will result in maximum effectiveness and value. Building the capability to develop and implement human capital systems: An essential element to acquiring, developing, and retaining a high-quality workforce is effective use of human capital flexibilities. These flexibilities represent the policies and practices that an agency has the authority to implement in managing its workforce. As part of the PMA, agencies are to establish goals to accelerate their hiring processes, monitor their progress, and implement needed improvements. We reported that agencies must take greater responsibility for maximizing the efficiency and effectiveness of their individual hiring processes within the current statutory and regulatory framework that Congress and the Office of Personnel Management (OPM) have provided and recommended that OPM take additional actions to assist agencies in strengthening the federal hiring process. We subsequently reported that although Congress, OPM, and agencies have all undertaken efforts to help improve the federal hiring process, agencies appeared to be making limited use of the new hiring flexibilities provided by Congress in 2002—category rating and direct hire. Consistent with our findings and recommendations, OPM has taken a number of important actions to assist agencies in their use of hiring flexibilities. For example, OPM issued final regulations on the use of category rating and direct-hire authority, providing some clarification in response to various comments it had received on the interim regulations. 
Also, OPM conducted a training symposium to provide federal agencies with further instruction and information on ways to improve the quality and speed of the hiring process. Implementing modern, effective, and credible performance management systems: Effective performance management systems can help drive internal change and achieve external results. Such systems are not merely used for expectation setting and rating processes, but are also used to facilitate two-way communication so that discussions about individual and organizational performance are integrated and ongoing. Leading public sector organizations have created a clear linkage—a "line of sight"—between individual performance and organizational success. Under the PMA, agencies are to establish performance appraisal plans for all senior executives and managers that link to agency mission, goals, and outcomes. Recently, Congress and the administration have sought to modernize senior executive performance management by establishing a new performance-based pay system for the Senior Executive Service (SES) that is designed to provide a clear and direct linkage between SES performance and pay. Under the new system, an agency can raise the pay cap for its senior executives if OPM certifies, and OMB concurs, that the agency's performance management system, as designed and applied, makes meaningful distinctions based on relative performance. However, data suggest that more work is needed in making such distinctions. Agencies rated about 75 percent of senior executives at the highest level their systems permit in fiscal year 2003, the most current year for which data are available, which is about the same percentage as in fiscal year 2002. Congress has recently given agencies such as NASA, DHS, and DOD statutory authorities to help them manage their human capital strategically to achieve results. 
In this environment, the federal government is quickly approaching the point where "standard governmentwide" human capital policies and processes are neither standard nor governmentwide. To be effective, human capital reform needs to avoid further fragmentation within the civil service, ensure reasonable consistency within the overall civilian workforce, and help maintain a reasonably level playing field among federal agencies competing for talent. To help advance the discussion concerning how governmentwide human capital reform should proceed, GAO and the National Commission on the Public Service Implementation Initiative hosted a forum on whether there should be a governmentwide framework for human capital reform and, if so, what this framework should include. While there were divergent views among the forum participants, there was general agreement on a set of principles, criteria, and processes that could serve as a starting point for further discussion in developing a governmentwide framework for advancing needed human capital reform, as shown in figure 1. There is general recognition of the need to continue developing a governmentwide framework for human capital reform that Congress and the administration can implement to enhance performance, ensure accountability, and position the nation for the future. Nevertheless, how it is done, when it is done, and on what basis it is done can make all the difference. Agencies authorized to implement any statutory authority should demonstrate that they have the capacity, not just the design, to do so. The principles, criteria, and processes suggested above can help ensure consistency when granting both (1) agency-specific human capital authorities so agencies can design and implement effective human capital systems to help them address 21st century challenges and succeed in their transformations and (2) governmentwide reform to provide broad consistency where desirable and appropriate. 
The current administration has taken several steps to strengthen the integration of budget, cost, and performance information for which the Government Performance and Results Act (GPRA), the CFO Act, and the Government Management Reform Act (GMRA) laid the groundwork. The budget and performance integration initiative includes elements such as the Program Assessment Rating Tool (PART) used to review programs, an emphasis on improving outcome measures, and improved monitoring of program performance. Another effort is budget restructuring, which is meant to improve the alignment of resources with performance. None of these efforts is simple or straightforward. Integrating management and performance issues with budgeting is critical for progress in government performance and management. Such integration is important to ensuring that management initiatives obtain the resources and the sustained leadership commitment throughout government needed to be successful. GPRA was enacted to provide a greater focus on performance in the federal government with the expectation that this focus would be linked and integrated with the budget. In its first 10 years, GPRA succeeded in expanding the supply of performance information and institutionalizing a culture of performance. In 2002, OMB introduced a formal assessment tool into executive branch budget deliberations: PART is the central element of the performance budgeting piece of the PMA. GPRA expanded the supply of performance information generated by federal agencies, and OMB's PART builds on GPRA by actively promoting the use of results-oriented information to assess programs in the budget. It has the potential to promote a more explicit discussion and debate among OMB, the agencies, and the Congress about the performance of selected programs. The promise of performance budgeting is that it can help shift the focus of budgetary debates and oversight activities by changing the agenda of questions asked. 
Performance information can help policymakers address a number of questions such as whether programs are (1) contributing to their stated goals, (2) well-coordinated with related initiatives at the federal level or elsewhere, and (3) targeted to those most in need of services or benefits. Results-oriented information is also needed for better day-to-day management and agency decisionmaking. It can provide information on what outcomes are being achieved, whether resource investments have benefits that exceed their costs, and whether program managers have the requisite capacities to achieve promised results. PART reviews are directed towards answering many of these questions; in many cases these reviews illustrated how far we have to go before performance information can be used with complete confidence. While no data are perfect, agencies need to have sufficiently credible performance data to provide transparency of government operations so that Congress, program managers, and other decision makers can use the information. However, as our work on PART and GPRA implementation shows, limited confidence in the credibility of performance data has been a longstanding weakness. Credible performance information can facilitate a fundamental reassessment of what the government does and how it does business by focusing on the outcomes—or program results—achieved with budgetary resources. Our work has shown that agencies are making progress, but improvement is needed to ensure that agencies measure performance toward a comprehensive set of goals that focus on results. We have previously reported that stakeholder involvement appears critical for getting consensus on goals and measures. Although improving outcome measures continues to be a major focus of PART reviews, as we reported in our January 2004 report, these assessments are conducted during the executive branch budget formulation process. 
An agency's communication with stakeholders, including Congress, about goals and measures created or modified during the formulation of the President's budget is likely to be less extensive than during the development of the agency's own strategic or performance plan. Moreover, for performance information to more fully inform resource allocations, decision makers must also feel comfortable with the appropriateness and accuracy of the performance information and the measures associated with these goals. It is unlikely that decision makers will use performance information unless they believe it is credible and reliable and reflects a consensus about performance goals among a community of interested parties. Similarly, the measures used to demonstrate progress toward a goal, no matter how worthwhile, cannot serve the interests of only a single stakeholder or purpose without potentially discouraging use of this information by others. OMB's budget restructuring effort represents more than structural or technical change. It reflects important trade-offs among the different and valid perspectives and needs of different decision makers. The structure of appropriations accounts and congressional budget justifications reflects the fundamental choices and incentives considered most important. As such, changes to the account structure have the potential to change the nature of management and oversight and ultimately the relationship among the primary budget decision makers: Congress, OMB, and agencies. This suggests that the goal of enhancing the use of performance information in budgeting is a multifaceted challenge that must build on a foundation of accepted goals, credible measures, reliable cost and performance data, tested models linking resources to outcomes, and performance management systems that hold agencies and managers accountable for performance. 
Understanding performance issues requires an in-depth evaluation of the factors contributing to program results. Targeted evaluation studies can be designed to detect important program side effects or to assess the comparative advantages of current programs over alternative strategies for achieving a program's goals. Further, although the evaluation of programs in isolation may be revealing, it is often critical to understand how each program fits within a broader portfolio of tools and strategies to accomplish federal missions and performance goals. Such an analysis is necessary to determine whether a program complements and supports other related programs, whether it is duplicative or redundant, or whether it actually works at cross-purposes with other initiatives. Although the administration has taken some steps to use PART for crosscutting reviews, these fall short of the more expansive planning and review process called for in GPRA. Although clearly much more remains to be done, the statutory reforms of the 1990s have laid the foundation for performance budgeting by establishing infrastructures in the agencies to improve the supply of information on performance and costs. The success of performance budgeting cannot be measured merely by the number of programs "killed" or by funding changes matched against performance "grades." Rather, success must be measured in terms of the quality of the discussion, the transparency of the information, the meaningfulness of that information to key stakeholders, and how it is used in the decision-making process. The determination of priorities is a function of competing values and interests that may be informed by performance information but also reflects such factors as equity, unmet needs, and the perceived appropriate role of the federal government in addressing these needs. 
If members of Congress and the executive branch have better information about the link between resources and results, they can make the trade-offs and choices cognizant of the many and often competing claims on the federal budget. Electronic government, or e-government, has been seen as promising a wide range of benefits based largely on harnessing the power of the Internet to facilitate interconnections and information exchange between citizens and their government. Federal agencies have implemented a wide array of e-government applications, including using the Internet to collect and disseminate information and forms; buy and pay for goods and services; submit bids and proposals; and apply for licenses, grants, and benefits. Although substantial progress has been made, the government continues to face challenges in fully reaching its potential in this area. Recognizing the magnitude of challenges facing the federal government, Congress has enacted important legislation to guide the development of e-government. Specifically, in December 2002, Congress enacted the E-Government Act of 2002 with the general purpose of promoting better use of the Internet and other information technologies to improve government services for citizens, internal government operations, and opportunities for citizen participation in government. Among other things, the act required the establishment of an Office of Electronic Government within OMB to oversee implementation of the act's provisions. The act also mandated additional actions to strengthen e-government activities in a number of specific areas, including accessibility and usability of government information, protection of personal privacy, coordination of information related to disaster response and recovery, and common protocols for geographic information systems. Additionally, title III of the act includes provisions to strengthen agency information security, known as the Federal Information Security Management Act of 2002. 
To implement the PMA initiative, OMB has taken a number of actions. The centerpiece of the effort has been oversight of 25 high-profile e-government projects covering a wide spectrum of government activities, ranging from the establishment of centralized portals on government information to eliminating redundant, nonintegrated business operations and systems. For example, Grants.gov is a Web portal for all federal grant customers to find, apply for, and ultimately manage federal grants online. Other e-government efforts, such as the e-payroll initiative to consolidate federal payroll systems, do not necessarily rely on the Internet. The results of these e-government initiatives, according to OMB, could produce several billion dollars in savings from improved operational efficiency. More recently, OMB has initiated efforts to develop common business-driven, governmentwide solutions in five e-government "lines of business": case management, federal health architecture, grants management, human resources management, and financial management. These efforts are also expected to reap cost savings and gains in efficiency. While many e-government initiatives are showing tangible results, we found, in March 2004, that overall progress on the 25 OMB-sponsored e-government initiatives was mixed. At that time we reported that, of the 91 objectives originally defined in the initiatives' work plans, 33 had been fully or substantially achieved; 38 had been partially achieved; and for 17, no significant progress had been made. In addition, three of the objectives were no longer being pursued, because they had been found to be impractical or inappropriate. 
We found that the extent to which the 25 initiatives had met their original objectives could be linked to a common set of challenges that they all faced, including (1) focusing on achievable objectives that address customer needs, (2) maintaining management stability through executive commitment, (3) collaborating effectively with partner agencies and stakeholders, (4) driving transformational changes in business processes, and (5) implementing effective funding strategies. Initiatives that had overcome these challenges generally met with success in achieving their objectives, whereas initiatives that had problems dealing with these challenges made less progress. Additionally, as we reported in December 2004, in most cases, OMB and federal agencies have taken positive steps toward implementing major provisions of the E-Government Act of 2002. For example, OMB established the Office of E-Government in April 2003, and published guidance to federal agencies on implementing the act in August 2003. Apart from general requirements applicable to all agencies (which we did not review), we found that in most cases, OMB and designated federal agencies had taken action to address the act's requirements within stipulated timeframes. To help ensure that the act's objectives are achieved, we made recommendations to OMB regarding implementation of the act in the areas of e-government approaches to crisis preparedness, contractor innovation, and federally funded research and development. OMB's PMA scorecard for the expanded electronic government initiative reflects a broad view of the many components of an effective program for expanding electronic government. For example, the scorecard assesses whether an agency has an enterprise architecture in place that is linked to the Federal Enterprise Architecture, which is intended to provide a governmentwide framework to guide and constrain federal agencies' enterprise architectures and information technology investments. 
The federal government's efforts in this area are still maturing. In May 2004, we reported that the Federal Enterprise Architecture remained very much a work in progress and that agencies' enterprise architectures were likewise still maturing. When we surveyed agencies in 2003, we found that only 20 of 96 agencies had established at least the foundation for effective architecture management and that the level of maturity had not changed much over the previous years. In addition, OMB's e-government scorecard requires agencies to properly secure their information technology systems, a task that has been daunting for many government agencies. We recently reported that although agencies were generally reporting an increasing number of systems meeting key statutory information security requirements, challenges nevertheless remained. For example, only 7 of 24 agencies reported that they had tested contingency plans for 90 percent or more of their systems. Contingency plans provide specific instructions for restoring critical systems in case the usual facilities are significantly damaged or cannot be accessed due to unexpected events, and testing of these plans is essential to determining whether they will function as intended in an emergency situation. The federal government needs to undertake a fundamental review of who will do the government's business in the 21st century. In this regard, agencies are assessing what functions and transactions the private sector could perform, and in many cases they are asking agency employees to compete with private entities for this business. The objectives of the PMA initiative on competitive sourcing are to improve quality and reduce costs. Aspects of the government's process for making sourcing decisions had been criticized as cumbersome, complicated, and slow. Against this backdrop, and in response to a requirement in the National Defense Authorization Act for fiscal year 2001, I convened a panel of experts to study the process. 
The Commercial Activities Panel, consisting of representatives from agencies, federal labor unions, private industry, and other individuals with expertise in this area, conducted a yearlong study. The panel members heard repeatedly about the importance of competition and its central role in fostering economy, efficiency, and continuous performance improvement. The panel strongly supported continued emphasis on competition and concluded that whenever the government is considering converting work from one sector to another, public-private competitions should be the norm, consistent with the 10 overarching principles adopted unanimously by the panel. As part of the administration's efforts to advance this PMA initiative and implement the recommendations of the Commercial Activities Panel, OMB revised Circular A-76, which sets forth federal policy for determining whether federal employees or private contractors will perform commercial activities. The revisions are broadly consistent with the principles and recommendations of the panel. In particular, the revised circular stresses the use of competition in making sourcing decisions and, through reliance on procedures contained in the Federal Acquisition Regulation, should result in a more transparent, expeditious, fair, and consistently applied competitive process. We continue to review various aspects of this initiative. One issue not fully addressed in the revised circular was the right of federal employees or their representatives to file protests challenging the conduct or the outcomes of public-private competitions. In April 2004, we issued a decision holding that federal employees lacked standing to file such protests under the Competition in Contracting Act (CICA). We pointed out that the Congress would have to amend CICA in order to provide that right. 
Congress amended CICA late last year, and just last week, after receiving and considering various public comments, we issued final regulations implementing the change. The federal real property portfolio is vast and diverse—over 30 agencies control hundreds of thousands of real property assets worldwide, including facilities and land worth hundreds of billions of dollars. Unfortunately, much of this vast, valuable portfolio reflects an infrastructure based on the business model and technological environment of the 1950s. Many of these assets are no longer effectively aligned with, or responsive to, agencies' changing missions. Further, many assets are in an alarming state of deterioration; agencies have estimated restoration and repair needs to be in the tens of billions of dollars. Maintaining underused or unneeded federal property is also costly due to day-to-day operational costs, such as regular maintenance, utility fees, and security expenses. Compounding these problems are the lack of reliable governmentwide data for strategic asset management; a heavy reliance on costly leasing, instead of ownership, to meet new needs; and the cost and challenge of protecting these assets against terrorism. In January 2003, we designated federal real property as a high-risk area due to these longstanding problems. In February 2004, the President added the Federal Asset Management Initiative to the President's Management Agenda and signed Executive Order 13327 to address challenges in this area. The order requires senior real property officers at all executive branch departments and agencies to, among other things, develop and implement an agency asset management plan; identify and categorize all real property owned, leased, or otherwise managed by the agency; prioritize actions needed to improve the operational and financial management of the agency's real property inventory; and make life-cycle cost estimations associated with the prioritized actions. 
In addition, the senior real property officers are responsible, on an ongoing basis, for monitoring the real property assets of the agency. The order also established a new Federal Real Property Council (the Council) at OMB. In April 2005, OMB officials updated us on the status of the implementation of the executive order. According to these officials, all of the senior real property officers are in place, and the Council has been working to identify common data elements and performance measures to be captured by agencies and ultimately reported to a governmentwide database. In addition, OMB officials reported that agencies are working on their asset management plans. Plans for DOD, the Departments of Veterans Affairs (VA) and Energy, and the General Services Administration (GSA) have been completed and approved by OMB. The Council has also developed guiding principles for real property asset management. These guiding principles state that real property asset management must, among other things, support agency missions and strategic goals, use public and commercial benchmarks and best practices, employ life-cycle cost-benefit analysis, promote full and appropriate utilization, and dispose of unneeded assets. In addition to these reform efforts, Public Law 108-447 gave GSA the authority to retain the net proceeds from the disposal of federal property for fiscal year 2005 and to use such proceeds for GSA's real property capital needs. Also, Public Law 108-422 established a capital asset fund and gave VA the authority to retain the proceeds from the disposal of its real property for the use of certain capital asset needs such as demolition, environmental clean-up, and major repairs. And, agencies such as DOD and VA have made progress in addressing longstanding federal real property problems and governmentwide efforts in the facility protection area are progressing. 
For example: VA has implemented a process called Capital Asset Realignment for Enhanced Services (CARES) to address its aging and obsolete portfolio of health care facilities. In March 2005, we reported that through CARES, VA identified 136 locations for evaluation of alternative ways to align inpatient services: 99 of these facilities had potential duplication of services with another nearby facility or low acute patient workload. VA made decisions to realign inpatient health care services at 30 of these locations. For example, it will close all inpatient services at five facilities. VA's decisions on inpatient alignment and plans for further study of its capital asset needs are tangible steps in improving management of its capital assets and enhancing health care. Accomplishing its goals, however, will depend on VA's success in completing its evaluations and implementing its CARES decisions to ensure that resources now spent on unneeded capital assets are redirected to health care. In DOD's support infrastructure management area, which we identified as high-risk in 1997, DOD has made progress and expects to continue making improvements. In April 2005, we testified that DOD's infrastructure costs continue to consume a larger portion of its budget than DOD believes is desirable. For several years, DOD has been concerned about its excess facilities infrastructure, which affects its ability to fund weapons system modernization and other critical needs. DOD has achieved some operating efficiencies from such efforts as base realignments and closures, consolidations, and business process reengineering. Despite this progress, much work remains for DOD to transform its support infrastructure so that it can concentrate resources on critical needs. 
DOD also needs to strengthen its recent efforts to develop and refine its comprehensive long-range plan for its facility infrastructure to ensure adequate funding for facility sustainment, modernization, and recapitalization. In light of the need to invest in facility protection since September 11, funding available for repair and restoration and for preparing excess property for disposal may be further constrained. The Interagency Security Committee (ISC), which is chaired by DHS, is tasked with coordinating federal agencies' facility protection efforts, developing standards, and overseeing implementation. In November 2004, we reported that ISC had made progress in coordinating the government's facility protection efforts by, for example, developing security standards for leased space and design criteria for security in new construction projects. Despite this progress, we found that ISC's actions to ensure compliance with security standards and oversee implementation have been limited. Nonetheless, the ISC serves as a forum for addressing security issues, which can have an impact on agencies' efforts to improve real property management. The inclusion of real property asset management on the President's Management Agenda, the executive order, and agencies' actions are clearly positive steps in an area that had been neglected for many years. However, despite the increased focus on real property issues in recent years, the underlying conditions—such as excess and deteriorating properties and costly leasing—continue to exist, and more needs to be done to address various obstacles that led to our high-risk designation. For example, the problems have been exacerbated by competing stakeholder interests in real property decisions, various legal and budget-related disincentives to businesslike outcomes, and the need for better capital planning among real property-holding agencies. 
In light of this, we continue to believe that there is a need for a comprehensive and integrated transformation strategy for federal real property. Realigning the government's real property assets with agency missions, taking into account the requirements of the future federal role and workplace, will be critical to improving the government's performance and ensuring accountability within expected resource limits. A transformation strategy could serve as a useful guide for implementing further change and achieving such results. As my testimony today has highlighted, serious and disciplined efforts are needed to improve the management and performance of federal agencies and to ensure accountability. Along with OMB's leadership in implementing the PMA, it will take the attention of Congress, the administration, and federal agencies for progress to be sustained and, more importantly, accelerated. The stakes associated with federal program performance are large, both for beneficiaries of these programs and the nation's taxpayers. Policymaking institutions will be challenged to shift from the traditional focus on incremental changes in spending or revenues to look more fundamentally at the programs, policies, functions, and activities in addressing current and emerging national needs and problems across levels of government and sectors, including all major areas of the federal budget—discretionary spending, entitlements and other mandatory spending, and tax policies and programs. Congressional support has proven to be critical in sustaining interest in management initiatives over time. Congress has served as an institutional champion for many reform initiatives over the years, such as the CFO Act and GPRA. Our March 2004 report on GPRA found that it has established a solid foundation for achieving greater results, but that significant challenges to GPRA implementation still exist. 
Our survey data suggested that more federal managers, especially at the SES level, believed that OMB was paying attention to their agencies’ efforts under GPRA. However, we found inconsistent commitment in other areas where OMB could further enhance its leadership. Agencies’ plans and reports still suffer from persistent weaknesses and could improve in a number of areas, such as attention to issues that cut across agency lines, and better information about the quality of the data that underlie agency performance goals. We recommended that OMB improve its guidance and oversight of GPRA implementation, as well as develop a governmentwide performance plan. As discussed earlier, GPRA requires a governmentwide performance plan, but OMB has not issued a distinct plan since 1999. Most recently, the President’s fiscal year 2006 budget described agencies’ progress in addressing the PMA and the results of PART reviews of agencies’ programs. While such information is important and useful, alone it is not adequate to provide a broader and more integrated perspective of planned performance on governmentwide outcomes. The PART focus on individual programs needs to be supplemented by a more crosscutting assessment of the relative contribution of portfolios of programs and tools to broader outcomes. Most key performance goals of importance—ranging from low income housing to food safety to counterterrorism—are addressed by a wide range of discretionary, entitlement, tax, and regulatory approaches that cut across a number of agencies. Preparing a governmentwide plan could build on the administration’s efforts to assess progress across the government as well as contribute to efforts to compare the performance results across similar programs that address common outcomes. 
Although there has been limited progress, efforts to date have not provided the Congress and others with an integrated perspective on the extent to which programs and tools contribute to national goals and position the government to successfully meet 21st century demands. We also suggested that Congress consider amending GPRA to require that the President develop a governmentwide strategic plan. Although it generally agreed with our recommendations, OMB stated that the President’s Budget can serve as both a governmentwide strategic and annual plan. However, we believe that the budget provides neither a long-term nor an integrated perspective on the federal government’s performance. A strategic plan for the federal government, supported by a set of key national indicators to assess the government’s performance, position, and progress, could provide an additional tool for governmentwide reexamination of existing programs, as well as proposals for new programs. Such a plan could be of particular value in linking agencies’ long-term performance goals and objectives horizontally across the government and could provide a basis for integrating, rather than merely coordinating, a wide array of federal activities. This raises the issue of the need for a set of key indicators to inform decision makers about the position and progress of the nation as a whole and to help set agency and program goals and priorities. Further, given the financial constraints we are likely to face for many years to come and the trends at work that are changing the world in which our government operates, a fundamental review of major program and policy areas is needed to update the federal government’s programs and priorities to meet current and future challenges. Our recent report on 21st Century Challenges is intended to help the Congress in reviewing and reconsidering the base of federal spending and tax programs. 
As this Subcommittee is well aware, the nature and magnitude of the fiscal, security, economic, and other adjustments that need to be considered are not amenable to “quick fixes”; rather, they will likely require an iterative, thoughtful process of disciplined changes and reforms over many years. Therefore, providing an ongoing and consistent focus, such as the PMA has provided on management reform efforts, is an important element in helping to ensure that the federal government is managed effectively to achieve results important to the American people. Our report on 21st century challenges laid out some of the most pressing issues for policymakers to consider as the government increasingly relies on new networks and partnerships to achieve critical results. A complex network of governmental and nongovernmental entities—such as federal agencies, domestic and international non- or quasi-governmental organizations, for-profit and not-for-profit contractors, and state and local governments—contributes to shaping the actual outcomes achieved. Some of the issues are consistent with those raised by the PMA, such as in the area of real property asset management—focusing on opportunities to more strategically manage the federal government’s assets to make the federal portfolio more relevant to current missions and less costly. Moving forward, some additional questions that are particularly relevant to the focus of this hearing on improving governance include the following: In a modern society with advanced telecommunications and electronic information capabilities, which agencies still need a physical presence in all major cities? How can agencies more strategically manage their portfolio of tools and adopt more innovative methods to contribute to the achievement of national outcomes? 
How can greater coordination and dialogue be achieved across all levels of government to ensure a concerted effort by the public sector as a whole in addressing key national challenges and problems? What are the specific leadership models that can be used to improve agency management and address transformation challenges? For example, should we create chief operating officer or chief management officer positions with term appointments within selected agencies to elevate, integrate, and institutionalize responsibility and authority for business management and transformation efforts? Mr. Chairman, we are pleased to be able to participate in this hearing today. We have issued a large body of reports, guides, and tools on issues directly relevant to the PMA, and plan to continue to actively support congressional and agency actions to address today’s challenges and prepare for the future. As I have discussed in my statement today, although efforts to transform agencies by improving their management and performance are under way, more remains to be done to ensure that the government has the capacity to deliver on its promises, meet current and emerging needs, and remain relevant in the 21st century. Decisive action and sustained attention will be necessary to make the hard choices needed to reexamine and transform the federal government, maximize its performance, and ensure accountability. This concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As part of its work to improve the management and performance of the federal government, GAO monitors progress and continuing challenges related to the President's Management Agenda (PMA). The Administration has looked to GAO's high-risk program to help shape various governmentwide initiatives, including the PMA. GAO remains committed to working with the Congress and the Administration to help address these important and complex issues. The administration's implementation of the PMA has been a very positive initiative. It has raised the visibility of key management challenges, increased attention to achieving outcome-based results, and reinforced the need for agencies to focus on making sustained improvements in addressing long-standing management problems, including items on GAO's high-risk list. Our work shows that agencies have made progress in the areas covered by the PMA, and the Office of Management and Budget (OMB) has indicated it will continue to focus on high-risk areas during the President's second term. Importantly, OMB needs to pay additional attention to the Department of Defense's (DOD) many high-risk areas and overall business transformation efforts. While considerable progress has been made in connection with PMA issues, a number of significant challenges remain. In the area of financial performance, the PMA recognizes the importance of timely, accurate, and useful financial information and sound internal control. Agencies made significant progress in meeting accelerated financial statement reporting deadlines, and OMB has refocused attention on improving internal controls. However, agencies face several challenges--progress on financial management reforms lags, especially at DOD, which must overhaul its financial management and business operations. The PMA established a separate initiative for improper payments to ensure that agency managers are held accountable for meeting the goals of the Improper Payments Information Act of 2002. 
Effective implementation of this Act will be an important step toward addressing this area, which involves tens of billions of dollars. The PMA recognizes that people are an important organizational asset. A governmentwide framework for advancing human capital reform is needed to avoid further fragmentation within the civil service, ensure management flexibility as appropriate, allow a reasonable degree of consistency, provide adequate safeguards within the overall civilian workforce, and help maintain a level playing field among federal agencies competing for talent. The initiative to integrate management and performance issues with budgeting is critical for progress in government performance and management. OMB's Program Assessment Rating Tool (PART) is designed to use results-oriented information to assess programs in the budget formulation process. However, more should be done to assess how each program fits within the broad portfolio of tools and strategies used to accomplish federal missions. Many e-government initiatives are showing tangible results. However, the government continues to face challenges, such as establishing a federal enterprise architecture intended to provide a framework to guide agencies' enterprise architectures and investments. The inclusion of real property asset management on the PMA, an executive order, and agencies' actions are all positive steps in an area that had been neglected for years. However, the underlying conditions--such as excess and deteriorating properties--continue to exist. More needs to be done in areas such as improving capital planning among agencies.
CCC was originally incorporated in 1933 under a Delaware charter and was reincorporated in 1948 as a federal corporation within USDA by the Commodity Credit Corporation Charter Act (P.L. 80-806, June 29, 1948). Although CCC operates under a large number of statutory directives and limitations, its broad powers under the CCC Charter Act authorize it to carry out almost any operation required to meet its objectives. The principal operations that CCC funds are the income and commodity support programs. CCC also funds commodity export, resource conservation, and disaster assistance programs. CCC’s programs—including its income and commodity support, resource conservation, and disaster assistance programs and most of its commodity export programs—are classified as mandatory spending programs, and therefore CCC does not require annual appropriations in order to make outlays for them. Instead, CCC borrows funds from the Department of the Treasury to finance these programs. CCC may have outstanding borrowing of up to $30 billion at any one time. In contrast, several of CCC’s commodity export programs—the export credit guarantee programs and the Food for Peace Program—are financed primarily through direct annual appropriations in addition to other funding. CCC’s nonrecoverable losses are reimbursed through an annual appropriation. In fiscal year 1996, the appropriation included funds to cover the actual and estimated nonrecoverable losses from prior fiscal years as well as an advance on estimated future nonrecoverable losses. In fiscal year 1997, the appropriation included funds to cover actual losses from fiscal year 1996 only. In addition, CCC collects program receipts from its commodity programs—mainly commodity loan repayments and the proceeds from the sale of commodities held in inventory by CCC. Together, these appropriations and program receipts allow CCC to repay, with interest, its debt to the Treasury and to replenish its borrowing authority. 
Appendix I shows CCC’s flow of funds. A board of directors oversees CCC’s operations, subject to the supervision and direction of the Secretary of Agriculture, who is the ex officio chairperson of the board. The members of the board and the Corporation’s officers are all USDA officials. Over time, the direct role of the board in overseeing the Corporation’s operations has diminished; as of December 1997, the board had met only twice in the past 2 years. In general, the Corporation’s officers and their designees manage the Corporation’s business affairs. Appendix II lists CCC’s board of directors and officers. CCC has no employees—the programs it funds are carried out primarily through the personnel and facilities of several USDA agencies. For example, FSA administers all of CCC’s income and commodity support and disaster assistance programs and two of its resource conservation programs. FSA also handles the budgeting and accounting for all CCC programs. In addition, FAS administers CCC’s commodity export programs, and NRCS administers most of CCC’s resource conservation programs. The Corporation may also use the services of other government entities to help administer its programs. During fiscal years 1996 and 1997, CCC used its borrowing authority to finance most of its programs and related operations; only a few of its programs were financed through direct appropriations and other funding sources. Of the net outlays made with borrowing authority funds—about $5 billion in fiscal year 1996 and $7.5 billion in fiscal year 1997—most were for the income and commodity support programs. The remainder financed commodity export, resource conservation, and disaster assistance programs as well as administrative expenses. In addition to the net outlays made through its borrowing authority, CCC had net outlays for programs and activities that receive direct appropriations and/or other funding—principally several of its commodity export programs. 
Most of CCC’s funds in fiscal years 1996 and 1997 derived from its borrowing authority. This authority, limited by law to $30 billion in outstanding borrowing at any one time, fluctuated as loans were made from and repaid to the Department of the Treasury throughout the year. CCC replenished its borrowing authority through (1) annual appropriations—about $10.5 billion in fiscal year 1996 and $1.5 billion in fiscal year 1997—and (2) program receipts amounting to about $6.9 billion in fiscal year 1996 and $5.7 billion in fiscal year 1997. Several of CCC’s commodity export programs—the export credit guarantee programs and Food for Peace Program—received direct appropriations and other funding that totaled about $2.1 billion and $1.9 billion, in fiscal years 1996 and 1997, respectively. The appropriations provided for these programs were unrelated to the borrowing authority. In each of fiscal years 1996 and 1997, CCC was also authorized to use about $3 million in funds from USDA’s appropriation for hazardous waste management; CCC used the funds for cleanup initiatives for its commodity storage facilities. In addition, in fiscal year 1997, the Food for Peace Program returned to CCC about $25 million in unobligated funds. CCC’s net outlays (expenditures that take into account offsetting receipts) made through its borrowing authority totaled about $5 billion in fiscal year 1996 and about $7.5 billion in fiscal year 1997. Most of these outlays were for income and commodity support programs—about $4.4 billion and $5.1 billion, respectively, for that period. The remaining outlays were for CCC’s commodity export (excluding programs directly appropriated), resource conservation, and disaster assistance programs; and administrative expenses. Figures 1 and 2 depict the relative share of net outlays made with CCC borrowing authority funds in fiscal years 1996 and 1997, respectively. 
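The replenishment cycle described above is, at bottom, simple arithmetic: outlays draw down the $30 billion borrowing authority, and appropriations plus program receipts repay the Treasury and restore it. The sketch below is our own illustration, not CCC's actual accounting system; the class and method names are hypothetical, and it deliberately ignores interest and intra-year timing. The dollar figures are the fiscal year 1996 amounts cited in the text.

```python
# Illustrative model of CCC's borrowing-authority cycle (hypothetical
# class and method names; simplified, ignores interest and timing).

BORROWING_CAP = 30.0e9  # statutory limit on outstanding Treasury borrowing


class BorrowingAuthority:
    def __init__(self):
        self.outstanding = 0.0  # amount currently borrowed from the Treasury

    def borrow_for_outlays(self, amount):
        """Finance program outlays by borrowing from the Treasury."""
        if self.outstanding + amount > BORROWING_CAP:
            raise RuntimeError("would exceed the $30 billion borrowing limit")
        self.outstanding += amount

    def repay(self, appropriations, program_receipts):
        """Appropriations and program receipts repay the Treasury,
        replenishing the authority available for future borrowing."""
        self.outstanding = max(0.0, self.outstanding - appropriations - program_receipts)

    @property
    def headroom(self):
        return BORROWING_CAP - self.outstanding


# Fiscal year 1996 figures from the text:
ccc = BorrowingAuthority()
ccc.borrow_for_outlays(5.0e9)        # net outlays financed through borrowing
ccc.repay(appropriations=10.5e9,     # annual appropriation
          program_receipts=6.9e9)    # loan repayments, commodity sale proceeds
print(f"${ccc.headroom / 1e9:.1f} billion available")  # prints "$30.0 billion available"
```

Because fiscal year 1996 appropriations and receipts together exceeded the borrowed amount, the full $30 billion in authority is restored in this simplified model; in practice the balance fluctuates throughout the year as loans are made and repaid.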
As discussed, CCC’s net outlays for its income and commodity support programs were about $4.4 billion and $5.1 billion in fiscal years 1996 and 1997, respectively. In general, these programs assist producers through loans, purchases, payments, and other operations; they also make available the materials and facilities required to produce and market agricultural commodities. The CCC Charter Act, as amended, also authorizes CCC to sell agricultural commodities acquired under its income and commodity support programs to other government agencies and foreign governments (generating program receipts). CCC’s net outlays for the commodity export programs funded through its borrowing authority were about $391.2 million and $235.7 million for fiscal years 1996 and 1997, respectively. CCC’s export programs, including those funded by appropriations, help develop new foreign markets and increase the U.S. share in existing markets. For example, some programs provide credit guarantees that allow other countries to obtain commercial financing to purchase U.S. commodities; some provide exporters with cash or commodity bonuses in order to make U.S. commodities more price competitive in foreign markets; and yet another program provides government-to-government concessional sales of U.S. commodities, including lengthy repayment terms at low interest rates. CCC’s net outlays for its resource conservation programs were about $8.5 million and $1.8 billion in fiscal years 1996 and 1997, respectively. Recently added to CCC’s mission, these conservation programs became CCC programs in April 1996, following the passage of the Federal Agriculture Improvement and Reform Act of 1996 (P.L. 104-127, Apr. 4, 1996)—more commonly known as the 1996 farm bill. Several of these programs were created by the farm bill; others were previously funded through appropriations and administered by FSA or NRCS. 
Under some of the resource conservation programs, CCC purchases easements or rents cropland from agricultural land users in order to retire environmentally sensitive land from agricultural production or to preclude nonagricultural uses of the land. Under these and other CCC conservation programs, the Corporation may also share the cost of implementing conservation practices with agricultural land users through direct payments or low-cost loans. CCC’s net outlays for disaster assistance programs were about $127 million and $226.1 million in fiscal years 1996 and 1997, respectively. CCC’s disaster assistance programs provide a safety net to indemnify producers for extraordinary losses they may incur as a result of weather-related disasters, such as droughts or blizzards. In addition, the funding for these years included about $32.3 million in fiscal year 1996 and $29.3 million in fiscal year 1997 in emergency funding for activities to control and eradicate (1) a grain fungus, known as Karnal bunt, that affected wheat production in the southwestern United States and (2) an infestation of fruit flies that affected fruit and vegetable production in California. Appendixes III through VI provide additional information on each of CCC’s income and commodity support, commodity export, resource conservation, and disaster assistance programs, including each program’s purpose and its net outlays for fiscal years 1996 and 1997. CCC’s administrative expenses include (1) the purchase of computer and telecommunications equipment and services and (2) reimbursements to agencies within USDA and other government entities for services they provide to support CCC’s operations. CCC’s net outlays for computer and telecommunications equipment and services were about $77.5 million and $73.8 million in fiscal years 1996 and 1997, respectively. 
Its net reimbursements to other government agencies were about $41.6 million and $33.7 million, respectively, for these years, including about $11.8 million each year to FAS as payment for FAS’ costs in operating a computer facility for CCC. With regard to CCC’s spending from appropriations (excluding payments made to the Department of the Treasury to repay borrowing) and other funding, CCC’s aggregate net outlays totaled about $337.5 million in fiscal year 1996 and $41.3 million in fiscal year 1997. These totals included net outlays of about $334.4 million and $38.7 million in fiscal years 1996 and 1997, respectively, for the export programs that received direct appropriations and other funding. They also included net outlays of about $3.1 million and $2.6 million in these years, respectively, made with funds CCC was authorized to use from USDA’s appropriations for hazardous waste activities. CCC also had outlays of $139.5 million for net interest payments in fiscal year 1996 related to repaying its debt to the Treasury. CCC did not have outlays for net interest payments in fiscal year 1997 because its interest receipts exceeded its interest outlays by approximately $118.4 million. CCC’s interest receipts derived from the interest paid by producers on their commodity loans and the interest earned on funds CCC had on deposit with the Treasury. CCC uses a variety of management practices to control its funds: (1) controls over spending related to the annual budget and apportionment processes, (2) periodic reporting of its financial activities to the Congress, (3) FSA’s implementation of internal controls to protect CCC’s assets and account for its financial transactions, (4) program managers’ allocation and monitoring of CCC funds, and (5) periodic reviews of program activity by compliance staff from agencies responsible for implementing CCC programs. 
In addition, USDA’s Office of Inspector General (OIG) audits CCC’s annual financial statements, including its year-end expenditure reports. As a government-owned corporation, CCC is required to prepare a budget for each fiscal year in accordance with the provisions of the Government Corporation Control Act of 1945, as amended (31 U.S.C. 9103). This budget serves as a general operating plan that guides CCC’s spending. The budget, prepared by FSA’s Budget Division on behalf of CCC, is reviewed by USDA’s Office of Budget and Program Analysis as well as by the Office of Management and Budget (OMB). The budget is submitted to the Congress as part of the President’s annual budget submission. In reviewing CCC’s budget, the Congress may question some proposed expenditures. If the questioned expenditures concern one of CCC’s mandatory programs, the Congress must pass legislation to preclude CCC from using its funds for this program. On the other hand, if the questioned expenditures concern one of CCC’s appropriated programs, the Congress determines the amount of funds available to the program in USDA’s annual appropriations act. As discussed, CCC’s budget serves as a general operating plan that guides the Corporation’s spending. The planned expenditures in the budget—particularly with regard to CCC’s mandatory programs—are considered to be no more than estimates. For example, spending for some income and commodity support programs depends on variables—such as the weather, economic conditions, and commodity market prices—that are difficult to predict. Thus, CCC’s actual expenditures for these programs may be greater or less than initially estimated. At the same time, however, FSA officials said that CCC can pay out funds only for those programs included in its budget, unless the Congress directs it to do otherwise in legislation. OMB apportions (distributes) the funds available for obligation for selected CCC programs and operating expenditures. 
The approved apportionment by OMB follows the review and approval of CCC’s funding request by USDA’s Office of Budget and Program Analysis in consultation with appropriate policy officials. OMB apportions the funds available for CCC’s resource conservation programs, for purchasing computer and telecommunications equipment and services, and for reimbursing USDA agencies and other government entities. In addition, since fiscal year 1997, OMB has apportioned the funding for commodity export and disaster assistance programs. In general, funds are apportioned annually at the beginning of a fiscal year. However, OMB may choose to apportion funds on a quarterly or other basis. In addition, CCC may ask OMB to approve a reapportionment of funds during the fiscal year. For each program or operating expense, the amount OMB apportions sets a limit on the funds available for obligation and subsequent outlays. OMB’s apportionments also serve as a check to ensure CCC’s compliance with statutory funding caps or other legislatively mandated funding limitations. For example, provisions in the 1996 farm bill limited CCC’s funding for computer and telecommunications equipment and services to a maximum of $170 million in fiscal year 1996 and $275 million for fiscal years 1997 through 2002. In addition, funding for the reimbursement of agencies within USDA and other government entities for their support of CCC programs was capped at $45.6 million a year starting with fiscal year 1997. Furthermore, USDA’s annual appropriations legislation sometimes sets additional limits on funding for specific programs, as was the case with CCC’s Farmland Protection Program in fiscal year 1997. CCC issues two reports to the Congress on its financial activities. The first—CCC’s annual report—is required by the Government Corporation Control Act, as amended. This report provides an overview of the Corporation’s purpose, mission, and goals; financial and program summaries; and performance measures. 
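The role of apportionment as a check against statutory funding caps can be sketched as a simple lookup-and-compare step. The sketch below is our own construction, not OMB's actual apportionment process; the function name and data structure are hypothetical, but the dollar limits are the statutory caps cited in the text (the 1996 farm bill caps on computer and telecommunications spending, and the $45.6 million annual reimbursement cap beginning in fiscal year 1997).

```python
# Hypothetical cap-checking sketch; figures are the statutory limits
# cited in the text, all else is our own illustration.

STATUTORY_CAPS = {
    # (spending category, fiscal year) -> maximum available for obligation
    ("computer_telecom", 1996): 170_000_000,      # FY 1996 maximum
    ("computer_telecom", 1997): 275_000_000,      # applies to FY 1997-2002
    ("agency_reimbursements", 1997): 45_600_000,  # annual cap starting FY 1997
}


def check_apportionment(category, fiscal_year, requested):
    """Approve a requested apportionment only if it fits within any
    statutory cap; raise if the request would breach the cap."""
    cap = STATUTORY_CAPS.get((category, fiscal_year))
    if cap is None:
        return requested  # no statutory cap applies to this category/year
    if requested > cap:
        raise ValueError(f"request ${requested:,} exceeds statutory cap ${cap:,}")
    return requested


# FY 1997 reimbursements to USDA agencies ($33.7 million in net outlays)
# fit comfortably under the $45.6 million cap:
approved = check_apportionment("agency_reimbursements", 1997, 33_700_000)
```

In the actual process, of course, the cap applies to cumulative obligations over the year rather than to a single request, and OMB may apportion quarterly or reapportion mid-year; the sketch only illustrates the limit-setting function that apportionment serves.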
The report also contains CCC’s financial statements and accompanying notes and an OIG opinion letter on the OIG audit of CCC’s financial statements. The second report, a quarterly expenditure report known as the Summary Expenditure Report, is required by the CCC Charter Act, as amended. This report provides data on cumulative expenditures for similar products and services for the quarter and fiscal year. Both the annual report and the quarterly expenditure report are prepared by FSA’s Financial Management Division. The Summary Expenditure Report also provides detailed information on administrative expenditures, such as those for (1) purchases of computer and telecommunications equipment and services and (2) reimbursements paid to agencies within USDA and other government entities. For example, for computer and telecommunications purchases, the report lists outlays on a vendor-by-vendor basis, and for reimbursements, the report lists outlays on an agency-by-agency basis. FSA officials said they chose to provide this added level of detail on these types of expenditures to more fully disclose outlays that are subject to statutory funding caps and that therefore may be of particular interest to the Congress. The report is reviewed by USDA’s Office of the Chief Financial Officer before its submission to the Congress. It is also subject to an annual audit by the OIG. FSA’s financial management staff has made further changes to the expenditure report, beginning with fiscal year 1998, in response to concerns raised by the report’s congressional users. Most of these changes relate to the reporting of CCC’s outlays for computer and telecommunications equipment and services. Specifically, FSA has (1) eliminated the vendor-by-vendor detail, (2) included a cumulative total specifically for these outlays (as distinct from other administrative support and property outlays), and (3) added information on apportioned and obligated amounts associated with these outlays. 
In addition, FSA has added information on the apportioned and obligated amounts associated with outlays for reimbursements paid to USDA agencies and other government entities. FSA’s Financial Management Division has implemented a number of management controls intended to ensure that its accounting and financial management systems accurately reflect CCC’s financial activity and comply with applicable laws and regulations. These controls, also known as internal controls, include policies and procedures intended to provide FSA management with reasonable assurance that assets—such as cash, commodity inventories, computer and telecommunications equipment, and office furniture and supplies—are safeguarded against loss from unauthorized use or disposition. They are also intended to ensure that financial transactions—such as disbursing and collecting cash; authorizing and disbursing commodity loans, credits, and guarantee payments; and processing accounting entries—are executed as authorized by management and recorded properly to permit the preparation of CCC’s annual financial statements, quarterly Summary Expenditure Reports, and other periodic reports. The director of FSA’s Financial Management Division (who also serves as CCC’s controller) and members of his staff (who also serve as CCC’s treasurer and chief accountant) have the primary responsibility for issuing the policies and procedures that constitute the division’s internal control structure. These officials also assist in carrying out and evaluating the effectiveness of these controls. Each CCC program has a designated manager from the USDA agency responsible for implementing the program for CCC. The manager’s duties often include allocating and monitoring the use of program funds. These managers carry out these duties in consultation with their supervisors—usually division directors—and other agency personnel. 
For example, a manager’s recommended allocations of funds are reviewed by the manager’s supervisor and must usually be approved by the cognizant agency head. Similarly, in monitoring the use of funds, managers often rely on periodic reports summarizing obligations and outlays that are prepared by their agency’s financial management staff. Exceptions to this are the FSA managers responsible for CCC’s income and commodity support programs, who have little, if any, direct role in allocating or monitoring the use of funds. FSA officials said that because financial assistance under these programs is, in a sense, open-ended, managers of these programs do not manage against a specified funding level. Rather, program participation and, hence, program outlays depend on such variables as weather, economic conditions, and market prices—none of which is readily predictable. All producers who apply and qualify for benefits under these programs will receive them, unless CCC exhausts its $30 billion borrowing authority. However, other FSA managers of CCC programs, including the managers of CCC’s disaster assistance programs, are actively involved in managing the use of funds. For example, in fiscal year 1997, the manager of the Livestock Indemnity Program allocated and monitored the use of the $50 million authorized by the Congress to provide emergency relief to livestock producers in the upper Midwest during a particularly harsh winter. Under this program, FSA state and county office personnel in the affected states evaluated and approved qualified applicants, awarded funds, and reported the associated obligations and outlays through FSA’s financial accounting system. The program manager reviewed weekly reports from FSA’s Financial Management Division that summarized these obligations and outlays to ensure that the $50 million cap, as well as the share of these funds allocated to each affected county, was not exceeded. 
FAS managers of commodity export programs and NRCS managers of resource conservation programs are also generally involved in allocating and monitoring the use of program funds. For example, the FAS manager of the Market Access Program managed an annual budget of $90 million in fiscal years 1996 and 1997. Under this program, which finances promotional activities to expand the export of U.S. agricultural commodities, the manager evaluates and approves applicants’ proposals, awards funds, reviews subsequent reimbursement requests to ensure they do not exceed the amount of award, and authorizes payments to the appropriate parties. The manager also tracks obligations for this program in an FAS agricultural marketing database and obtains information on program outlays from FSA’s Financial Management Division. Similarly, the NRCS manager of the Wetlands Reserve Program managed a budget of $159.7 million and $137.9 million in fiscal years 1996 and 1997, respectively. This program offers producers payments for wetlands that have previously been drained and converted to agricultural uses. Under this program, the manager, with the approval of the Chief, NRCS, allocates funds by state. NRCS state and county office staff evaluate land offered by producers for enrollment in the program and award funds to purchase easements on the land selected. These staff report the obligations associated with these awards through NRCS’ financial system. The outlays, however, are reported by FSA staff working in these same offices, who pay landowners for the easements purchased, through their agency’s financial system. The program manager receives periodic reports summarizing obligations and outlays from NRCS’ financial management staff. To better ensure that funds are being properly used, the manager of the Wetlands Reserve Program said that he maintains his own database of program obligations that is based on data provided directly to him by his field staff. 
According to this official, keeping his own tally of obligations allows him to stay current on the program’s financial activity and progress towards meeting its enrollment goals. In addition to the activities of its program managers, NRCS has assigned a program official and a financial official to work in FSA’s Financial Management Division—the office responsible for managing CCC’s financial affairs. The program official works primarily with FSA officials on funding issues, including budget formulation, concerning NRCS’ CCC programs; the financial official works with FSA officials on accounting issues for these programs. According to senior NRCS officials, the assignment of these two staff reflects NRCS’ concern that it not inadvertently misuse CCC funds. The officials noted that working in a CCC-funded environment is still relatively new to NRCS because the agency became responsible for managing CCC-funded programs only after the passage of the 1996 farm bill. Periodically, compliance staff in each of the agencies responsible for administering CCC-funded programs review program activity, including the financial management of these programs. The results of these reviews are generally documented in written reports and sent to the relevant program office for response and corrective action, if necessary. For example, FAS’ compliance review staff conducts a financial and compliance review of each participant in the Market Access Program at least once every 3 years. Among other things, the review is intended to determine whether program expenses reimbursed by CCC were authorized and reasonable and whether the office administering the program has a financial system in place to track CCC’s resources. Annually, USDA’s OIG audits CCC’s comparative financial statements and its end-of-year Summary Expenditure Report. The results of these audits are reported to CCC’s board of directors. 
In general, the OIG’s objectives in conducting these audits are to determine whether (1) CCC’s financial statements fairly present the Corporation’s financial position, (2) CCC’s internal control structure provides reasonable assurance that specific program goals are achieved, and (3) CCC has complied with the laws and regulations for those transactions and events that could have a material effect on its financial statements. In accordance with USDA’s departmental regulations, CCC is required to reply to the OIG’s reports within 60 days of their issuance. If CCC concurs with the OIG’s findings, it must then describe corrective actions taken or planned and the time frames for implementation. A management decision must also be reached on all findings and recommendations within 6 months of a report’s issuance. During its most recent audit of CCC’s comparative financial statements (fiscal years 1996 and 1995) and its end-of-year expenditure report (fiscal year 1996), the OIG noted several material weaknesses in FSA’s internal controls. For example, the OIG found that FSA’s operations analysis staff was not obtaining operations review reports for the agency’s county offices. The reviews of these offices, whose activities are integral to the implementation of CCC’s income and commodity support, resource conservation, and disaster assistance programs, are conducted periodically by designated FSA state and county employees to identify systemic problems in office operations. According to the OIG, without reviewing compilations of these reports, the operations analysis staff would be unable to detect any nationwide problems that required corrective action and, if material, inclusion in FSA’s report under the Federal Managers’ Financial Integrity Act. 
In response, the operations analysis staff said that it was obtaining copies of the operations review reports from county offices, but that it lacked the staff resources and automated data processing capability to compile and analyze the reports. However, the staff agreed in principle with the need to do so. The OIG also found that FSA’s financial systems and related accounting procedures are not designed to readily and efficiently compile the data needed to prepare CCC’s Summary Expenditure Report in a timely manner. According to the OIG, these difficulties occur because CCC’s financial systems, which function on an accrual basis of accounting, cannot provide automated information on cash expenditures. Furthermore, the OIG found that the systems are not designed to provide automated data in the level of detail and categories required for the report. As a result, FSA financial management staff must manually extract some data from CCC’s financial systems and perform certain automated and manual referencing procedures to develop cash expenditures. In responding to the OIG’s finding, FSA’s Financial Management Division indicated that it was developing a new accounting system that it believes will significantly improve FSA’s ability to compile expenditure information for the Summary Expenditure Report. However, according to FSA financial management officials, the implementation of this accounting system may not be completed until fiscal year 1999. In addition, these officials said that the limitations of other accounting systems, such as those used by FSA’s disbursing offices, that will “feed” into the new system will continue to cause problems in preparing this report, necessitating some manual preparation of expenditure data. 
According to the OIG, the material weaknesses it noted in FSA’s internal control structure could adversely affect CCC’s ability to be reasonably assured that its transactions are properly recorded and accounted for so that it can prepare reliable financial statements and maintain accountability over its assets. The OIG also noted that some of these weaknesses were identified in previous audits of CCC’s financial statements. We found no instances in fiscal years 1996 and 1997 in which CCC’s funding for administrative uses exceeded the relevant statutory funding caps. Furthermore, each CCC program has a statutory basis for using CCC funds. We did not, however, perform a detailed review on the propriety of the individual administrative or programmatic transactions made in these years. CCC’s funding for administrative uses related to the purchases of computer and telecommunications equipment and services and the reimbursements paid to agencies within USDA and other government entities—in fiscal years 1996 and 1997—was within relevant statutory funding caps. As discussed, provisions in the 1996 farm bill limited CCC’s funding for computer and telecommunications purchases to a maximum of $170 million in fiscal year 1996 and $275 million for fiscal years 1997 through 2002. Furthermore, as discussed, the funding for the reimbursement of agencies within USDA and other government entities was capped at $45.6 million a year starting with fiscal year 1997. The funding for computer and telecommunications purchases and reimbursements paid to USDA agencies and other government entities is also subject to apportionment by OMB, which may further limit funds available for obligation. For example, USDA officials requested $80.9 million in CCC funds for computer and telecommunications purchases in fiscal year 1997. 
However, OMB apportioned only $54.8 million for this purpose that year because it believed that all of CCC’s ongoing and high-priority needs related to computer and telecommunications purchases could be met with this lesser amount. Table 1 provides information on the funding cap, apportionment, and obligation amounts associated with CCC’s funding for computer and telecommunications equipment and services in fiscal years 1996 and 1997. Table 2 provides similar information for reimbursement funding in these years. Each of CCC’s income and commodity support, commodity export, resource conservation, and disaster assistance programs has a statutory basis for using the Corporation’s funds to finance program operations. For example, provisions of the 1996 farm bill authorize the use of these funds for each of CCC’s resource conservation programs. Information on the statutory basis for using CCC funds for each CCC program is provided in appendixes III through VI. We provided a draft of this report to USDA for its review and comment. We met with the Administrator, Farm Service Agency, and other officials from USDA’s Foreign Agricultural Service, Farm Service Agency, Natural Resources Conservation Service, Office of Budget and Program Analysis, and Office of General Counsel. The officials agreed that the draft provided a comprehensive, accurate overview of Commodity Credit Corporation’s operations. They provided a number of technical changes and clarifications to the report, which we have incorporated as appropriate. In developing the information for this report, we interviewed and obtained documents from a broad range of USDA officials associated with CCC programs. Specifically, to obtain information on the amount of CCC funds available and spent, we interviewed FSA budget and financial management officials and reviewed relevant documents. To determine how these funds were used, we interviewed program staff in FSA, FAS, and NRCS. 
We also reviewed CCC’s annual financial reports, Summary Expenditure Reports, and documents related to the Corporation’s compensation of USDA agencies and other government entities for their support of CCC’s operations. To obtain information on the management practices used to control CCC funds, we interviewed and obtained documents from budget, financial, compliance review, and program officials in FSA, FAS, and NRCS as well as from the OIG. To obtain information on whether CCC’s funding for administrative purposes—computer and telecommunications purchases and reimbursements paid to USDA agencies and other government entities—conformed with statutory funding caps in fiscal years 1996 and 1997, we compared the obligations made by CCC for these purposes with the statutory caps and related apportionments by OMB. To obtain information on whether the programs CCC funded had a statutory basis for using CCC funds, our Office of General Counsel reviewed relevant statutes to determine the source of funding for these programs. We conducted our review from June 1997 through April 1998, in accordance with generally accepted government auditing standards. We did not, however, independently verify the accuracy of outlay data related to the operation of CCC programs. We are sending copies of this report to the appropriate congressional committees, interested Members of Congress, the Secretary of Agriculture, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available upon request. If you have any questions, please call me at (202) 512-5138. Major contributors to this report are listed in appendix VII.

CCC’s income and commodity support programs and activities, with the statutory basis for using CCC funds (parenthetical amounts, in millions of dollars, are carried over from the appendix tables):

- Farm Service Agency (FSA): Procure, store, transport, and dispose of commodities to support market prices and supply domestic and foreign food programs. ($292.2) ($106.9) 15 U.S.C.A. 714c.
- Increase competitiveness of U.S. cotton in world markets by making bonus payments to domestic users and exporters of this commodity. 7 U.S.C. 7236; 7281 (Supp. II, 1997).
- Purchase surplus butter, cheese, and nonfat dry milk from dairy processors to support the price of milk. 7 U.S.C. 7251; 7281 (Supp. II, 1997).
- Provide income support payments to producers who participated in wheat, feed grains, rice, or cotton programs prior to 1996. (1,118.5) 7 U.S.C. 1441-2; 1444-2; 1444f; 1445b-3a (1994).
- Provide direct payments to producers who agree not to obtain price support loans for wheat, feed grains, upland cotton, rice, or oilseeds. 7 U.S.C. 7235; 7281 (Supp. II, 1997).
- Provide price support loans to producers of wheat, feed grains, cotton, peanuts, tobacco, rice, sugar, and oilseeds. Producers may keep the money borrowed and forfeit the crop they pledged as collateral or repay the loan, depending on market prices. (950.6) 7 U.S.C. 7231; 7281 (Supp. II, 1997).
- Options Pilot Program: Support farm income through options contracts offered to producers of wheat, corn, and soybeans. Program in effect from 1993-2002. 7 U.S.C. 7331; 7281 (Supp. II, 1997).
- Provide price support loans to producers of peanuts. 7 U.S.C. 7271; 7281 (Supp. II, 1997).
- Provide income support payments to producers of selected crops; available through fiscal year 2002. Program is intended to transition producers from deficiency payments. 7 U.S.C. 7211; 7281 (Supp. II, 1997).
- Provide price support loans to producers of tobacco. 7 U.S.C. 1421; 1445.
- Provide price support payments to producers of wool and mohair. Program ended 12/31/95. 7 U.S.C. 1782 (1994).

Notes: Some of the entries in this list are not programs per se but represent significant activities related to CCC’s income and commodity support operations. The Federal Agriculture Improvement and Reform Act of 1996 (P.L. 104-127, Apr. 4, 1996)—also known as the 1996 farm bill—replaced deficiency payments with production flexibility contract payments.
The net receipts shown for fiscal year 1997 represent the return (by producers) of advance deficiency payments from prior years.

CCC’s commodity export programs, with the statutory basis for using CCC funds:

- Provide payments to exporters of U.S. dairy products to increase price competitiveness of these products in foreign markets. 15 U.S.C.A. 713a-14.
- Provide short-term U.S. government financing of commercial exports of U.S. agricultural commodities. 7 U.S.C. 5621.
- Sell dairy products from U.S. government inventory to foreign governments or private importers, consistent with the obligations of multilateral trade agreements. 7 U.S.C. 1731 note.
- Provide donations of surplus CCC-owned commodities to developing countries. 7 U.S.C. 1431b.
- Provide technical assistance to private and public organizations for projects designed to develop or expand foreign markets for U.S. agricultural commodities. 7 U.S.C. 5622 (Supp. II, 1997).
- Provide U.S. Government guarantees for repayment of private, short- and intermediate-term credit to promote the export of U.S. agricultural commodities and products. 7 U.S.C. 5622 note; 7 U.S.C. 5641b.
- Provide payments to exporters to increase price competitiveness of U.S. commodities in foreign markets. 7 U.S.C. 5651e (Supp. II, 1997).
- Provide government-to-government sales of U.S. commodities on concessional terms (Title I, administered by FAS) and donations and/or grants of commodities (Titles II & III, administered by the Agency for International Development). Program is targeted to developing countries to (1) combat hunger and malnutrition and (2) develop and expand foreign markets for U.S. commodities. 7 U.S.C. 1736 (1994) (Supp. II, 1997).
- Provide direct financing or grants of U.S. agricultural commodities to developing countries and emerging democracies. 7 U.S.C. 1736o.
- Market Access Program: Provide cost-share payments to eligible trade organizations that implement programs to develop or expand foreign markets for U.S. commodities. 7 U.S.C. 5623; 5641(c) (Supp. II, 1997).

Notes: One of these programs is authorized under section 416(b) of the Agricultural Act of 1949 and is commonly referred to as the section 416(b) program. CCC administers four export credit guarantee programs: (1) Supplier Credit Guarantee Program—CCC guarantees a portion of the financing that exporters have extended directly to importers for up to 180 days; (2) Export Credit Guarantee Program—CCC guarantees credit extended by private banks or exporters for up to 3 years; (3) Intermediate Export Credit Guarantee Program—CCC guarantees credit extended by private banks or exporters for up to 10 years; and (4) Facility Guarantee Program—CCC guarantees credit for financing manufactured goods and services exported from the United States to improve or establish facilities for handling, marketing, processing, storage, or distribution of imported agricultural commodities or products in emerging markets.

CCC’s resource conservation programs, with the statutory basis for using CCC funds:

- Consolidate payments for production flexibility contracts and the Conservation Reserve, Wetlands Reserve, and Environmental Quality Incentives Programs into one payment for eligible producers who agree to (1) forgo income and commodity support payments for 10 years and (2) adopt a conservation farm plan. 16 U.S.C.A. 3839bb.
- Provide land rental payments, for 10 to 15 years, to producers who agree to convert environmentally sensitive land to approved vegetative cover (usually grass or trees). Program also offers cost-share assistance to establish vegetative cover on enrolled land. 16 U.S.C.A. 3834; 3841a.
- Provide cost-share and technical assistance to producers who agree to enter into 5 to 10 year contracts to implement conservation practices, such as livestock waste containment. 16 U.S.C.A. 1341b.
- Provide assistance to states with existing farmland protection programs to purchase conservation easements. 16 U.S.C.A. 3830, note.
Additional resource conservation programs and CCC’s disaster assistance programs, with the statutory basis for using CCC funds:

- Provide payments to owners of farmland with high flood potential if the owner agrees to forgo certain income and commodity support payments. 7 U.S.C. 7334 (Supp. II, 1997).
- Provide land rental or restoration cost-share payments to producers who permanently return converted or farmed wetlands to prior condition. 16 U.S.C.A. 3841a.
- Provide cost-share payments to producers who develop or improve wildlife habitat on their land. 16 U.S.C.A. 3836a; 3841a(1).
- Provide payments to commodity producers for losses resulting from natural disasters. 7 U.S.C. 1421 note (1994).
- Provide payments to livestock producers for losses of feed grain crops, forage, and grazing resulting from natural disasters. 7 U.S.C. 1427a (1994).
- Provide partial reimbursement to livestock producers for losses of animals resulting from natural disasters. 7 U.S.C. 1427a (1994); P.L. 105-18, June 12, 1997; P.L. 105-86, Nov. 18, 1997; P.L. 105-119, Nov. 26, 1997.
- Provide assistance to livestock producers for losses of feed or livestock due to natural disasters. 7 U.S.C. 1471; 1427 (1994).
- Provide crop-loss payments to producers of commodities not covered by the Federal Crop Insurance Program. 7 U.S.C. 7333 (Supp. II, 1997).

Notes: One program combines crop disaster payments for multiple crops and years; disaster assistance under it was suspended for fiscal years 1996 through 2002 by the 1996 farm bill, and the amounts shown reflect outlays related to obligations made prior to fiscal year 1996. Another combines funding for the Emergency Feed and Livestock Emergency Assistance Programs; assistance under these programs was likewise suspended for fiscal years 1996 through 2002 by the 1996 farm bill, and the amounts shown reflect outlays related to obligations made prior to fiscal year 1996.
Pursuant to a congressional request, GAO provided information on how Commodity Credit Corporation (CCC) funds are spent and controlled, focusing on: (1) how much money CCC had available and spent in fiscal years (FY) 1996 and 1997, including the sources of these funds and the programs and activities for which they were used; (2) the management practices used to control CCC funds; (3) whether CCC's funding for administrative purposes fell within relevant statutory funding caps; and (4) whether the programs CCC funded had a statutory basis for using CCC funds. GAO noted that: (1) the amount of funds available to CCC through its $30-billion borrowing authority fluctuates as it alternately borrows against and replenishes the authority every business day; (2) to enable CCC to repay its debt associated with the borrowing authority, Congress made appropriations to CCC totaling $10.5 billion in FY 1996 and $1.5 billion in FY 1997; (3) CCC also received about $6.9 billion and $5.7 billion in program receipts--in FY 1996 and FY 1997, respectively--which it also used to replenish its borrowing authority; (4) in addition, CCC received separate appropriations and other funding totaling $2.1 billion in FY 1996 and $1.9 billion in FY 1997 to fund several of its commodity export programs that are not funded through its borrowing authority; (5) most of CCC's net outlays made through its borrowing authority were for its income and commodity support programs--about $4.4 billion and $5.1 billion, in FY 1996 and FY 1997, respectively; (6) the remaining outlays made in these years--about $640 million and $2.3 billion--were primarily for CCC's other programs; however, some were used for administrative purposes, such as purchasing computer and telecommunications equipment and reimbursing Department of Agriculture (USDA) agencies and other government entities for services provided to support CCC's operations; (7) in addition to the net outlays associated with its borrowing authority, CCC
had net outlays of about $334.4 million in FY 1996 and $38.7 million in FY 1997 for the commodity export programs that received direct appropriations and other funding; (8) a range of management practices are used to control CCC's funds; (9) these practices include: (a) controls over spending related to the annual budget and apportionment process; (b) CCC's periodic reports of its financial activities to Congress; (c) the Farm Service Agency's implementation of internal controls to protect CCC's assets and account for its financial transactions; (d) program managers' allocation and monitoring of CCC's funds used in their programs; and (e) periodic reviews of program activity by compliance staff from the agencies that implement CCC's programs; (10) in addition, the USDA's Office of Inspector General (OIG) audits CCC's annual financial statements, including its year-end expenditure reports; and (11) in a July 1997 report, the OIG noted problems with some of the Farm Service Agency's internal controls which it believes could adversely affect CCC's ability to prepare reliable financial statements and account for its assets.
The 1988 amendments to CSLA established the current U.S. policy to provide federal payment, subject to appropriations—known as indemnification—for a portion of claims by third parties for injury, damage, or loss that result from a commercial launch-related incident. All FAA-licensed commercial launches and reentries by U.S. companies, whether unmanned or manned and from the United States or overseas, are covered by federal indemnification for third party damages that result from the launch or reentries. Parties involved in launches—for example, passengers and crew—are not eligible for indemnification coverage. U.S. indemnification policy has a three-tier approach for sharing liability between the government and the private sector to cover third party claims: The first tier of coverage is the responsibility of the launch company and is handled under an insurance policy purchased by the launch company. As part of FAA’s process for issuing a license for a commercial launch or landing, the agency determines the amount of third party liability insurance a launch company is required to purchase so the launch company can compensate third parties for any claims for damages that occur as a result of activities carried out under the license. FAA calculates the insurance amount to reflect the maximum probable loss that is likely to occur because of an accident that results in third party damages, including deaths and injuries on the ground and damage to property from spacecraft debris. FAA uses a statistical approach to estimate expected losses based on estimated probabilities that a catastrophic incident could occur and the estimated costs of a catastrophic incident given the details of the specific launch. This first tier of required insurance coverage is capped at a maximum of $500 million for third party damages. The second tier of coverage is provided by the U.S.
government, and it covers any third party claims in excess of the specific first tier amount up to a limit of $1.5 billion adjusted for post-1988 inflation; in 2013, the inflation-adjusted amount was approximately $3 billion. For the federal government to be liable for these claims, Congress would need to appropriate funds. This second tier of coverage will expire in December 2016 unless Congress extends this date. (The other two tiers have no expiration date.) The third tier of coverage is for third party claims in excess of the second tier—that is, the federal coverage of $1.5 billion above the first tier, adjusted for inflation. Like the first tier, this third tier is the responsibility of the launch company, which may seek insurance above the required first tier amount for this coverage. Unlike the first tier, no insurance is required under federal law. Another component of U.S. indemnification policy for commercial space launches and reentries is cross waivers. They provide that each party involved in a launch (such as the launch company, the spacecraft manufacturer, and the customer) agrees not to bring claims against the other parties and assumes financial responsibility for damage to its own property or loss or injury sustained by its own employees. (Cross waivers also do not have an expiration date.) According to FAA, no FAA-licensed commercial space launch since 1989 has resulted in casualties or substantial property damage to third parties. In the event of a third party claim that exceeded the launch provider’s first-tier coverage, FAA would be involved in any negotiations, according to FAA officials, and the Secretary of Transportation must approve any settlement. From 2002 through 2012, U.S. companies conducted approximately 16 percent of commercial space launches worldwide, while Russia conducted 42 percent and France’s launch company conducted 25 percent. Figure 2 shows the trend in number of commercial space launches over the last 11 years.
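The three-tier liability-sharing structure described above can be sketched as a simple allocation. The tier boundaries are from the report (the first tier equals the FAA-set insurance amount; the second tier is roughly $3.0 billion in 2013 inflation-adjusted terms), while the loss amount and insurance requirement in the example are hypothetical; note that any second-tier payment would still depend on a congressional appropriation.

```python
# Sketch of how a third party loss would notionally be split across the
# three CSLA tiers. The ~$3.0 billion second-tier limit is the 2013
# inflation-adjusted figure from the report; the example loss and
# first-tier insurance amount below are hypothetical.
def split_loss(loss, first_tier_insurance, second_tier_limit=3_000_000_000):
    """Return (tier1, tier2, tier3) shares of a third party loss."""
    tier1 = min(loss, first_tier_insurance)                         # launch company's insurance
    tier2 = min(max(loss - first_tier_insurance, 0), second_tier_limit)  # government, if appropriated
    tier3 = max(loss - first_tier_insurance - second_tier_limit, 0)      # launch company again
    return tier1, tier2, tier3

# A hypothetical $500 million loss on a launch with a $100 million
# maximum-probable-loss insurance requirement:
print(split_loss(500_000_000, 100_000_000))  # → (100000000, 400000000, 0)
```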
Over the past several years Russian and French launches have generated the most revenues, followed by U.S. launches. In 9 of the last 11 years, U.S. commercial launch companies generated less revenue than launches in either Russia or France. U.S. companies generated no commercial launch revenue in 2011 because they conducted no launches. (See fig. 3.) As of July 2012, the United States provided less total third party liability coverage than China, France, or Russia—the primary countries that have conducted commercial space launches in the last 5 years—according to published reports. These countries each had an indemnification regime in which the government assumes a greater share of the risk than the U.S. government does, because each country had a two-tiered system with no limit on the amount of government indemnification. By comparison, the United States caps government indemnification at $1.5 billion adjusted for inflation beyond the first-tier insurance amount. However, U.S. government coverage, in some cases, begins at a lower level than that of the other countries because U.S. coverage begins above the maximum probable loss, which averaged about $82 million for active FAA launch and reentry licenses as of 2012, and ranged from about $3 million to $267 million. The level at which government coverage begins for the other three countries ranged from $79 million to $300 million. China, France, and Russia had a first tier of insurance coverage that a commercial launch company must obtain, similar to the United States. The second tier of government indemnification varied for these countries: The Chinese government provided indemnification for third party claims over $100 million. The French government provided indemnification for third party claims over 60 million euros (about $75 million as of May 2012).
The Russian government provided indemnification for third party claims over $80 million for the smaller Start launch vehicles and $300 million for the larger Soyuz and Proton vehicles. For all these countries, their commitments to pay have never been tested. Globally, there has never been a third party claim for damages from a commercial space launch failure that reached second-tier government coverage. The federal government’s potential costs under CSLA depend on (1) the occurrence of a catastrophic launch failure with third party claims that exceed the first tier of coverage and (2) Congress appropriating funds to cover the government’s liability under the second tier of coverage. FAA officials stated that no FAA-licensed commercial space launches or reentries have resulted in casualties or substantial property damage to third parties. As a result, FAA believed that it is highly unlikely that there will be any costs to the federal government under CSLA. In the event that a catastrophic failure did occur, FAA’s maximum probable loss calculation was intended to estimate the maximum losses likely to occur from a commercial space launch and determine the amount of third party losses against which launch companies must protect. In calculating maximum probable loss, FAA aimed to include estimates of losses from events having greater than a 1 in 10 million chance of occurring, meaning that losses are very unlikely to exceed launch companies’ private insurance and become potential costs for the government under CSLA. Under CSLA, if a rare catastrophic event were to occur which resulted in losses exceeding private insurance coverage, the government would be responsible for paying claims that exceeded FAA’s maximum probable loss only if Congress provided appropriations for this purpose. Under CSLA, the federal government does not incur a legal liability unless an appropriation is made for this purpose. 
Accordingly, an obligation would not be recorded in the federal budget unless and until such an appropriation is made. While an obligation is not incurred or recorded for potential CSLA losses until an appropriation is provided, some insurance companies told us that they expect the government to pay losses that become eligible for coverage under CSLA. While it is very difficult to assess catastrophic failures that have low probabilities but potentially high losses, FAA’s use of an appropriate process for determining the maximum probable loss is important because the maximum probable loss sets the point at which losses become potential costs to the government under CSLA. For our July 2012 report, we identified several issues that raised questions about the soundness of FAA’s maximum probable loss methodology: FAA used a figure of $3 million when estimating the cost of a single potential casualty—that includes either injury or death—which FAA officials said had not been updated since they began using it in 1988. Two insurers, as well as representatives of two companies that specialize in estimating damages from catastrophic events (modeling companies), said that this figure is likely understated. Because this number had not been adjusted for inflation or updated in other ways, it may not adequately represent the current cost of injury or death caused by commercial space launch failures. Having a reasonable casualty estimate can affect FAA’s maximum probable loss calculation and could affect the potential cost to the government from third party claims. FAA’s methodology for determining potential property damage from a commercial space launch started with the total cost of casualties and added a flat 50 percent to that cost as the estimate of property damage, rather than specifically analyzing the number and value of properties that could be affected in the event of a launch failure. 
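Under the FAA approach described above, the property-damage estimate is purely a function of the casualty estimate. A minimal sketch of that arithmetic follows; the $3 million per-casualty figure and the flat 50 percent property markup are from the report, while the expected-casualty count in the example is hypothetical.

```python
# Sketch of the FAA-style maximum probable loss arithmetic described in
# the text: total casualty cost = expected casualties x $3 million, and
# property damage = a flat 50 percent added on top of that, rather than
# a separately modeled property figure. The casualty count is hypothetical.
COST_PER_CASUALTY = 3_000_000  # figure in use since 1988, per the report

def faa_style_loss_estimate(expected_casualties):
    casualty_cost = expected_casualties * COST_PER_CASUALTY
    property_cost = 0.5 * casualty_cost  # flat 50% markup, not modeled
    return casualty_cost + property_cost

print(faa_style_loss_estimate(20))  # → 90000000.0
```

The example makes the critique concrete: under this method, property losses can never exceed half the casualty cost, even in a scenario where property damage dominates.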
One insurer and two risk modelers said that FAA’s approach is unusual and generally not used to estimate potential losses from catastrophic events. For example, officials from both modeling companies noted that the more common approach is to model the property losses first and derive the casualty estimates from the estimated property losses. For example, if a property loss scenario involves the collapse of a building, that scenario would have a different casualty expectation than a scenario that did not involve such a collapse. One modeler stated that FAA’s method might significantly understate the number of potential casualties, noting that an event that has a less than 1 in 10 million chance of occurring is likely to involve significantly more casualties than predicted under FAA’s approach. Moreover, a 2007 FAA review conducted with outside consultants said that this approach is not recommended because of observed instances where casualties were low yet forecasted property losses were very large. More broadly, FAA’s method did not incorporate what is known in the insurance industry as “catastrophe modeling.” One modeler told us that catastrophe modeling has matured over the last 25 years—as a result of better data, more scientific research, and advances in computing—and has become standard practice in the insurance and reinsurance industries. Catastrophe models consist of two components: a computer program that mathematically simulates the type of event being insured against and a highly detailed database of properties that could potentially be exposed to loss. Tens of thousands or more computer simulations are generated to create a distribution of potential losses and the simulated probability of different levels of loss. In contrast, FAA’s method involves estimating a single loss scenario. FAA officials told us that they had considered the possibility of using a catastrophe model. 
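The catastrophe-modeling approach the modelers describe generates many simulated loss scenarios and reads off the loss at a chosen exceedance probability. A toy sketch of the simulation component follows; the loss distribution here is entirely invented, and a real model would pair the simulator with a detailed property-exposure database.

```python
import random

random.seed(7)  # reproducible toy run

def simulate_loss() -> float:
    # Invented severity model: most simulated failures cause modest losses,
    # a few cause very large ones (heavy-tailed lognormal, in dollars).
    return random.lognormvariate(16, 1.5)

def loss_at_exceedance(n_sims: int, exceed_prob: float) -> float:
    """Loss level exceeded with roughly `exceed_prob` probability."""
    losses = sorted(simulate_loss() for _ in range(n_sims))
    idx = min(int((1 - exceed_prob) * n_sims), n_sims - 1)
    return losses[idx]

# FAA's 1-in-10-million threshold would require tens of millions of
# simulations; a smaller run here just shows the mechanics.
print(f"Loss exceeded in ~0.1% of runs: ${loss_at_exceedance(100_000, 0.001):,.0f}")
```

In contrast to this distribution-based approach, FAA's method estimates a single loss scenario.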
However, they expressed concern about whether the more sophisticated approach would be more accurate, given the great uncertainty about the assumptions, such as the probability and size of potential damages, that must be made with any model. Also, industry experts told us that a significant cost factor in catastrophe modeling is creating and maintaining a detailed database of exposed properties. One expert told us that in order for FAA to do such modeling, it would need to purchase a property exposure database, which could cost hundreds of thousands of dollars. Experts also disagreed on how feasible it would be to mathematically model the potential damages associated with space launches. One expert thought such modeling would not be credible because the necessary knowledge of the factors that can influence a space launch is not at the same level as the more developed research for modeling hurricanes, for example. Another expert thought that it would be possible to develop credible space launch simulation models. Another expert stated that such models have not been developed to date because of the government-provided indemnity coverage; this expert believed that if such coverage were the responsibility of the private sector, the necessary models might be developed. FAA officials also said that they believed the maximum probable loss methodology is reasonable and produces conservative results for several reasons. First, FAA officials described a 2002 study on aviation casualty costs to support FAA’s use of a $3 million casualty figure for its calculation. Use of a casualty estimate that is based on 2002 data, however, still raises questions about whether this figure is outdated, which could result in underestimating the cost of casualties. 
Second, to support basing the potential cost of property damage on the potential cost of casualties, FAA officials said that they have conducted internal analyses using alternative methodologies—including some that assessed property values in the vicinity of launches—and compared them to their current methodology. In each case, officials said that the current methodology produced higher, or more conservative, maximum probable losses. We were unable to review or verify these analyses, however, because FAA officials said that these analyses were done informally and were not documented. FAA officials acknowledged that updating the $3 million casualty figure and conducting analyses of potential property damage (rather than using a casualty cost adjustment factor of 50 percent) might produce more precise estimates of maximum probable losses. However, they said that because the probabilities assigned to such losses are still rough estimates, whether taking these actions would increase the accuracy of their maximum probable loss calculations is uncertain. Overall, they said, use of more sophisticated methodologies would have to be balanced with the additional costs to both FAA and the launch companies that would result from requiring and analyzing additional data. For example, a new methodology might require either FAA or the launch company to gather current property information, and might necessitate that FAA construct a statistical model for analyzing potential losses. The same officials noted that they periodically evaluated their current maximum probable loss methodology, but acknowledged that they have not used outside experts or risk modelers for this purpose. They agreed that such a review could be beneficial, and that involvement of outside experts might be helpful for improving their maximum probable loss methodology. 
FAA’s 2007 review of potential alternatives identified a number of criteria for a sound maximum probable loss methodology that could be useful in such a review. These included, among other things, that the process use a valid risk analysis, be logical and lead to a rational conclusion, and avoid being overly or insufficiently conservative. A sound maximum probable loss calculation can be beneficial to both the government and launch companies because it can help ensure that the government is not exposed to greater costs than intended (such as might occur through an understated maximum probable loss) and help ensure that launch companies are not required to purchase more insurance coverage than necessary (such as might occur through an overstated maximum probable loss). In our July 2012 report, we recommended that FAA take steps to better ensure the accuracy of the process it uses to determine the amount of insurance coverage required for an FAA launch license by reviewing and periodically reassessing its maximum probable loss methodology, including the reasonableness of the assumptions used. For these reviews, we recommended that FAA consider using external experts such as risk modelers, document the outcomes, and adjust the methodology, as appropriate, considering the costs. In January 2014, FAA officials told us about their recent efforts to reassess the methodology. First, officials have begun to implement an internal effort to develop an improved methodology for determining maximum probable loss. While budget constraints limited progress in 2013 on work with a contractor on the new methodology, the passage of the Consolidated Appropriations Act of 2014 in January 2014 provides FAA with resources to fund the effort, which officials say they intend to begin in March 2014. Second, FAA solicited input from FAA’s Commercial Space Transportation Advisory Committee on how to best conduct an external review of their methodology. 
In January 2014, FAA officials said they held an initial meeting in January 2013 to begin this process, but as of January 2014, they still did not have funds available to solicit an outside review. In our prior review, some insurers and brokers suggested that the maximum amount of private sector third party liability coverage the industry was willing to provide was generally around $500 million per launch. This amount, or capacity, is determined by the amount of their own capital that individual insurers are willing to risk by selling this type of coverage. According to some insurers and brokers with whom we spoke, commercial space launch third party liability coverage is a specialized market involving a relatively small number of insurers that each assumes a portion of the risk for each launch. One broker said that no launch company thus far had pursued private sector insurance protection above $500 million. Two insurers said that there might be slightly more coverage available beyond $500 million, and one said that up to $1 billion per launch in liability coverage might be possible in the private insurance market. For this statement, we contacted one of those insurers, who indicated that current capacity is still approximately $500 million. The cost to launch companies for purchasing third party liability insurance, according to some brokers and one insurer, was approximately 1 percent or less of the total coverage amount. According to FAA data on commercial launches, the average maximum probable loss is about $82 million. As a result, in the absence of CSLA indemnification, insurers could still provide some of the coverage currently available through the government under CSLA. For example, if the maximum probable loss for a launch is $100 million and the insurance industry is willing to offer up to $500 million in coverage, the private market could potentially provide $400 million in additional coverage. 
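The coverage-gap arithmetic in the example above is simple enough to sketch. The $500 million capacity and the roughly 1 percent premium rate are the figures brokers and insurers cited in the statement; the $100 million maximum probable loss is the statement's own example.

```python
MARKET_CAPACITY = 500_000_000  # ~max per-launch third party capacity cited
PREMIUM_RATE = 0.01            # ~1% of total coverage, per brokers cited

def additional_private_coverage(mpl: int) -> int:
    # Coverage the private market could sell above the FAA-required MPL amount.
    return max(MARKET_CAPACITY - mpl, 0)

def premium_cost(coverage: int) -> float:
    # Rough premium for a given amount of third party liability coverage.
    return coverage * PREMIUM_RATE

print(additional_private_coverage(100_000_000))  # -> 400000000
print(premium_cost(100_000_000))                 # -> 1000000.0
```

At the $82 million average maximum probable loss FAA reports, the implied gap between required coverage and stated market capacity is even larger.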
According to some insurers, brokers, and insurance experts with whom we spoke, there were a number of reasons why private sector insurers were generally unwilling to offer more third party liability coverage than $500 million per launch. First, these brokers and insurers said that worldwide capacity for third party liability coverage was generally limited to $500 million per launch, which some considered a significant amount of coverage and a challenging amount to put together—particularly given that the number of insurers in the space launch market was relatively small. Second, according to these same officials, insurers were unwilling to expose their capital above certain amounts for coverage that at least currently brings in small amounts of premium relative to the potential payouts for losses. For example, they said that losses from a catastrophic launch accident could exceed many years of third party liability policy premiums and jeopardize insurers’ solvency. Third, according to some insurers and brokers with whom we spoke, to have sufficient capital to pay for losses above $500 million per launch would require insurers to charge policy premiums that would likely be unaffordable for space launch companies. The current amount of private market capacity could change due to loss events and changing market conditions, according to some insurance industry participants. Some insurers and brokers said that a launch failure could affect the level and cost of coverage offered, and that a launch failure with significant losses could quickly raise insurance prices and reduce capacity, potentially below levels required by FAA’s maximum probable loss calculation. However, one risk expert suggested that a space launch failure would likely cause liability insurance rates to rise and that this might encourage insurers and capital to enter the space launch market and cause liability insurance capacity to increase. According to FAA, insurers have paid no claims for U.S. 
commercial launches to date, but they have paid some relatively small third party claims for U.S. military and NASA launch failures. For example, according to an insurance broker, a U.S. Air Force launch failure in 2006 resulted in property damage of approximately $30 million. According to NASA, the Space Shuttle Columbia accident in 2003 resulted in property damage of approximately $1.2 million. Two brokers said that given the low number of launches and low probability of catastrophic events, total worldwide premiums for space liability coverage are approximately $25 million annually, amounts insurers believe are adequate to cover expected losses. However, if a large loss occurs, according to two insurers, they would likely increase their estimates of the potential losses associated with all launches. Under CSLA, launch companies must purchase coverage to meet FAA’s maximum probable loss amount or purchase the maximum amount of coverage available in the world market at reasonable cost, as determined by FAA. The potential cost to the government could increase if losses caused insurance prices to rise and insurance amounts available at reasonable cost to decrease. Some insurers and brokers also said that the amount of insurance the private market is willing to sell for third party liability coverage for space launches can also be affected by changes in other insurance markets. For example, large losses in aviation insurance or in reinsurance markets could decrease the amount of capital insurers would be willing to commit to launch events because losses in the other markets would decrease the total pools of capital available. 
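The insurers' solvency concern can be checked with back-of-envelope arithmetic: at roughly $25 million in worldwide annual premiums (the brokers' estimate cited above), a single large loss consumes many years of premium income. The $500 million loss below is hypothetical, chosen to match the capacity figure discussed earlier.

```python
ANNUAL_WORLDWIDE_PREMIUMS = 25_000_000  # brokers' estimate for space liability

def years_of_premiums_to_cover(loss: int) -> float:
    # Number of years of total worldwide premium income one loss would consume.
    return loss / ANNUAL_WORLDWIDE_PREMIUMS

print(years_of_premiums_to_cover(500_000_000))  # -> 20.0
```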
While we had not conducted specific work to analyze the feasibility of alternative approaches for providing coverage currently available through CSLA, FAA and others had looked at possible alternatives to CSLA indemnification, and we have examined different methods for addressing the risk of catastrophic losses associated with natural disasters and acts of terrorism. These events, like space launch failures, have a low probability of occurrence but potentially high losses. Some methods involve the private sector, including going beyond the traditional insurance industry, in providing coverage, and include the use of catastrophe bonds or tax incentives to insurers to develop catastrophe surplus funds. Other methods aid those at risk in setting aside funds to cover their own and possibly others’ losses, such as through self-insurance or risk pools. Still other methods, such as those used for flood and terrorism insurance, involve the government in either providing subsidized coverage or acting as a backstop to private insurers. Use of any such alternatives could be complex and would require a systematic consideration of their feasibility and appropriateness for third party liability insurance for space launches. For example, according to a broker and a risk expert, a lack of loss experience complicates possible ways of addressing commercial space launch third party liability risk, and according to another risk expert, any alternative approaches for managing this risk would need to consider key factors, including the number of commercial space launch companies and insurers and annual launches among which to spread risk and other associated costs; lack of launch and loss experience and its impact on predicting and measuring risk, particularly for catastrophic losses; and potential cost to private insurers, launch companies and their customers, and the federal government. As such, alternatives could potentially require a significant amount of time to implement. 
Planned increases in manned commercial launches raise a number of issues that have implications for the federal government’s indemnification policy for third party liability, according to insurance officials and experts with whom we spoke. NASA expects to begin procuring manned commercial launches to transport astronauts to the ISS in 2017. In addition, private companies are also developing space launch vehicles that could carry passengers for space tourism flights. First, the number of launches and reentries covered by federal indemnification will increase with NASA’s planned manned launches, which will be FAA-licensed commercial launches. NASA expected to procure from private launch companies 2 manned launches per year to the ISS from 2017 to 2020. In addition, the development of a space tourism industry may also increase the number of launches and reentries covered by federal indemnification, but the timing of tourism launches and reentries is uncertain. According to insurance company officials with whom we spoke, the potential volume of manned launches and reentries for NASA and for space tourism could increase the overall amount of insurance coverage needed by launch companies, which could raise insurance costs, including those for third party liability. Increasing the volume of launches and reentries also increases the probability of a catastrophe occurring, and any accident that occurs could also increase future insurance costs, according to insurance company officials with whom we spoke. A catastrophic accident could also result in third party losses over the maximum probable loss, which would invoke federal indemnification. Second, because newly developed manned launch vehicles have less launch history, they are viewed by the insurance industry as more risky than “legacy” launch vehicles. 
Insurance company officials told us that a launch vehicle such as United Launch Alliance’s Atlas V, which launches satellites and may be used for future manned missions, is seen as less risky than newer launch vehicles, such as SpaceX’s Falcon 9, which could also be used for manned missions. According to insurance company officials with whom we spoke, they expect to charge higher insurance premiums for newly developed launch vehicles than legacy launch vehicles given their different risk profiles. Insurance company officials’ opinions varied as to when a launch vehicle is deemed reliable, ranging from 5 to 10 successful launches. They also told us that whether vehicles are manned is secondary to the launch vehicle’s history and the launch’s trajectory—over water or land—in determining risk and the price and amount of third party liability coverage. Third, having any people on board a space vehicle raises issues of informed consent and cross waivers, which could affect third party liability and the potential cost to the federal government. CSLA requires passengers and crew on spaceflights to be informed by the launch company of the risks involved and to sign a reciprocal waiver of claims (also called a cross waiver) with the federal government—which means that the party agrees not to seek claims against the federal government if an accident occurs. CSLA also requires cross waivers among all involved parties in a launch. Two key issues dealing with cross waivers include the estates of spaceflight passengers and crew and limits on liability for involved parties. The estates of spaceflight passengers and crew, which are considered third parties to a launch, are not covered by the informed consent and cross waiver of claims, according to two insurance companies and one legal expert. 
Although an insurance company said that it would be difficult for estates to seek damages in case of an accident, the legal expert said that the informed consent requirement does not address future litigation issues. Officials from two insurance companies and one expert told us that they expect spaceflight passengers to be high-income individuals, which could result in large insurance claims by estates of the passengers, as determination of the amount of claims is based on an individual’s expected earning capacity over his or her lifetime. According to two insurance companies and two legal experts, requiring cross waivers among passengers, crew, the launch company, and other involved parties may not minimize potential third party claims as they would not place limitations on liability. An insurance company and a legal expert stated that, without a limitation on liability, insurance premiums for third party and other launch insurance coverage could increase as the same small number of insurance companies insures passengers, crew, and launch vehicles, as well as third parties to a launch. According to FAA, putting a limitation on spaceflight passenger liability could foster the development of the commercial space launch industry through lower costs for insurance and liability exposure. Liability exposure and the related litigation impose costs on industries, and a limitation on liability shifts the risk to spaceflight passengers, who have been informed of the launch risks. If limitations on liability were set by federal legislation, they could conflict with state law because at least five states had their own space liability and indemnity laws limiting liability. Launch and insurance companies believe that a limit or cap on passenger liability could decrease uncertainty and consequently decrease the price of insurance, according to an FAA task force report. 
As previously discussed, the potential cost to the government depends on the accuracy of the maximum probable loss calculation, which assesses a launch’s risk. If the calculation is understated, then the government’s exposure to liability is higher. Thus, whether the launch vehicle is newly developed or manned, the effect on the government’s potential cost for third party claims is still based on how accurately the maximum probable loss calculation assesses launch risks. FAA officials told us that they intend to use the same maximum probable loss assessment method for manned launches as they currently do with unmanned launches. Officials from the insurance industry and space launch companies and an expert told us that a gap in federal indemnification was the lack of coverage of on-orbit activities—that is, activities not related to launch or reentry, such as docking with the ISS and relocating a satellite from one orbit to another—but they did not agree on the need to close this gap. FAA licenses commercial launches and reentries, but does not license on-orbit activities. Federal indemnification only applies to FAA-licensed space activities. NASA’s commercial manned launches to the ISS that will involve on-orbit activities, including docking with the ISS, will be subject to the cross waivers of liability required by agreements with participating countries. This cross waiver is not applicable when CSLA is applicable, such as during a licensed launch or reentry, and it does not address liability for damage to non-ISS parties such as other orbiting spacecraft. Claims between NASA and the launch company are not affected by the ISS cross waiver and are historically addressed as a contractual agreement. In addition, one commercial space launch company’s operations will only have suborbital launches and reentries and no on-orbit activities that require regulation. 
Officials from two launch companies stated that they did not believe that on-orbit activities need to be regulated by FAA or that federal indemnification coverage should be provided. However, one insurer noted that other proposed manned launches—such as one company’s planned on-orbit “hotel”—will not be NASA related and therefore will not be covered by any regulatory regime. An expert noted that such a proposal for an on-orbit hotel remains an open question regarding regulation and liability exposure. In addition, the expert noted that federal oversight of on-orbit activities may be needed to provide consistency and coordination among agencies that have on-orbit jurisdiction. He pointed out that the Federal Communications Commission and the National Oceanic and Atmospheric Administration have jurisdiction over their satellites and NASA has jurisdiction over the ISS. Thus, according to the expert, there should be one federal agency that coordinates regulatory authority over on-orbit activities. At the time of our July 2012 report, FAA senior agency officials said that they might seek statutory authority over on-orbit activities but as of January 2014 have not done so. An insurer told us that having FAA in charge from launch to landing would help ensure that there were no gaps in coverage. According to this insurer, this would help bring stability to the insurance market in the event of an accident, as involved parties would be clear on which party is liable for which activities. However, if FAA obtains authority to license on-orbit activities, the potential costs to the federal government for third party claims may increase as its exposure to risk increases. Based on work for our July 2012 report, the actual effects that eliminating CSLA indemnification would have on the competitiveness of U.S. commercial launch companies are unknown. 
For example, we do not know how insurance premiums, other costs, or the availability of coverage might change. In addition, we do not know whether or to what extent launch customers might choose foreign launch companies over U.S. companies. Furthermore, it is difficult to separate out the effects of withdrawing indemnification on the overall competitiveness of the U.S. commercial space launch industry. Many factors affect the industry’s competitiveness, including other U.S. government support, such as research and development funds, government launch contracts, and use of its launch facilities, in addition to the third party indemnification. While the actual effects on competition of eliminating CSLA indemnification are unknown, several launch companies and customers with whom we spoke said that in the absence of CSLA indemnification, increased risk and higher costs would directly affect launch companies and indirectly affect their customers and suppliers. The same participants said that two key factors—launch price and launch vehicle reliability—generally determine the competitiveness of launch companies. According to two launch customers, launch prices for similar missions could vary dramatically across countries. For example, at the time of our July 2012 report, two customers said that a similar launch might cost about $40 million to $60 million with a Chinese launch company, about $80 million to $100 million with a French launch company, and approximately $120 million with a U.S. launch company. However, another U.S. launch company told us that it was developing a vehicle for a similar launch for which it intended to charge about $50 million. Other considerations also would be involved in selecting a launch company, according to launch customers with whom we spoke. For example, some said that export restrictions for U.S. customers could add to their costs or prevent them from using certain launch companies. 
One launch customer also said that it considered the costs of transporting the satellite to the launch site as well as other specific aspects of a given launch. Launch company officials said that the lack of government indemnification would decrease their global competitiveness by increasing launch costs. Launch company officials said their costs would increase as a result of their likely purchase of greater levels of insurance to protect against the increased potential for third party losses, as the launch companies themselves would be responsible for all potential third party claims, not just those up to the maximum probable loss amount. As previously discussed, whether the private insurance market has the capacity to provide coverage at levels currently provided by the government, or at what price they might sell such coverage, is uncertain. Some launch company officials said that their costs may also increase if their suppliers decided to charge more for their products or services as a result of being at greater risk from a lack of CSLA indemnification. That is, to compensate for their greater exposure to potential third party claims, some suppliers might determine that they need to charge more for their products to cover the increased risks they are now assuming. Some launch companies told us that they would likely pass additional costs on to their customers by increasing launch prices. Two launch customers told us that in turn, they would pass on additional costs to their customers. Several also told us that they might increase the amount of their own third party liability insurance, another cost they might pass on to their customers. Two said they might be more likely to choose a foreign provider if the price of U.S. launches rose. According to launch companies and customers we spoke with, ending CSLA indemnification would also decrease the competitiveness of U.S. 
launch companies because launch customers would be exposed to more risk than if they used launch companies in countries with government indemnification. For example, officials from several launch companies and customers said that if some aspect of the launch payload is determined to have contributed to a launch failure, they could be exposed to claims for damages from third parties. Launch customers are currently protected from such claims through the CSLA indemnification program. Several launch customers with whom we spoke said that without CSLA indemnification they might be more likely to use a launch company in a country where the government provides third party indemnification. According to launch companies with whom we spoke, ending CSLA indemnification could also have other negative effects. For example, some said that the increased potential for significant financial loss for third party claims could cause launch companies, customers, or suppliers to reassess whether the benefits of staying in the launch business outweigh the risks. If some companies decided it was no longer worthwhile to be involved in the launch business, it could result in lost jobs and industrial capacity. Lastly, one industry participant pointed out that some suppliers, such as those that build propulsion systems, have to maintain significant amounts of manufacturing capacity whether they build one product or many. If there are fewer launches, the cost of maintaining that capacity will be spread among these fewer launches, resulting in a higher price for each launch. To the extent that the federal government is a customer that relies on private launch companies for its space launch needs, it too could face potentially higher launch costs. Although the number of commercial launches by U.S. companies has been lower in the past few years than in years prior, commercial space is a dynamic industry with newly developing space vehicles and missions. 
With the termination of the shuttle program, NASA has begun to procure cargo delivery to the ISS from private launch companies and intends to use private companies to carry astronauts to the ISS starting in 2017. In addition, private launch companies have been developing launch vehicles that will eventually carry passengers as part of an emerging space tourism industry. Both of these developments would increase the number and type of flights eligible for third party liability indemnification under CSLA. As the industry changes and grows, continually assessing federal liability indemnification policy to ensure that it protects both launch companies and the federal government will be important. This assessment would be impacted by the amount of coverage the insurance industry is willing to provide for space launches, which depends on a number of factors including the number of launches and reentries and insurers’ ability to evaluate the underlying risks. To the extent insurance capacity might increase, it could reduce the need for indemnification under CSLA. It is also possible, however, that certain events, such as a launch failure with large losses, could reduce insurance industry capacity for this type of coverage. Review of potential alternative means for addressing the risks associated with space launches, while beyond the scope of our work, would also be an important part of any ongoing assessment of CSLA indemnification. Several factors raise questions about FAA’s methodology for determining the maximum probable loss for a commercial space launch, which determines the amount of insurance coverage launch companies must buy and the amount above which government indemnification begins. 
During work for our July 2012 report, FAA said it believed its approach was conservative, but acknowledged that parts of the maximum probable loss (MPL) methodology have not been updated, including a dollar amount for estimating space launch losses from casualties and fatalities, which the insurance industry says is outdated. In addition, FAA used this estimate of losses from casualties and fatalities as the basis for estimating potential property damage, an approach that could underestimate property losses. Moreover, FAA had not had outside experts and risk modelers review its methodology. FAA officials told us that subsequent to our prior report they have taken some initial steps toward revising and updating their MPL methodology, but that budget constraints have prevented further progress in the short term. FAA officials have recently suggested that the Consolidated Appropriations Act of 2014 provides the resources to assess the MPL methodology, possibly as soon as March 2014. We agree with FAA that the benefits of developing and implementing a potentially more comprehensive MPL methodology need to be balanced against the possible increased costs to the agency and to launch companies. However, the importance of a sound calculation makes review of the current methodology a worthwhile effort. An inaccurate maximum probable loss value can increase the cost to launch companies by requiring them to purchase more coverage than is necessary, or result in greater exposure to potential cost for the federal government. Thus, we continue to believe that our July 2012 recommendation that FAA periodically review and update as appropriate its methodology for calculating launch providers' insurance requirements has merit and should be fully implemented. Chairman Palazzo, Ranking Member Edwards, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. 
If you or your staff have any questions about this testimony, please contact Alicia Puente Cackley at (202) 512-8678 or [email protected] or Gerald L. Dillingham, Ph.D. at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contacts named above, Teresa Spisak and Patrick Ward (Assistant Directors), Chris Forys, David Hooper, Maureen Luna-Long, Sara Ann Moessbauer, and Steve Ruszczyk made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A catastrophic commercial launch accident could result in injuries or property damage to the uninvolved public, or "third parties." In anticipation of such an event, a launch company must purchase insurance for each launch in an amount calculated by FAA; the federal government is potentially liable for claims above that amount up to an additional $1.5 billion, adjusted for inflation, subject to congressional appropriations. As of 2013, the inflation-adjusted amount is about $3 billion. CSLA provides for this payment, called indemnification. This testimony is based on a July 2012 report and January 2014 updates to FAA launch data, FAA progress on implementing GAO recommendations, and insurance industry capacity. It discusses (1) the U.S. government's indemnification policy compared to policies of other countries, (2) the federal government's potential costs for indemnification, (3) the ability and willingness of the insurance market to provide additional coverage, and (4) the effects of ending indemnification on the competitiveness of U.S. launch companies. According to studies, the United States in 2012 provided less commercial space launch indemnification for third party losses than China, France, and Russia. These countries put no limit on the amount of government indemnification coverage, which in the United States is limited by the Commercial Space Launch Act (CSLA). Governments' commitments to pay have never been tested because there has not been a third party claim that exceeded a private launch company's insurance. The potential cost to the federal government of indemnifying third party losses is currently unclear. This is because it depends in part on the method used by the Federal Aviation Administration (FAA) to calculate the amount of insurance that launch companies must purchase, which may not be sound. FAA has used the same method since 1988 and has not updated crucial components, such as the cost of a casualty. 
Estimating probable losses from a rare catastrophic event is difficult, and insurance industry officials and risk modeling experts said that FAA's method was outdated. An inaccurate calculation that understates the amount of insurance a launch provider must obtain would increase the likelihood of costs to the federal government; a calculation that overstates the amount of insurance needed would raise the cost of insurance for the launch provider. FAA officials said that their method was reasonable and conservative, but agreed that a review could be beneficial and that involving outside experts might be helpful. FAA officials said that subsequent to GAO's 2012 report they have taken initial steps to improve their methodology for estimating probable losses. The insurance market is generally willing and able to provide up to about $500 million per launch as coverage for third party liability, according to industry representatives. Because the amount of insurance FAA requires launch providers to obtain averages about $82 million per launch, and coverage available through CSLA is about $3 billion above that, insurers could provide some of the coverage currently available through CSLA. However, the amount and price of insurance that could be provided could change quickly if a large loss were to occur, according to insurance industry representatives. The effects on global competition from the United States eliminating CSLA indemnification are unknown. However, launch companies and customers GAO contacted believe that ending federal indemnification could lead to higher launch prices for U.S.-based launch companies, making them less price competitive than foreign launch companies. Although the cost of third party liability insurance for launch companies has been about 1 percent of the dollar amount of coverage they purchased, how much this cost might increase in the absence of federal coverage is not clear. 
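The layered liability structure described above can be sketched with rough arithmetic. This is a simplified illustration using figures from this statement (the roughly $82 million average FAA-required insurance and the roughly $3 billion inflation-adjusted CSLA cap); the tier-allocation logic is an assumption for illustration, not the statutory formula.

```python
def allocate_third_party_loss(loss, required_insurance, indemnification_cap):
    """Rough sketch of the layered CSLA liability structure.

    Tier 1: the launch company's FAA-required insurance pays first.
    Tier 2: government indemnification covers claims above the insurance
            amount, up to the inflation-adjusted cap (about $3 billion as
            of 2013), subject to congressional appropriations.
    Tier 3: any remainder above the cap falls back outside the program
            (assumed here to be borne by the launch company).
    Returns (tier1, tier2, tier3) in dollars.
    """
    tier1 = min(loss, required_insurance)
    tier2 = min(max(loss - required_insurance, 0.0), indemnification_cap)
    tier3 = max(loss - required_insurance - indemnification_cap, 0.0)
    return tier1, tier2, tier3

# Hypothetical $500 million loss against the average $82 million policy
# and the ~$3 billion cap: the insurer pays $82M, indemnification $418M.
print(allocate_third_party_loss(500e6, 82e6, 3e9))
```

With these figures, insurers' stated willingness to cover up to about $500 million per launch sits well inside the band that indemnification currently occupies, which is why some of the coverage now provided through CSLA could in principle shift to the private market.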
Launch customers said that price and vehicle reliability were key factors in their choice of a launch company. Launch companies reported that additional costs would be passed along to customers, but whether this increase alone would be sufficient reason for a launch customer to choose a foreign company over a U.S. company is not clear. GAO continues to believe that its July 2012 recommendation that FAA periodically review and update as appropriate its methodology for calculating launch providers' insurance requirements has merit and should be fully implemented.
The Department of Education is responsible for overseeing state implementation of NCLBA, which amended and reauthorized the Elementary and Secondary Education Act. Title I of this act authorizes funds to states for local school districts with high concentrations of children from low-income families to improve the academic achievement of students failing or at risk of failing to meet state standards. Title I is the single largest federal program supporting education in kindergarten through 12th grade, supplying over $12 billion in federal funds in 2004. These funds are designed to supplement the instructional services and support that districts and schools provide. Title I and other federal funding represent about 8 percent of total spending on elementary and secondary public education, with the remaining 92 percent provided primarily by states and localities. Title I funds are distributed by formula to state education agencies, which retain a share but pass through most of the funds to school districts. Districts with at least a minimum number and percentage of low-income students receive a share of Title I funds. The districts are required to distribute Title I funds first to schools with high poverty rates—over 75 percent—and then to eligible schools in rank order of poverty either districtwide or within grade spans. Because school enrollment numbers and demographics may vary from one year to the next and because districts have some discretion in how many and which schools receive Title I funds, the status of schools as Title I or not Title I can vary from one year to the next. Approximately 25 million students were enrolled in schools eligible for Title I funds in school year 2002-2003 out of a total of about 49 million students in all schools nationwide, according to Education. Stronger accountability for educational results is one of several education reform principles embodied in NCLBA and it builds on requirements in place under prior law. 
Prior to NCLBA, states were expected to have accountability systems that included standards for what students should learn and tests every year in certain grade levels to measure their knowledge of reading and mathematics. Each year, increasing percentages of students were expected to demonstrate their proficiency on these tests, and schools were judged on their ability to make adequate yearly progress in educating students to the state’s standards—referred to in this report as meeting their yearly performance goals. Title I schools that did not meet their goals for two consecutive years were to be designated for improvement, provided technical assistance, and required to implement improvement plans. States were at various stages of implementation when NCLBA was enacted, so some states had been identifying schools for improvement for several years while others were just beginning the process. Enactment of NCLBA strengthened accountability requirements by specifying timetables for school improvement and by holding all public schools, whether or not they receive Title I funds, accountable for the academic performance of various subgroups of students. For example, schools must reach yearly performance goals set by states that will result in 100 percent of students meeting state proficiency standards by school year 2013-2014. In addition to meeting the state’s performance goals in general, schools are responsible for meeting those goals for specified subgroups of students who (1) are economically disadvantaged, (2) represent major racial and ethnic minorities, (3) have disabilities, or (4) are limited in English proficiency. If any subgroup does not meet the target, the school is identified as not having made its yearly performance goal. 
While NCLBA requires that all 94,000 public schools in the nation be held accountable for their performance, it requires specific actions or corrective interventions only for Title I schools that repeatedly miss their yearly performance goals. Two kinds of immediate interventions are required for Title I schools that have not met their performance goals for two consecutive years. On the one hand, plans are set in motion to improve the school’s performance. At the same time, students must be given the opportunity to transfer to other schools under the school choice option. Depending on how often schools continue to miss their goals, other required actions range from offering students supplemental educational services, such as after-school tutoring, to completely restructuring schools. See appendix II for further details on the specific actions required in each year. The first year that Title I schools do not meet performance goals, no specific actions are required under NCLBA. However, if the goal is missed the next year, districts generally must offer parents of students attending these schools the choice to transfer their child to another school. The district must provide transportation to the new school, within limits, and continue to pay for transportation until the school from which the student transferred is no longer identified for choice. Schools are no longer identified for choice when they have met their yearly performance goals for at least 2 consecutive years. Districts are required by federal regulations to offer parents at least two schools from which to choose, if available, and these schools may be any public school that is not itself currently identified for choice. Thus, under NCLBA offered schools could include Title I schools that have missed their yearly performance goals for a single year or any school that does not receive Title I funds, regardless of its performance. 
However, states could further limit the schools offered as transfer options, for example, by prohibiting transfers to non-Title I schools that have not met their yearly performance goals. Under circumstances where no viable transfer options exist—as in districts with only one school serving particular grade levels or where all schools in the district have repeatedly missed their performance goals—districts are required, to the extent practicable, to make arrangements with other districts to accept their transfer students. NCLBA requires that districts notify parents of the choice option by the first day of the school year immediately following the test administration that resulted in the school being identified for choice. For example, if tests given in spring 2003 resulted in the school being identified for choice, then the option had to be offered to parents by the first day of school of the 2003-2004 school year. Notices to parents must be in an understandable and uniform format and, to the extent practicable, in a language that parents can understand. These notices must explain why the school was identified for choice and how it compares with others in the district and state. In addition, federal regulations require that the notice include information on the academic achievement of the schools offered as transfer options. Districts are not required to give parents their first choice among the transfer options provided, but may not deny transfer requests based on lack of physical capacity, such as lack of space within a building or classroom, according to federal regulations. When deciding which schools to offer as transfer options, districts can consider the amount of available capacity, but they must offer options for all students enrolled in schools identified for choice. When reviewing transfer applications, making school assignments, and arranging for transportation, districts are required to give priority to the lowest-achieving low-income students. 
In each of the first 2 school years following enactment of NCLBA, from 10 to 12 percent of schools that received federal funds under Title I were identified for school choice. Several million students were enrolled in the schools identified for choice and were thus eligible to transfer. About 31,000 students, representing 1 percent of those eligible, transferred in the second year, school year 2003-2004. Although Education has recently begun to collect information on the number of transferring students, little is known about their demographic or academic characteristics. Our analysis of data from one district showed that proportionately fewer minority and low-income students transferred, compared with students in the same schools who did not transfer. In each of the first 2 school years of NCLBA, about 1 in 10 Title I schools—about 1 in 20 public schools nationwide—was identified for school choice. About 5,300 schools, attended by 3 million children, were identified for choice in the first year of NCLBA. As shown in figure 1, the total number of schools identified for choice increased to about 6,200 in year two. Because schools must meet their performance goals for 2 consecutive years before they are no longer identified for choice, many of the same schools may have been included in the total number for both the first and second years. As figure 2 shows, Title I schools identified for choice enrolled larger proportions of minority students and students from low-income families than other Title I schools. About 60 percent of all schools identified for choice were elementary schools. However, this proportion is smaller than might be expected, given that 71 percent of all Title I schools are elementary schools. As figure 3 shows, proportionately more middle and high schools were identified for choice. Most schools identified for choice in school year 2003-2004 were located in urban and suburban areas. 
Although 15 percent of all Title I schools were located in rural areas, only 11 percent of the Title I schools identified for choice were in rural areas. Figure 4 shows state variation in the proportion of schools identified for choice. In the majority of states, 10 percent or fewer of Title I schools were identified for choice in 2003-2004, but in some states a much larger percentage was identified. One state, Wyoming, had no schools identified for choice in 2003-2004. Among the states with relatively few schools identified for choice were some of the nation’s most rural states, including Maine, Mississippi, and Nebraska, but also some more populous states such as Florida and Texas. By contrast, in 22 states the percentage of Title I schools required to offer choice ranged from 11 percent to 48 percent. Among these were several of the nation’s most populous states, including California, Illinois, and New York, but also one of the most rural—Alaska. Georgia and Hawaii each had 40 percent or more of their schools identified for choice, higher than any other state. See appendixes III and IV for state details for each year. A number of factors contribute to state variations in the proportion of schools identified for choice, including differences in school populations and state accountability systems. Under NCLBA, if a school contains a minimum number of students in specific groups—low-income, major racial and ethnic minorities, students with disabilities, and limited English proficient—schools are held accountable for the academic outcomes of those groups, in addition to academic outcomes of the entire school. Large or diverse schools are likely to have more student groups containing the state-defined minimum number of students, and consequently have more performance targets. 
Because it is harder for schools with many targets to meet their overall performance goals, states with larger or more diverse schools may be more likely to have a higher percentage of schools miss their targets and be identified for choice. Characteristics of states’ accountability systems also contribute to the variation among states. For example, states use different standards and set different annual progress rates for reaching 100 percent proficiency. In addition, some states use smaller minimum student group sizes than other states. The smaller the size of the group used, the more likely a school will include additional student groups in accountability, increasing the number of performance targets the school must meet. About 19,000 students transferred under the NCLBA school choice option in school year 2002-2003, the first year, and an additional 31,000 students transferred in the second year. As illustrated in figure 5, this number transferring in the second year represented about 1 percent of the students who were eligible. Across states, the number of eligible students who transferred under NCLBA in 2003-2004 ranged from zero in 6 states to over 7,000 in one state. States also varied in the extent to which eligible students exercised the option and transferred. Oregon reported the highest proportion of eligible students transferring at 17 percent, followed by Florida with 6 percent. The remaining states had less than 5 percent of eligible students transfer. Further, states with more students eligible for choice under NCLBA did not necessarily have more students use the transfer option. For example, although Hawaii had more students eligible for choice, Colorado had about twice as many students transfer. The number of eligible students transferring in each year for each state is detailed in appendix V. 
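The mechanism just described, in which a smaller state-defined minimum group size pulls more subgroups into accountability and thereby adds performance targets, can be sketched as follows. The school, its subgroup labels, and its enrollment figures are hypothetical.

```python
def accountable_subgroups(enrollment_by_group, minimum_n):
    """Return the subgroups a school is held accountable for: those whose
    enrollment meets the state-defined minimum group size.
    (Hypothetical illustration of the mechanism described above.)"""
    return [group for group, n in enrollment_by_group.items() if n >= minimum_n]

# Hypothetical school with enrollment counts for five subgroups:
school = {
    "economically disadvantaged": 120,
    "Hispanic": 45,
    "Black": 38,
    "students with disabilities": 22,
    "limited English proficient": 15,
}

# A state using a minimum group size of 40 counts 2 subgroups at this
# school; a state using 20 counts 4, so the identical school faces more
# performance targets in the second state and more ways to miss its goal.
print(len(accountable_subgroups(school, 40)))  # prints 2
print(len(accountable_subgroups(school, 20)))  # prints 4
```

Because a school misses its overall yearly goal if any accountable subgroup misses its target, the two states in this sketch would identify schools for choice at different rates even with identical student performance.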
Overall, the proportion of eligible students transferring in the most rural states was about the same as in other states; however, statewide data may mask differences within states between rural and nonrural districts. For example, Kansas, the rural state with the most students eligible for choice, provided detailed data showing how many student transfers occurred in each district. About 70 percent of Kansas transfers were in the state's three largest districts—Wichita, Shawnee Mission, and Kansas City—although only about half of the students eligible for NCLBA transfers were located in those districts. Officials in several rural states reported that rural districts faced unique challenges implementing NCLBA choice. In some rural districts, although students were eligible for choice, no transfers took place because there were no other schools in the districts that could be offered as transfer options. Where transfer options were available, sometimes the distances between schools made transfers difficult. In the 41 states that could provide student transfer data and had schools identified for choice in both years, the total number of transferring students rose by about 85 percent. This increase was driven by several states that had substantial increases, such as New York, New Jersey, and South Carolina. However, 8 states reported declining numbers of transfers, and 6 of these states also reported fewer schools identified for choice in the second year, while 2 reported increases. Little is known about the demographic characteristics and academic performance of students who transferred under NCLBA school choice in either year or reasons why parents accept or do not accept transfer opportunities. Although Education has requested state data on the number of students transferring each year, it has not collected data on the characteristics or academic performance of transferring students. 
Education officials told us that they have contracted for a major, multifaceted study of NCLBA that will examine key areas of implementation, including school choice. Two parts of the study relating to school choice are descriptive: one compares the demographics of students who choose to transfer with those of students who do not, and a second examines the reasons that parents give for their decisions about whether or not to apply for transfers. A third part of the study, still under design, will examine student achievement outcomes. This effort would examine the academic outcomes over time of transferring students in a sample of districts, but this portion of the study is not fully developed. For instance, officials said they are still exploring several possibilities for study methodology and whether demographic characteristics of these students will be included in the achievement analysis. Our analyses of 2003-2004 demographic and academic data that we were able to obtain from one district we visited showed diversity in transferring students. Of the students who transferred, 53 percent were male, 62 percent were minorities representing all the major racial and ethnic groups, and 82 percent were from low-income families as measured by their eligibility for the free or reduced-price school lunch program. In addition, 10 percent of these transferring students were English language learners and 14 percent were enrolled in special education. In general, proportionately fewer minority and low-income students transferred, compared with students who were eligible but did not transfer, as shown in table 1. Our analysis of available student performance data from state reading and math assessments showed little difference between transferring students and those not transferring. The proportion of students who met the standards was about the same for each group. 
Compared with students in the schools into which they transferred, however, transferring students were somewhat lower performing on state assessments. About 33 percent of transferring students met state reading standards, while 43 percent of the other students in the receiving schools met these standards. Similarly, about 20 percent of transferring students met state math standards, while 34 percent of the other students in receiving schools met state math standards. Transfer students were also more often from a minority background. About 62 percent of the transferring students were minorities, but about 52 percent of the students in receiving schools were minorities. Officials in most of the 8 districts we visited mentioned that they supported the NCLBA focus on improved student performance and accountability; however, they had difficulties providing school choice, primarily because of tight timeframes and insufficient capacity. To try to get notices out to parents before school started, most districts took a risk and acted on preliminary data on school performance that they received from the state in late summer because final data were not available. Parents of eligible students were presented at least two schools as transfer options, but many of these alternatives were similar to the schools students were currently attending. Some districts were not able to accommodate all transfer requests because the demand for some schools exceeded their capacity. Districts employed a variety of strategies to provide transportation to transferring students, including school buses, public transportation, and cash stipends. Although the law requires districts to notify parents of the choice option by the start of the school year, 7 of 8 districts we visited did not receive final results of school performance for the most recent year from the state in time to meet the requirement. Consequently, many used preliminary data to identify schools for choice. 
Akron was the only district that had final results from the state when notices were sent to parents. Four districts used preliminary data to identify which schools had to offer choice and notified parents before school started. A fifth district also used preliminary data but did not receive the data until after school started. Using preliminary data can put districts at risk of incorrectly identifying schools as having to offer choice and consequently misinforming parents. One district included language in the notification letter to parents explaining that the transfer offer could be revoked if final determinations by the state were different. Table 2 shows key testing and notification dates in 6 of the districts we visited. The remaining 2 districts we visited, Memphis and Fresno, did not use preliminary data from the most recent testing period, but rather used data from the previous year to determine the schools that would have to offer choice. Officials said they were aware this delay was not in accord with Education guidance but took this action to combine NCLBA choice with their voluntary choice programs, which permit all students to request transfers in the spring. Memphis officials said that they planned to change their procedures and offer school choice twice in 2004—first in the spring, during the open enrollment process, for schools that they already know must offer choice and again in the fall when they receive the results of the spring 2004 assessments. Fresno officials did not indicate that they would be changing their procedures. Given the tasks that districts must complete to offer school choice before school starts, officials expressed concerns that little could be done to mitigate these timeframe problems. Districts must first administer state tests in the spring, which are sometimes sent to contractors to be scored. 
Next, after receiving the preliminary test results from the state, districts assess the scores to verify the accuracy of the data, use these data to identify schools likely to be required to offer choice, and notify schools. Schools may appeal this decision to the state. Only after reviewing such appeals do states release final determinations of which schools are required to offer choice. Most districts we visited did not have the final performance data before school started in the fall. Figure 6 shows the timeline of events in one school district we visited; similar patterns occurred in most others. The compressed timeframe for making school status determinations and implementing the choice option left parents little time to make transfer decisions, and district and school officials expressed concerns that parents did not have adequate time to make an informed decision. In most of the districts we visited, parents had 3 weeks or less to make their transfer decisions. In addition, in districts and schools with highly mobile populations, reaching parents can be time-consuming. Akron and Memphis officials told us that many letters notifying parents of the transfer option were sent to addresses found to be incorrect. To ensure that parents had a greater chance of learning about school choice, some districts used a variety of additional notification strategies—fliers, newspaper articles, postings to the district Web site, and public meetings. In addition, they provided parents several ways to communicate their choices, such as by mail, by telephone, or in person at the district office. Officials in some districts also expressed concern that the information provided to parents was not always clear and that it may not have been a sufficient basis for their decisions. 
Letters sent to parents generally explained what it meant to be identified for choice, gave the reasons for the identification, described the process for applying for transfer, and listed the transfer school options. However, little information was provided about the transfer schools. In some districts, school officials were concerned that the wording of the letters may have been confusing. They said that parents did not always understand the meaning of the school choice option as explained in the letter and needed more time to consult with district or school officials. For example, officials in 2 districts told us that some parents misunderstood the letter and believed that they were required to transfer their child to another school. Other school officials talked about the need for parents to have additional information about specialized services and instructional support that certain schools provide in order to understand the educational implications of their decisions. Officials in one district told us that some parents who chose a transfer school later changed their minds when they found that student support services their child had received at their Title I school, such as extended day programs and after school tutoring made possible by Title I funds, were not available at the non-Title I transfer school. Schools also faced challenges in implementing choice within the timeframes, particularly in adjusting staffing and scheduling, when they learned shortly before the start of school that they would be receiving students under the NCLBA school choice program. For example, a Tacoma middle school principal said that she faced a variety of challenges when she was notified a month before school started that the school was to receive NCLBA transfer students. Based on spring predictions of the school’s student population and student needs, she had released six teachers. 
However, when notified the school was receiving 57 NCLBA transfers in the fall, she had to quickly hire two new teachers and reconfigure the schedule to include more remedial classes to accommodate the learning needs of the transferring students. In addition, school officials did not receive records for some students from the schools they left until after school started, and some students were initially placed in the wrong classes. Whenever possible, districts offered each parent at least 2 schools as transfer options, as required by federal regulations, but some districts offered more than 30 schools. The locations varied by district. Table 3 shows the number and location of elementary schools offered in the districts we visited. Some districts offered schools based on geographic location within the district and some offered schools districtwide. For example, as table 3 shows, students in Memphis attending one of the 40 schools identified for choice selected from among 3-10 transfer schools that were in the same general area of the city, while students in each of the 6 schools identified in Akron selected from a group of 33 schools spread across the district. Elementary schools offered as transfer options were more commonly selected for their proximity to sending schools than middle and high schools, which were generally offered districtwide. Although not shown in table 3, parents generally were offered fewer transfer options for middle and high school students, because districts tend to have fewer middle and high schools than elementary schools. Many schools that districts offered as transfer options had not met state performance goals in the prior year, and some were at risk themselves of having to offer choice in the following year. Among the seven districts that offered transfers, all had some schools offered as choices that had not met the state’s yearly performance goals, based on the spring 2003 assessments. 
Table 4 provides more detail on the status of schools offered as transfer options by district. Because many of these schools were Title I schools and, therefore, subject to NCLBA requirements, those that did not meet their yearly performance goals for a second consecutive year would have to offer school choice the following year. For example, in Memphis 37 Title I schools were offered as transfer options, and 29 of these had not met yearly performance goals based on spring 2003 tests. Some schools offered were not Title I schools and, therefore, were not required to offer transfers, regardless of the performance of the school. Overall, as shown in table 4 for the districts we visited, from 21 to 73 percent of all schools offered, Title I and non-Title I, had met yearly performance goals. Officials from large urban districts such as Fresno and Memphis said that they would have few schools to offer as choices if they did not offer Title I schools that had failed to meet the performance goals for only one year. Officials in some districts expressed concerns that, as the bar for meeting yearly performance goals is raised, more schools would fail and few schools would be available as transfer options. In these districts, over 80 percent of schools received Title I funds and many more students could become eligible for transfer under NCLBA. In districts such as Chicago, Fresno, and Memphis with high proportions of Title I schools, the schools offered as transfer options were often demographically similar to those attended by students eligible for transfer. Specifically, the schools offered as transfer options served many poor students and had high minority populations. As shown in table 5, for example, 34 of Fresno’s 39 schools required to offer choice—about seven-eighths—had poverty rates that exceeded 75 percent, as did over half of the 18 schools offered as transfer options. 
In contrast, other districts that we visited tended to offer more transfer options that differed demographically from the schools required to offer choice. As shown in table 6, for example, 7 of Akron’s 8 schools required to offer school choice had poverty rates that exceeded 75 percent, but less than one-third of schools offered as transfer options had such rates. See appendix VI for poverty and minority rates of schools in seven districts that we visited. Although all districts offered parents a choice of schools, officials in four districts told us that they were unable to accommodate some requests for transfers because of constraints on classroom capacity, as shown in table 7. In two districts in Illinois—Elgin and Chicago—officials said that they believed that state law did not allow their districts to offer choice under NCLBA if it led to overcrowding in schools. Akron officials told us that they were seeking clarification from the state about whether any transfers in their district would be prohibited by Ohio state law. Memphis officials told us that demand exceeded the capacity at certain schools that were already overcrowded, and use of portables to expand capacity was unrealistic because of the expense and lack of sufficient space on school campuses. In some districts with capacity constraints, open enrollment programs could limit the ability of students to transfer under NCLBA. In all but 2 of the districts we visited, school choice was available to all students through open enrollment programs. These programs offered students the chance to apply for transfers, typically during the winter and spring months, and, in several districts, allowed transferring students to learn which school they would attend before the end of that school year. In contrast, in most districts, students transferring under NCLBA did not learn of the opportunity until just before the next school year started. 
Unless these districts took special care, schools could be filled to capacity with transfers approved under the open enrollment program before NCLBA students had the opportunity to apply. To avoid this situation, Akron gave NCLBA transfers priority and delayed decisions on requests for transfers under its open enrollment program until the decisions on NCLBA transfers had been made. Officials in Fresno, Pittsburgh, and Tacoma reported that they had not yet experienced problems with capacity because few students had transferred. However, some officials expressed concern that capacity could pose a challenge in their district in the future. Specifically, officials in Fresno, Memphis, and Tacoma noted that if more schools were required to offer choice in the future, the number of students eligible to transfer could increase and capacity could become a problem. The districts we visited arranged and paid for the transportation of students who transferred, as required under NCLBA, but did so in a variety of ways as allowed under the law. For instance, some districts provided school buses, while others paid for public transportation or provided cash stipends to cover public or private transportation. In 5 of the 7 districts, school buses picked up elementary students who lived more than 1-2 miles from their schools. For middle and high school students, some districts paid for public transportation by giving students passes or tokens. Finally, Akron gave parents a $170 transportation subsidy at the end of the school year in which students transferred, to cover the costs of public transit or defray the gasoline costs of driving their child to school. In providing transportation, districts used relatively little of the funding that was required to be set aside for school choice transportation and for supplemental services because few students transferred. 
In 2003-2004, the estimated expenditures for transportation represented less than 7 percent of the set-aside funds in all but one district we visited. As shown in table 8, the proportions ranged from less than 1 percent in Akron to about 25 percent in Elgin. Most district officials did not expect to spend the full amount that had to be set aside for the combined costs of choice-related transportation and supplemental educational services. However, some district officials anticipated that transportation expenditures would likely increase as more schools have to offer school choice and more students become eligible to transfer. Education issued final regulations and guidance on school choice within a year of NCLBA enactment, but did so after districts had begun their first year of implementation, and some issues remain unclear. Extensive additional guidance and technical assistance in the form of policy letters, training tools, presentations at conferences, and a handbook on promising practices became available at various times throughout the first and second years of implementation. While district officials we visited generally had access to Education’s guidance, questions concerning the implementation of school choice remained, as might be expected in initial years of implementation. For example, there were “how to” questions about ways to offer choice when building capacity is limited. There were also “what if” questions involving issues that may arise as NCLBA implementation progresses, such as whether districts may use Title I funds for transportation when students choose to remain at a transfer school that subsequently fails to meet its yearly performance goals and is itself required to offer choice. Some of these questions have been addressed in guidance but others remain. 
Responding to the need to get information out quickly, Education issued preliminary guidance in June 2002, before the start of the first school year. The information provided, however, was not always clear or complete. The preliminary guidance was sent out in the form of a “Dear Colleague” letter directly to school district superintendents as well as state education agency officials. In the letter, Education acknowledged that its preliminary guidance was necessarily brief and not as comprehensive as guidance that would be forthcoming. The letter highlighted key topics such as notices to parents, designation of sending and receiving schools, prioritization of students, capacity, and transportation. The letter stated that choice had to be provided, unless prohibited by state law, to all eligible students, “subject to health and safety code requirements.” Some district officials believed this language allowed them to limit the number of transfers based on state or local health and safety codes or classroom size requirements. Subsequent guidance provided additional information about Education’s position on capacity and other issues. Final regulations and draft guidance on choice were issued in December 2002, after the start of the first school year. The final regulations applied to all aspects of Title I, while the draft guidance applied specifically to school choice and was characterized as “non-regulatory” guidance. The final regulations clarified some key information, and the December guidance added extensively to material in the June 2002 letter. For example, in response to numerous requests for clarification of its language on capacity, Education’s regulations made it explicit that districts were required to accommodate all transfer requests while complying with all applicable state and local health and safety codes as well as classroom size requirements. 
Districts had to offer all students at schools identified for choice the option of transferring and could not use lack of capacity as a reason to deny students this option. The regulations explained that state law exempts districts from offering choice only if the state law prohibits choice through restrictions on public school assignments or the transfer of students from one public school to another public school. The December guidance went further to help clarify Education’s position by contrasting its regulations on capacity before and after enactment of NCLBA and providing an explanation for the differences. Because there had been no mention of capacity in NCLBA and some district representatives were uncertain about the meaning of the preliminary guidance in the June letter, the final regulations and December guidance represented an important clarification of Education’s official position on the issue. In these December documents, Education also suggested ways that districts might expand capacity, for example, by adding classes and additional teachers, in order to be able to offer choice to students while adhering to state classroom size requirements and health and safety codes. In the second year of NCLBA implementation, Education updated and expanded its draft guidance and published a handbook on promising practices in the provision of school choice. Education also provided additional assistance in the form of training materials, presentations at various conferences and a toll-free hotline for district superintendents in both the first and second years. See table 9 for a chronology of the various types of guidance on choice issued by Education. The February 2004 guidance was developed in response to state questions, often made at the request of districts, for further clarification of several issues. 
Although Education’s primary relationship was with state agencies, Education officials also made appearances at conferences attended by district and school officials and made a concerted effort, through electronic mailing lists to subscribers and through its Web site, to alert local education officials and other interested parties when it released its latest guidance. In many of the districts that we visited, officials told us that they had access to Education’s guidance, either directly from Education’s Web site, from the state agency, or from a national organization representing their interests, such as the Council of the Great City Schools. One of the major changes in the February 2004 guidance was a list of 10 ways that districts might increase capacity in order to provide school choice for all eligible students requesting transfers. The guidance suggested that districts “employ creativity and ingenuity” in developing ways to expand capacity, such as setting up “virtual” schools, reallocating portable classrooms, or creating “schools within schools” that would be new, distinct schools, with separate faculty, within the physical sites of schools required to provide choice. Some district officials we interviewed expressed reservations about the feasibility of Education’s suggestions on how to develop the needed capacity, in part because of concerns about the costs of implementing the suggestions. The May 2004 technical assistance handbook describes promising practices in several key areas, including capacity, that have been employed to implement choice in 5 school districts. The most detailed and thorough description covers strategies that these districts have used to deal with parental notification and decision-making, but other chapters deal with capacity and transportation, support for sending and receiving schools, use of databases and surveys for planning, and factors leading to success. 
With respect to capacity, the handbook lists the actions that certain districts have taken but does not describe them in detail. For example, the handbook states that Milwaukee established a special team that spent 2 months assessing available capacity; Miami-Dade used portables; Denver used teacher lounges and resource rooms as classrooms; and Mesa created new schools. Individuals interested in more details could contact the districts involved, at the postal or Web site addresses listed in the handbook, to find out more about the timetables and costs of these various strategies. Education officials told us that within the first 6 months following its publication they had sent out over 16,000 copies of the handbook to state officials and to organizations representing local education officials, such as the National School Boards Association and the National Alliance of Black School Educators. Numerous questions concerning current or future implementation issues that were not answered clearly in Education’s February 2004 guidance on choice were raised during our visits with district and school officials. The issues involved how best to handle, within the context of federal regulations and guidance, certain complex situations involving timetables, schools receiving transfers, transportation, and capacity. With respect to timetables for parental notification, district officials we visited in two states were concerned about the accuracy of preliminary state determinations of the schools that made or did not make yearly performance goals. Because NCLBA required that they offer choice by the start of school, the districts were acting on preliminary but possibly inaccurate determinations made by states and were uncertain if there were any circumstances that would permit them to delay choice until they received final determinations. Basically, the questions involved how best to mitigate the risks for all involved—districts, schools, parents, and students. 
Notices sent to parents had included warnings—either that the school status might change within a month or that the offer of choice might be withdrawn. However, there was interest in finding better ways to deal with the uncertainty involved, including what steps should be taken on behalf of parents and students when transfers have occurred either into or out of schools that were designated incorrectly. Even where school designations were known, planning for future contingencies raised a number of questions about schools offered as transfer options and about transportation arrangements. District officials explained that they are operating in a dynamic environment where school performance can change from one year to the next and their status as Title I or non-Title I schools can also change. Officials in one district we visited asked for confirmation that, if they could not find reasonable alternatives, they would be permitted to offer as transfer options those schools that had missed their performance goals for one or more years, as long as they were not Title I schools. Confirmation could be inferred from the February 2004 non-regulatory guidance on choice, but was not clear-cut. Considerations of schools offered as transfer options led into further questions about transportation provided for students who transferred, for example, whether students who had transferred into a Title I school offered as a transfer option could continue to receive Title I-funded transportation if that school later missed its yearly performance goals for 2 consecutive years. In several districts we visited, we found that officials were struggling to find practical and realistic ways to offer choice when building capacity, budgets, and timeframes were limited. Some of these officials had studied the suggestions offered in Education’s February guidance but considered creation of virtual or charter schools to be long-term projects that could not provide capacity in time to meet short deadlines. 
Other officials commented that they did not know what steps to take to create “schools within schools,” as suggested, or how to estimate the costs. Cost considerations were a major issue in several districts where capacity constraints had limited the number of transfers under NCLBA. Education officials told us in November 2004 that they believed that the guidance and technical assistance that they had provided thus far were sufficient to meet the needs identified by state and district officials with whom they were in contact. At that time, they had no specific plans to issue further guidance or provide additional technical assistance on these issues. However, officials added that policy letters will continue to be issued as needed in response to questions raised by states that have not been addressed elsewhere. Finally, some issues that district officials raised during our site visits were not ones for which Education could provide guidance. These issues involved distinctions between federal and state requirements that individual states would be expected to resolve for their districts. For example, officials in one district sought clarification as to whether their state applied the NCLBA interventions both to schools not receiving Title I funds and to Title I schools. Officials in another district were unsure about whether their state exempted schools that did not receive Title I funds from some NCLBA interventions, such as school choice and supplemental services, but not from other interventions, such as corrective action and restructuring after repeatedly missing yearly performance goals. NCLBA is an important and complex piece of legislation, and as implementation proceeds, Education will need to continue to help states and districts address the many issues they face in providing school choice. State and district officials, although positive about the intent of NCLBA, nevertheless identified a variety of challenges in implementing the law. 
Half of the districts we visited did not grant as many transfers as were requested because of constraints on the building capacities at many of their schools. Difficulties related to building capacity are unlikely to diminish in the future, and could become more pronounced if the number of students eligible to transfer increases and the number of schools available as potential transfer options decreases. In the first 2 years under NCLBA, Education data show that the number of schools not making their yearly performance goals increased. Several state officials suggested that this trend will continue. Consequently, it is likely that more schools will be identified for choice, which would increase the number of students eligible for transfer while decreasing the pool of possible transfer schools. Further, new challenges may arise if the schools to which students have transferred in the early years of NCLBA do not themselves make their yearly performance goals. In addition, our work raised questions about how well-informed parents are about the school choice option. In the second year of NCLBA, about 1 percent of eligible students transferred, and without more information on the reasons parents do or do not take advantage of the transfer option, policy makers and school officials may miss opportunities to better serve parents and students through the choice option. In addition, it is unclear whether parents are receiving adequate information to make fully informed transfer decisions. It may be that parents do not fully understand why their child’s school was identified for choice or the educational services available in the transfer school. Education’s longitudinal study of NCLBA will address some of these questions. The parental survey will explore the reasons parents do or do not exercise the transfer option and the circumstances that facilitate or hinder their decisions. 
The results of the survey may give Education and policymakers insight into the reasons behind the numbers of students who have transferred, as well as assist school officials in ensuring that parents are aware of the option. In addition, the technical assistance handbook that Education issued in May 2004 provides some suggestions that may help districts improve their communications with parents, but this information is based on the experiences of only a small number of school districts. Districts may need additional help in various ways, including how to provide information on choice options that can be easily understood by parents and how to provide the additional information parents need to make an informed decision. Finally, little is known about transferring students or the effects of transfers, but Education’s plans for its major study of NCLBA are promising. As planned, the study should provide insight into the demographic characteristics of students transferring under the school choice provision and the extent to which the lowest achieving students from low-income families, identified for priority consideration under the law, are exercising the transfer option. Equally important is Education’s proposed analysis of how transfers may affect the subsequent academic performance of students who change schools under the choice provision of NCLBA. This portion of Education’s proposed study is critical to informing policy makers and school officials about whether the school choice option is achieving its intended outcome of improving student achievement; however, this part of the study is still in the design phase. To help states and districts implement choice and to gain a better understanding of its impact, we recommend that the Secretary of Education: Monitor issues related to limited classroom capacity that may arise as implementation proceeds, in particular, the extent to which capacity constraints hinder or prevent transfers. 
Based on this monitoring, Education should consider whether additional flexibility or guidance addressing capacity might be warranted. Collect and disseminate additional examples of successful strategies that districts employ to address capacity limitations and information on the costs of these strategies. Assist states in developing strategies for better informing parents about the school choice option by collecting and disseminating promising practices identified in the course of working with states and districts. For instance, Education might collect and share examples of clear, well-written, and particularly informative notices. In addition, Education should make the results of its parental surveys, conducted as part of its national study, widely available for use by states and districts to help them better refine their communications with parents regarding school choice. For its student outcomes study, Education should use the methodology with the greatest potential to identify the effects of the school choice transfer on students’ academic achievement. The methodology selected should allow it to compare academic outcomes for transferring students over several years with outcomes for similar students not transferring, while accounting for differences in student demographics. The study should also examine the extent to which transferring students remain in the schools to which they transfer. We provided a draft of this report to the Department of Education for review and comment. Education’s written comments appear in appendix VII. Recommended technical changes have been incorporated in the report as appropriate. Education said that the report would be a useful addition to the literature on the public school choice provision and indicated its intent to use the findings and recommendations in the report to improve Education’s technical assistance to states and districts and to strengthen its implementation studies. 
Specifically, Education agreed with our recommendations concerning monitoring capacity and disseminating successful strategies to meet capacity challenges, noting several projects under development that might assist in carrying out these recommendations. Education also strongly supported our recommendation that it assist states in better informing parents about the school choice option and related some of its plans for doing so. Regarding our recommendation concerning the department’s study of choice implementation, Education said that it is working to design a rigorous analysis of student outcomes and will take our recommendation into consideration as it refines the design for the study. We are sending copies of this report to appropriate congressional committees, the Secretary of Education, and other interested parties. Copies will be made available to other interested parties upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7215. Key contributors are listed in appendix VIII. The objectives of this report were to determine (1) the extent to which Title I schools have been affected by the school choice provision of the No Child Left Behind Act (NCLBA) of 2001 in terms of the number of schools identified for choice and the number of students exercising the option; (2) the experiences of selected school districts in implementing the choice provision; and (3) the kinds of guidance and technical assistance that the Department of Education provided states and districts as they implemented public school choice. 
To determine the extent to which schools have been affected by school choice in terms of the number of schools required to offer choice and the number of students exercising the option, we analyzed data for school years 2002-2003 and 2003-2004 using two sources: our survey of state education agencies and state reports to Education. To obtain data on the number of schools that had to offer choice, we used a different source for each school year. For 2002-2003, we surveyed state education agencies in all 50 states, the District of Columbia, and Puerto Rico; for 2003-2004, we obtained data from Education that had been reported by each state in its Consolidated State Performance Report: Part I. Information on the number of students that chose to transfer to another school for school year 2002-2003 was obtained from the Consolidated State Performance Report: Part I; for 2003-2004, the data were obtained from our survey of the state education agencies, the District of Columbia, and Puerto Rico. Although there was a 100 percent response rate to our survey and to Education’s report, not all states provided complete information. Seven states did not provide any transfer information for 2003-2004 because they did not plan to collect this information until later in school year 2004-2005. To test the reliability of these data, we performed a series of tests, which included checking that data were consistent, that subtotals added to totals, and that data provided for one year bore a reasonable relationship to the next year’s data and to data reported elsewhere, including state education Web sites. Where we found discrepancies or sought clarification, we followed up with state officials. In several states, officials revised the numbers that they had initially reported to us or to Education. We determined these data to be sufficiently reliable for our purposes. In addition, we sought information on schools and students from several sources. 
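The reliability tests described above (internal consistency, subtotals adding to totals, and year-over-year plausibility) amount to a handful of mechanical checks over the reported figures. A minimal sketch of such checks, in which the record structure, field names, values, and tolerance are illustrative assumptions rather than the actual format in which states reported:

```python
# Hypothetical state records: the field names, counts, and the plausibility
# tolerance below are illustrative assumptions, not the actual data format.
records = [
    {"state": "A", "by_grade": [120, 95, 60], "reported_total": 275,
     "prior_year_total": 250},
    {"state": "B", "by_grade": [40, 30], "reported_total": 75,
     "prior_year_total": 70},
]

def flag_discrepancies(recs, max_ratio=5.0):
    """Flag states whose subtotals do not add to the reported total, or
    whose year-over-year change looks implausibly large."""
    flagged = []
    for r in recs:
        # Check that subtotals add to the reported total.
        if sum(r["by_grade"]) != r["reported_total"]:
            flagged.append((r["state"], "subtotals do not add to total"))
        # Check that this year's total bears a reasonable relationship
        # to the prior year's total.
        ratio = r["reported_total"] / max(r["prior_year_total"], 1)
        if not (1 / max_ratio <= ratio <= max_ratio):
            flagged.append((r["state"], "implausible year-over-year change"))
    return flagged

print(flag_discrepancies(records))
# State B is flagged: its grade-level counts sum to 70, not 75.
```

Discrepancies flagged this way would then be followed up with state officials, as described above, rather than corrected automatically.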
The grade span and location of schools (urban or rural) identified for choice and the demographics of their students were available from the National Center for Education Statistics (NCES). We were not able to describe the characteristics of the schools required to offer choice in 2002-2003 because the list of schools was not available. We analyzed data for the nation as a whole and by state, expressing the results in relation to the universe of all Title I schools or all public K-12 schools. When we compared results in the first school year with results in the second, we compared only states that provided information for both years and eliminated any states that provided data only for a single year. Because NCES data were not available for 2003-2004, the year for which we obtained lists of schools identified for choice, we used as a proxy the 2002-2003 enrollment data for these schools, including student numbers, minority status, and eligibility for the free or reduced-price school lunch program as a measure of family income. Because these were the only data available, and because we considered them adequate for our purposes, we used 2002-2003 enrollment data to characterize schools in 2003-2004, based on an assumption that at the aggregate level the numbers and characteristics did not differ significantly from one year to the next. We discussed this assumption with education officials at NCES and, for a sample of states, tested it by checking the changes from 2001-2002 to the following year for schools identified. We also tested the reliability of the NCES data by comparing our numbers to published totals and by reviewing documentation. We considered these data to be sufficiently reliable for our purposes. To determine the experiences of selected school districts in implementing NCLBA school choice, we visited eight districts that had schools required to offer choice. 
On the basis of our discussions with state officials and our own research, we selected districts located in seven states—California, Illinois, Ohio, Mississippi, Pennsylvania, Tennessee, and Washington. We selected districts based on geographic location and district profile in terms of the number of schools required to offer school choice, student population, and demographic profile. (See table 10 for district characteristics.) During our visits, we interviewed officials in school district offices and, in most districts, also interviewed principals of schools that were required to offer school choice as well as principals of schools that received transferring students. In each of these districts, we attempted to obtain data on the characteristics of students—such as race, poverty, and academic achievement—who had transferred to another school under NCLBA school choice in school year 2003-2004; we had limited success in obtaining such information from most schools. We were able to obtain information on transferring students' academic achievement from one district, but most districts had not collected this information. To determine the kinds of guidance and technical assistance that Education provided states and districts as they implemented NCLBA public school choice, we reviewed regulations, policy letters, and nonregulatory guidance provided to states and districts. We also interviewed Education officials involved with developing the guidance and providing assistance to states in implementing school choice. To obtain the perspective of officials using the guidance provided by Education, we interviewed district officials at all eight sites and state agency officials in two states. In addition, to obtain a national perspective on the effectiveness of Education's guidance and assistance to the states and districts, we interviewed officials at the Council of the Great City Schools, the Council of Chief State School Officers, and the Center on Education Policy.
For Chicago, minority data were not available for one of the 40 transfer schools. Two of the 47 transfer schools in Memphis were new in school year 2003-2004, and no data were available on the poverty or minority rates for the student enrollment at these two schools. One of the 8 transfer schools in Tacoma was new in school year 2003-2004, and no data were available on the poverty rate for the student enrollment at this one school.

In addition to those named above, the following individuals made important contributions to this report: Nancy Purvine, Sara Margraf, Scott Spicer, John Mingus, Amy Buck, and Margaret Armen.

No Child Left Behind Act: Improvements Needed in Education's Process for Tracking States' Implementation of Key Provisions. GAO-04-734. Washington, D.C.: September 30, 2004.

No Child Left Behind Act: Additional Assistance and Research on Effective Strategies Would Help Small Rural Districts. GAO-04-909. Washington, D.C.: September 23, 2004.

Special Education: Additional Assistance and Better Coordination Needed among Education Offices to Help States Meet the NCLBA Teacher Requirements. GAO-04-659. Washington, D.C.: July 15, 2004.

Student Mentoring Programs: Education's Monitoring and Information Sharing Could Be Improved. GAO-04-581. Washington, D.C.: June 25, 2004.

No Child Left Behind Act: More Information Would Help States Determine Which Teachers Are Highly Qualified. GAO-03-631. Washington, D.C.: July 17, 2003.

Title I: Characteristics of Tests Will Influence Expenses: Information Sharing May Help States Realize Efficiencies. GAO-03-389. Washington, D.C.: May 8, 2003.

Disadvantaged Students: Fiscal Oversight of Title I Could Be Improved. GAO-03-377. Washington, D.C.: February 28, 2003.

Title I: Education Needs to Monitor States' Scoring of Assessments. GAO-02-393. Washington, D.C.: April 1, 2002.

Title I Funding: Poor Children Benefit Though Funding Per Poor Child Differs. GAO-02-242. Washington, D.C.: January 31, 2002.
Title I Program: Stronger Accountability Needed for Performance of Disadvantaged Students. GAO/HEHS-00-89. Washington, D.C.: June 1, 2000.
The school choice provision of the No Child Left Behind Act (NCLBA) of 2001 applies to schools that receive Title I funds and that have not met state performance goals for 2 consecutive years, including goals set before the enactment of NCLBA. Students in such schools must be offered the choice to transfer to another school in the district. GAO undertook this review to provide the Congress a report on the first 2 years of the implementation of NCLBA school choice. GAO reviewed (1) the number of Title I schools and students that have been affected nationally, (2) the experiences of selected school districts in implementing choice, and (3) the guidance and technical assistance that Education provided. GAO collected school performance data from all states, interviewed Education officials, and visited 8 school districts in California, Illinois, Ohio, Mississippi, Pennsylvania, Tennessee, and Washington. About 1 in 10 of the nation's 50,000 Title I schools were identified for school choice in each of the first 2 years since enactment of the No Child Left Behind Act (NCLBA) of 2001. The proportion of schools identified for choice varied by state. About 1 percent of eligible children, or 31,000 students, transferred in school year 2003-2004. However, little is known about the students who did and did not transfer or factors affecting parents' transfer decisions. Education has launched a study that will yield some information on these topics. Officials in most of the 8 districts GAO visited said they welcomed NCLBA's emphasis on improved performance, but had difficulties providing choice because of tight timeframes and insufficient classroom capacity. Final state determinations of the schools that met state yearly performance goals were not generally available before the school year started, so offers of transfers were based on preliminary determinations. District officials expressed concern that parents had inadequate time and information to make an informed decision. 
Parents were offered at least 2 possible schools as transfer options, but many of these schools had not met state performance goals in the most recent year. Because of limited classroom capacity in 4 of the districts, some students did not receive the opportunity to transfer. For students who transferred, transportation was provided on school buses, public transit, or personal cars, and most districts spent less than 7 percent of the pool of funds that NCLBA required be set aside for that purpose in school year 2003-2004. Education issued extensive guidance on choice. However, the complexity of providing school choice raises a number of issues that have not been addressed in guidance available through October 2004, such as how to handle cases in which schools receiving transfer students are later themselves identified for choice and how to expand capacity in the short term within budgetary constraints.
Under the fee-demonstration program, up to 100 sites per agency have been permitted to charge, collect, and establish recreation fees. The National Park Service and BLM have 100 sites each participating in the program, while the FWS and the Forest Service have 88 sites each. Because the program is a demonstration program, the conference committee encouraged the agencies to be innovative in designing and collecting fees and to coordinate their fees with other federal, state, and local recreational sites. Developing innovative fees and collection methods is a key objective of the program because the Congress viewed experimentation with fees as a way to improve customer service. Fee innovation was envisioned as charging different types of fees beyond simply charging fees for entering a site or using a facility or increasing fees that existed prior to the program. For example, fee innovation includes such things as basing fees on the length of stay or the season of the year visited. Innovative fee collection procedures were encouraged to provide visitors with a broader variety of payment options for recreation fees, such as using automated fee payment machines and credit or debit cards. Coordinating fees within and among the agencies, as well as with other nearby recreational sites, is also an important aspect of the program. Agencies are encouraged to work toward a seamless program by cooperating to eliminate inconsistent, duplicative, or overlapping fees that can confuse visitors or otherwise detract from the quality of service provided to them. Since fiscal year 1997, the four participating agencies have collected more than $600 million in the program. In fiscal year 2000, revenue collections totaled $186 million, with the National Park Service collecting over 75 percent of the total (see fig. 1). Being innovative is an important goal of the fee demonstration program.
While some of the sites surveyed experimented with innovative types of fees and fee collection methods, room for improvement exists—particularly in the areas of fee collection and coordination. Currently, many sites use traditional collection methods and have not adopted innovative practices that could improve the quality of service to the visiting public. Furthermore, agencies frequently are not pursuing opportunities to better coordinate fees among their own sites; with other agencies; or with other nearby, nonfederal recreational sites. As a result, existing fees are sometimes overlapping, duplicative, or confusing. The experimental nature of the fee demonstration program furnishes agencies with the opportunity to try different types of recreation fees. The agencies are expected to take advantage of this opportunity by trying different types of fees, rather than merely increasing existing entrance or user fees. Our survey found that overall about 25 percent of sites tried some form of innovative fees. The remaining sites—about 75 percent—continued with their traditional approaches for charging entrance and user fees. For purposes of our analysis, we defined fee innovation as doing more than taking a traditional approach to setting fees. Specifically, if sites made no changes to their fees or increased fees that were already in place when the program began, we did not consider them to be innovative. On the other hand, if sites used nontraditional approaches like basing fees on their visitors' length of stay or offering fee incentives for visiting during off-peak periods, we considered the sites to be innovative. Such variable pricing, often referred to as differential pricing, offers visitors a greater range of recreational fee prices. It also enables agencies to better manage visitation during peak periods, to better align fees with the costs of providing services, and to help lessen overcrowding and its negative effect on resources.
The survey results show that 87 sites (about 25 percent of all sites surveyed) have experimented with some type of fee innovation. The remaining 259 sites (about 75 percent) in the program have not experimented with innovative fees. The extent of fee innovation varied considerably among the agencies (see fig. 2). The following examples illustrate the kinds of fee innovations that have been used:

48 sites (14 percent of those surveyed) reported reducing fee prices during off-peak or shoulder seasons, such as the fall or spring. For example, BLM's Upper Colorado River site, located in Colorado, has reduced its camping fees in the spring and fall, when fewer services are available. According to the fee manager, the site began this practice before the fee demonstration program to align fees more closely with the level of service provided.

35 sites (10 percent of those surveyed) reported using some other type of peak-period or differential pricing. For example, to help manage high visitation levels during the three summertime holiday weekends, the Forest Service's Sand Lake Recreation Area in Oregon added $10 to its entrance fee and limited the number of off-highway vehicles to 1,200 for those weekends. These changes helped offset some of the administrative and staffing costs associated with increased holiday weekend use and provided an incentive to shift visits to non-holiday weekends, according to an agency official. Before the fee demonstration program, the area charged no additional fees, and increased operational costs during the holiday weekends were absorbed into the site's existing budget, according to a Forest Service official.

Each agency experimented to some extent with new or innovative entrance or user fees. We recognize that some innovative types of fees may not be practical or feasible at all locations.
However, whether this degree of experimentation is acceptable in terms of achieving the results expected by agency managers cannot be determined because none of the agencies developed performance expectations or criteria for success. The fee demonstration program encouraged agencies to be innovative and improve visitor service by using modern, more convenient fee collection methods. Among the four agencies, a number of sites have used new or innovative approaches to collecting fees to improve visitor convenience, reduce collection costs, and improve the safety and security of employees collecting fees. However, over 60 percent of sites surveyed reported that there was little or only some difference in their fee collection methods for both entrance and user fees since the program began. These data suggest that much more can be done to offer visitors a wider variety of options for paying recreation fees. The agencies could accomplish this goal by more frequently adopting commonly used retail practices, such as accepting credit cards, where feasible. Our survey asked sites about their use of fee collection methods for both entrance and user fees during the demonstration program. These methods included more traditional methods, such as collecting fees at an entrance station or at an "iron ranger" fee tube—a metal tube used as a self-service payment station—as well as methods that could be considered innovative and more convenient for visitors when compared with the traditional practices used by the agencies: credit cards, automated fee payment machines, the Internet, 800 toll-free telephone numbers, and off-site vendor sales. We selected these five methods because they are collection and payment methods used every day in the retail, recreation, and entertainment industries. As tables 1 and 2 show, relatively few sites have experimented with the five innovative collection or payment methods for entrance and user fees.
In addition, significant variations existed among agencies in the use of these methods. For example, while more than 40 percent of the Forest Service sites collected user fees via off-site vendor sales, only 6 percent of National Park Service sites used this method. The sites that did experiment with any of these collection practices generally found that they increased visitor convenience, reduced agency collection costs, and increased the safety and security of employees collecting fees. Using credit cards can make it easier for visitors to pay for multiple entrance or user fees simultaneously and can increase the safety of employees collecting fees by reducing cash handling. Nonetheless, while credit cards are ubiquitous in retail transactions, many of the demonstration sites have been reluctant to use them. For example, 30 percent of the surveyed sites that charge entrance fees and 14 percent of the surveyed sites that charge user fees accept credit cards for payment at an entrance station. While about half (33 of 69 sites that collect entrance fees) of the Park Service demonstration sites accept credit cards at an entrance station, a number of popular parks visited by millions annually, such as Yosemite National Park, do not accept credit cards. In commenting on a draft of this report, the Park Service stated that Yosemite National Park will begin accepting credit cards by the end of 2001. An October 2000 Forest Service review of fee demonstration sites in one of its regions questioned why it was so difficult for the agency to establish credit card acceptance and cited significant customer inconvenience because of this. The review noted that credit cards would also greatly reduce cash handling and improve employee safety. 
In addition, our survey results show that none of the top five FWS revenue sites, which accounted for 42 percent of the agency's fiscal year 2000 total fee demonstration revenues, offer visitors the option of using credit cards for fee payment. Overall, while officials from the Interior agencies and the Forest Service agree that more can be done in this area, they noted that it may often not be feasible for a number of reasons. For example, they said that credit cards may not be cost-effective at all sites and that the lack of adequate infrastructure, such as on-site power or phone lines in remote locations, prevented some sites from accepting credit cards. Another collection technique being used at some sites is the automated fee payment machine, similar to the automated teller machines used by banks and other financial institutions. With automated payment machines, visitors can use cash or credit cards to pay a variety of fees, such as entrance, campground, or boat launch fees, and the machines issue receipts showing that the fees were paid. At BLM's Imperial Sand Dunes Recreation Area, for example, a fee demonstration site in southern California, 17 automated fee payment machines are used to collect $10 for a weekly pass or $30 for an annual pass from users of off-highway vehicles (see fig. 3). Total fee revenue at this site in fiscal year 2000 was about $400,000, according to a BLM official. About 500,000 people use the 118,000-acre site each year, peaking at about 100,000 people during Thanksgiving weekend. According to a BLM official at the site, the battery-operated fee machines, which are owned and maintained by a private contractor, are very convenient to use and accept both cash and credit cards for fee payment. Use of the machines has significantly reduced the number of agency staff required for fee collection at the sites where the machines are located.
Despite the potential of automated fee machines to lower visitors' waiting times during peak periods and to reduce the agencies' collection costs, only 8 percent of the 126 sites charging entrance fees and 8 percent of the 259 sites charging user fees employed such machines. According to Park Service and Forest Service officials, use of automated fee payment machines may not always lower the cost of collection. In addition, officials from the two agencies said some sites are reluctant to purchase these machines because of the temporary nature of the fee demonstration program and the potential for vandalism when they are installed in remote locations. We recognize that automated fee payment machines may not be practical or cost-effective at all demonstration sites, such as those with low visitation or remote access. However, among those sites that have not installed automated payment machines for collecting entrance or user fees are several that have a high volume of visitors—each with over a million annually—including Acadia and Yellowstone National Parks; the Forest Service's Sawtooth National Recreation Area in Idaho and Shasta-Trinity National Recreation Area in California; and the FWS' Chincoteague National Wildlife Refuge in Virginia. In commenting on a draft of this report, the Park Service and the Forest Service noted that automated fee payment machines were not installed at some locations for various reasons. The Park Service said that such machines were not installed because power and telephone lines may not be available and because of parks' desire to maintain uniformed-ranger contact with visitors when they pay fees. The Forest Service cited issues such as sites having multiple points of entry, vandalism concerns, and the potentially short-term nature of the program. We agree that these are important considerations in deciding whether to use these machines.
However, in light of the very limited use of these machines to date—even at many high-visitation parks and forests—we believe that in the interest of improving visitor services and convenience, the agencies need to pursue all opportunities in this area. Another little-used technique is paying entrance or user fees over the Internet or via a toll-free telephone number. These techniques can also increase customer convenience, encourage less cash handling at individual sites, and lessen visitor delays during peak times. For example, camping and hiking permits for Paria Canyon-Coyote Buttes in Arizona, one of BLM's demonstration sites, are sold via the Internet. Overnight camping in the Paria Canyon area and hiking in the Coyote Buttes area are each limited to 20 people a day. Using the Internet allows visitors to obtain information on the area, check on the availability of required camping and hiking permits for particular dates, make reservations, fill out and submit detailed application forms, and print out the application forms for mailing. A BLM official responsible for managing the program said the Internet payment method was very successful and that it accounted for about 80 percent of total permit sales at that site. Despite the common use—and convenience—of the Internet and toll-free telephone numbers for conducting retail transactions today, only 2 percent of all sites surveyed used the Internet for sales of entrance fees, and 10 percent used it for sales of user fees. Similarly, only 2 percent of sites surveyed used a toll-free telephone number for sales of entrance fees, and 11 percent used one for sales of user fees. Finally, using off-site vendors to collect entrance or user fees can be more convenient for the visitor and more efficient for the agency.
In some instances, paying fees at a location inside a site may not always be convenient, particularly if the site has no main entrance or has multiple access points, as is the case at many Forest Service recreation sites. In such situations, some sites have experimented with having small businesses, such as gas stations, grocery stores, and fishing tackle stores, or other groups in the vicinity of or adjacent to the site collect entrance and user fees from visitors. For example, about 240 vendors sell passes to visitors for recreation in 17 national forests in Oregon and Washington State that participate in a fee demonstration project called the Northwest Forest Pass—a user fee for payment at developed recreation facilities and for trailhead parking. According to a Forest Service official, use of off-site vendor sales has reduced agency operational costs as well as improved visitor convenience. Despite the many advantages of off-site vendor sales, less than 15 percent of all sites use them for sales of either entrance or user fees, although 42 percent of Forest Service sites use this method for sales of user fees. National Park Service officials said that they do not use off-site sales as much as the Forest Service because, unlike the Forest Service, many of their sites rely on entrance stations for fee collection. In commenting on a draft of this report, the Park Service reported that it is expanding vendor sales of passes. The Park Service also commented that it has implemented other types of innovations, such as computer-based cash register systems, electronic banking, and commercial-tour-fee vouchers. In addition, FWS commented that the report assumes that more traditional fee collection methods equate to poorer customer service compared with more sophisticated, higher-technology methods.
While we recognize that existing fee collection methods may provide adequate customer service at some recreation sites, at others, especially those with high visitation, greater use of more innovative collection methods can improve visitor convenience, reduce collection costs, and improve the safety and security of employees collecting fees. The legislative history of the fee demonstration program emphasizes the need for the participating agencies to work together to minimize or eliminate confusion for visitors when overlapping or inconsistent fees are charged. In implementing the fee demonstration program, management in each agency has encouraged local site managers to coordinate fees to avoid such confusion. However, in the final analysis, whether coordination occurs is largely based on the desire and will of local site managers. The site managers responding to our questionnaire reported that about 30 percent of their sites—103 out of 346—began coordinating their fees with other federal, state, or local recreation sites after the fee demonstration program began. While coordination of fees may not be feasible at all recreation sites, there are many additional opportunities for addressing confusing fee situations by identifying and eliminating overlapping or inconsistent fees. Figure 4 shows the extent of coordination by each agency. So far, the coordination that has occurred has led to some successes when sites have worked together to eliminate overlapping and inconsistent fees. The following examples illustrate how some sites have simplified fees to better serve visitors. Seventeen recreational sites along the Oregon coast accept the Oregon Pacific Coast Passport, which allows unrestricted access for entrance, day use, or parking at each facility.
These 17 sites are a combination of federal and state locations and include a site from the National Park Service, a BLM site, several Forest Service sites, and numerous state park sites. The per-vehicle pass is offered as either an annual pass ($35) or a 5-day pass ($10). This pass was initiated to reduce visitor confusion and frustration over having to pay a fee at each different agency managing the recreational sites along the Oregon Coast. Prior to the fee demonstration program, visitors were required to pay entrance or other fees at each site individually. The Idaho Department of Parks and Recreation and four federal agencies—the Bureau of Reclamation, the Park Service, the Forest Service, and BLM—together offer the Visit Idaho Playgrounds pass. This per-vehicle pass covers entrance, trailhead and boating fees at over 100 recreational sites statewide and costs either $69 for an annual pass or $10 for a 5-day pass. The pass, which became available in December 2000, covers day-use fees but not camping and group fees. With the advent of the fee demonstration program, the statewide pass was created to address the state's concerns about visitors having to pay so many separate fees. The Park Service's Assateague Island National Seashore in Maryland and the FWS' Chincoteague National Wildlife Refuge in Virginia are adjacent sites located on the same island bordering the Atlantic Ocean. Because of their proximity and their relatively remote location, they share many of the same visitors. To better accommodate the visitors, the managers at the sites developed a reciprocal fee arrangement whereby each site accepts the fee paid at the other site. Despite these examples of successful coordination efforts, there are still many opportunities where more coordination could improve the overall quality of service being offered to visitors by eliminating the confusing fee situations that still exist.
Overall, our survey results indicated that only 30 percent of the sites responding to our questionnaire coordinated fees with other sites. In addition, only 17 percent of the responding sites coordinated fees with sites within their own agency. Even fewer sites coordinated with state and local governments: 9 and 3 percent, respectively. Limited fee coordination by the four agencies has permitted confusing fee situations to persist, both within and among the agencies. At some sites, an entrance fee may be charged for one activity, whereas a user fee may be charged for essentially the same activity at a nearby site. For example, in Washington State, visitors entering either Olympic National Park or Olympic National Forest for day hiking are engaging in the same recreational activity—obtaining general access to federal lands—but are charged distinct entrance and user fees for that activity. For a 1-day hike in Olympic National Park, users pay a $10 per-vehicle entry fee (good for 1 week), whereas hikers using trailheads in Olympic National Forest are charged a daily user fee of $5 per vehicle for trailhead parking. Also, holders of the interagency Golden Eagle Passport—a $65 nationwide pass that provides access to all federal recreation sites that charge entrance fees—are able to use it to enter Olympic National Park but are not able to use it to pay the Forest Service's trailhead parking fee because that fee is a user fee. Such confusing and inconsistent fee situations also occur at sites within the same agency. For example, visitors to some Park Service national historic sites, such as the San Juan National Historic Site in Puerto Rico, pay an entrance fee and have access to all amenities at the sites, such as historic buildings. However, other Park Service historic sites, such as the Roosevelt/Vanderbilt Complex in New York State, charge no entrance fees, but tours of the primary residences require payment of user fees.
As a result, visitors who have purchased annual passes for entrance fees, such as the Golden Eagle Passport or the Park Service's National Parks Pass—a $50 annual pass that provides access to all Park Service sites that charge entrance fees—have access to the San Juan site but have to pay for the activities at the Roosevelt/Vanderbilt Complex. Other examples of this confusing situation involve fees charged for a variety of cave tours within the national park system. For self-guided cave tours at Carlsbad Caverns National Park in New Mexico and the Oregon Caves National Monument, either the Golden Eagle Passport or National Parks Pass is accepted for payment. However, at Mammoth Cave National Park in Kentucky, visitors must pay a user fee to take the self-guided cave tour, and the national entrance passes are not accepted. Several other Park Service sites—such as Jewel Cave and Wind Cave—also charge user fees for their cave tours and do not accept the national entrance passes for payment. In our view, comments made by one of the site managers in response to our questionnaire best sum up the current entrance and user fee situation. According to the fee manager at the Roosevelt/Vanderbilt Complex, "There is ongoing confusion as to what constitutes an entrance and a use. Some sites consider entering the grounds of the site the 'entrance' and others consider entering the 'prime' resource or historic home, etc., the entrance. The public at all levels are confused because the agencies apply the definitions differently—both between and among the agencies." In commenting on a draft of this report, the Park Service acknowledged inconsistencies among Park Service fee demonstration sites that charge entrance and user fees. The agency stated that it was planning to implement recommendations from a recent consultant study that would reduce visitor confusion by using more consistent fees.
To achieve the desired level of experimentation with different types of fees, to improve the use of more up-to-date collection methods, and to foster more coordination among sites, management improvements are needed in three areas: performance expectations and measures, program evaluation and identification of best practices, and resolution of interagency issues. Improvements in each of these areas could enhance the effectiveness of the program and better position the agencies for full-scale implementation of the program if it becomes permanent. The fee demonstration program legislation gave each of the agencies broad authority to implement the demonstration program. All four agencies chose to manage the program on a decentralized basis, giving local site managers considerable discretion in the way the program is implemented. To hold site managers, and the agencies, accountable for helping accomplish the goals of the program, performance expectations and measures that are consistent with program goals are critical. Having clear performance expectations and measures would clarify what site managers are to accomplish and provide a basis for judging performance and identifying areas needing improvement, both on a site-by-site basis and across the program as a whole. However, none of the agencies have developed performance expectations or measures. Without such guidance, it is not surprising that the majority of the demonstration sites have not experimented with new or additional types of entrance fees; used more contemporary, convenient collection methods on a broader scale; or more frequently coordinated fees with other recreation sites. As a result, there is no way to determine whether the level of innovation and coordination that has occurred at a site or throughout the agency is acceptable. Our findings are similar to what the four agencies reported to the Congress in January 1998.
In providing a progress report on the fee demonstration program, the Department of the Interior and the Department of Agriculture stated that “. . . managers are often confused over what primary objective, if any, should take priority, or whether they should attempt to satisfy several objectives simultaneously.” In their January 2000 report to the Congress, the two departments emphasized the need to measure the results of the demonstration program. The 2000 report concluded that “the agencies continue to wrestle with the problem of how to measure . . . accomplishments and to communicate . . . successes in a meaningful way.” This need continues. None of the four agencies have implemented an effective performance measurement system. While the Forest Service has taken some steps to address its performance-management needs for this program by developing draft criteria for determining successful performance, the program is already 5 years old, and the agency does not plan to implement its performance criteria until January 2002. Today, after almost 5 years of experience with the program, the agencies have yet to complete systematic evaluations of the implementation of the program to identify what types of fees and fee collection practices work best. Performing such evaluations and developing knowledge of what the best practices are would enable agency managers to identify the most effective fees and collection practices to use should the program be permanently authorized, which would improve visitor service. The Department of the Interior and the Department of Agriculture, in their January 1998 report to the Congress, cited the importance of assessing the demonstration program. The report, among other things, identified the need to evaluate the effectiveness of the way various agencies approach fees as well as to determine the most effective modes to collect fees. 
Since the January 1998 report, however, no formal system has been developed to document, analyze, and exchange information on innovative fee approaches, fee collection methods, or the extent of coordination with other recreation sites in a consistent and systematic way. According to fee-demonstration program managers, the agencies have shared information on best practices through informal methods such as attendance at conferences and email communications. In commenting on a draft of this report, the Park Service acknowledged that no formal mechanism exists to share information but stated that it has several initiatives under way to address this issue. To its credit, the Forest Service has performed evaluations of many individual sites as well as regional programs for several years. While these evaluations have been useful to agency managers, they have been general in nature, have varied in scope from site to site, and have not consistently focused on specific aspects of the fee demonstration program such as fee innovations, fee-collection practices, and coordination activities. Furthermore, these evaluations have not identified the best practices being used. Moreover, the Forest Service has no process in place for ensuring that recommendations made in its evaluation reports, if any, are implemented. BLM is also beginning to make progress in evaluating its program. It began site evaluations in March 2001 with plans to evaluate its major sites every 4 years. According to BLM officials, these evaluations will focus on the overall management of the program. Park Service officials told us that the agency’s regional offices and park units determine what, if any, audits of the fee demonstration program are performed. The Park Service has performed audits at some individual sites, generally involving cash-handling procedures, but no overall assessment of its fee demonstration program has been completed. FWS conducted no formal systematic evaluations of its demonstration sites or its overall program. 
FWS officials told us that the high turnover in the agency’s fee demonstration program managers in Washington, D.C., resulted in substantial staff time devoted to recruiting and training fee managers, and as a result, no evaluation of fee programs was performed. In June 2000, the Senate Committee on Appropriations expressed similar concerns about the lack of program evaluation. In its report on fiscal year 2001 appropriations for Interior and related agencies, the Committee directed Interior and Agriculture to conduct an assessment of the demonstration program. The assessment is to address many of the same evaluation concerns discussed in this report, such as what criteria are used for evaluating the success of the program and whether sites are coordinating to avoid multiple fee situations. The departments are now in the process of preparing their report. Since the demonstration program began 5 years ago, several interagency issues have emerged that have affected the implementation of the program and the quality of services provided to visitors. While the agencies have been aware of these issues for several years, little has been done to resolve them. The effective resolution of these interagency issues would require agreement, coordination, and consistency among the four participating agencies and two departments. However, no effective interagency mechanism is currently in place to ensure this resolution is accomplished. These conditions have led to confusion among many visitors and have detracted from the overall quality of service provided by the program. Perhaps the best example of an interagency issue that needs to be addressed is the inconsistency and confusion surrounding the acceptance and use of the Golden Eagle Passport. This interagency pass costs $65 annually and is used by tens of thousands of visitors each year. Purchasers of the pass have unlimited access to federal recreational sites that charge an entrance fee. 
However, many sites do not charge entrance fees to gain access to a site; instead, they charge a user fee. For example, Yellowstone National Park, Acadia National Park, and the Eisenhower National Historic Site charge entrance fees. But sites like Wind Cave National Park, Steamtown National Historic Site, and the Delaware Water Gap National Recreation Area charge user fees for general access. If user fees are charged in lieu of entrance fees, the Golden Eagle Passport is generally not accepted even though, to the visitor with a Golden Eagle Passport, there is no practical difference. Our survey results showed that only about 10 percent of the 346 sites that responded to our survey accept the Golden Eagle Passport for a user fee activity, even though many of these sites offer recreation activities similar to those at sites charging an entrance fee. A number of site managers commented about how confused visitors were by the Golden Eagle Passport. The following comments are typical. Park Service site manager: “Visitors do not understand the difference between a user fee and an entrance fee and are upset when their Golden Eagles do not cover user fees. They understand paying for user fees such as camping and boat launch user fees but not the user fee that permits access to a site.” Forest Service site manager: “Sales of the Golden Eagle . . . do not provide the visitor with sufficient information as to where these passes are valid. Frequently, visitors become confused and angry when they attempt to use this pass at Forest Service sites where user fees are charged.” An interagency working group comprising the four agencies' fee-demonstration coordinators recognized the problem of confusion over the use of entrance versus user fees almost 4 years ago in its January 1998 report to the Congress. 
They pointed out that “In the absence of a clear understanding of the difference between entrance fees and user fees, the public may be uncertain why the Golden Eagle passport is accepted in some situations and locations and not in others.” The report also stated that a common definition of entrance fees is needed that can be applied consistently across all federal recreational facilities that accept the Golden Eagle passport. Despite these concerns, the matter remains unresolved. Further exacerbating the public’s confusion over payment of user or entrance fees was the implementation of the Park Service’s single-agency National Parks Pass in April 2000. This pass costs $50 annually and admits the holder, spouse, children, and parents to all National Park Service sites that charge an entrance fee. However, the Parks Pass does not admit the cardholder to Park Service sites that charge a user fee, nor is it accepted for admittance to other sites in the Forest Service and in the Department of the Interior, including BLM and Fish and Wildlife Service sites. According to a former coordinator of the Forest Service’s demonstration program, the Parks Pass removed the Park Service’s incentive to work effectively with other agencies to resolve the problem. However, the Park Service disagrees with this assertion. Another example of an interagency issue that needs to be addressed is the need to promote greater coordination of fees among nearby or adjacent sites. Situations in which inconsistent and overlapping fees are charged for similar recreational activities—such as at Olympic National Park/Olympic National Forest in Washington—need to be resolved in a way that offers visitors a more rational and consistent fee program. We made a similar point in our 1998 report on the program. In that report, we stated that further coordination among the agencies participating in the fee demonstration program could reduce confusion for visitors. 
We recommended that the Secretaries of the Interior and Agriculture direct the heads of the participating agencies to improve their services to visitors by better coordinating their fee collection activities under the Recreational Fee Demonstration Program. We also recommended that the agencies approach such an analysis systematically, first by identifying other federal recreation areas close to each of the demonstration sites and then, for each situation, determining whether a coordinated approach, such as a reciprocal fee arrangement, would better serve the visiting public. While the agencies have taken some steps to address this concern, our survey results show that much more could be done. These longstanding problems illustrate the need for all four agencies to make improvements in interagency communication, coordination, and consistency for the program to become visitor friendly. The extent of coordination that occurs is still left to local site managers. In our view, further fee coordination is not occurring because no effective mechanism exists to ensure that interagency coordination occurs or to resolve interagency issues or disputes when they arise. In commenting on a draft of this report, the Park Service stated that it has been working with other agencies on the acceptance of federal passes. However, there are no specific plans or time frames to resolve this issue. BLM, in commenting on a draft of this report, believed that there is an effective interagency mechanism to deal with cross-agency problems. However, we question the effectiveness of this mechanism because it has been almost 4 years since an interagency working group recognized the confusion over federal passes and visitors continue to be confused over the inconsistent acceptance of federal passes. 
Almost 5 years into the demonstration program, an imbalance is growing in fee revenues—high-priority needs at some lesser-visited sites go unfunded, while more heavily visited sites are able to address their highest-priority needs and more. Many heavily visited sites in the fee demonstration program in the Park Service and the Forest Service generate a large amount of total fee revenues compared with other sites in these agencies and other sites in the BLM and the FWS. However, most of the revenue stays in the collecting units to address local needs, and these needs may not be the highest-priority needs facing the agency. This situation is particularly acute in the Park Service, where fee revenue at 14 parks has effectively increased annual operating budgets by 50 percent or more. In fact, in several cases, such as at the Grand Canyon and Arches National Parks, operating budgets doubled, resulting in a large pool of funds for addressing these parks’ needs. In our 1998 Recreational Fee Demonstration Program report, we suggested that the Congress might wish to consider modifying the current requirement that 80 percent of fee revenue be used in the units generating the revenues to allow for greater flexibility in using fee revenues. The total revenue collected by 42 of the 100 Park Service sites in the fee demonstration program amounted to $116 million in fiscal year 1999. This amount represented about 90 percent of all fee demonstration revenues collected by the Park Service during that year. The 42 sites retained 80 percent of the revenue they collected, or about $92.8 million. Furthermore, of these 42 sites, 14 retained fees that ranged from 50 percent to more than 100 percent of their fiscal year 1999 operating budgets. 
Three of these sites retained fee revenue that exceeded their annual operating budgets for that year. For example, Arches National Park retained fee-demonstration revenue of $1.4 million—156 percent of its $911,000 fiscal year 1999 operating budget—and Grand Canyon National Park retained fees of $19.5 million—116 percent of its $16.8 million operating budget for that year. In contrast, if the remaining 20 percent collected by the 42 sites that year ($23.2 million) were provided to the other 342 park units within the national park system, each unit would receive only about $68,000 for improving visitor services and program operations. The Forest Service also has many high-revenue sites. The total fee demonstration revenue collected by 17 of the 81 sites in the program amounted to $17.5 million, or 66 percent of the total amount collected by the Forest Service during fiscal year 1999. These 17 sites encompass about 50 national forests in the country. The Forest Service allows sites to retain 90 to 100 percent of fee demonstration revenues collected. Assuming that these 17 sites retained 90 percent of the fee revenue they collected and the balance of $1.75 million was made available to the other 105 national forests, each forest would receive only about $17,000 for improving visitor services and program operations. In commenting on a draft of this report, the Forest Service responded that our statement that the Forest Service has many high-revenue sites is misleading because several of those sites are on multiple forests and the revenue per forest is often modest as a percentage of appropriated funds. In this regard, the Forest Service noted that it does not have any sites in an “over funded” situation at this time. While some demonstration sites may have more needs than fee revenue can address, our concern is that the agency be provided with the flexibility to address its highest-priority needs first. 
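The retention arithmetic described above can be verified with a short calculation. The sketch below uses the fiscal year 1999 figures reported in the text; the function name and structure are ours, not the agencies'.

```python
# Sketch of the fiscal year 1999 retention arithmetic described in the text
# (dollar amounts in millions; figures taken from the report).

def split_revenue(total_collected, retain_share):
    """Split collected fee revenue into the portion retained by the
    collecting sites and the balance available to other units."""
    retained = total_collected * retain_share
    return retained, total_collected - retained

# Park Service: 42 high-revenue sites collected $116 million and kept 80%.
retained_nps, balance_nps = split_revenue(116.0, 0.80)
per_other_park = balance_nps * 1e6 / 342   # shared among the other 342 units

# Forest Service: 17 sites collected $17.5 million; assume 90% retention.
retained_fs, balance_fs = split_revenue(17.5, 0.90)
per_other_forest = balance_fs * 1e6 / 105  # shared among the other 105 forests

print(f"Park Service balance: ${balance_nps:.1f}M -> ~${per_other_park:,.0f} per unit")
print(f"Forest Service balance: ${balance_fs:.2f}M -> ~${per_other_forest:,.0f} per forest")
```

The calculation reproduces the report's approximations: about $68,000 per non-participating park unit and about $17,000 per other national forest.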
As the Forest Service acknowledged in its comments, it has not determined its highest-priority needs. Compared with the Park Service and the Forest Service, the total fee demonstration revenue generated by BLM and the FWS was small—$5.2 million and $3.4 million, respectively. BLM had only 15 sites, and FWS only 6, that each generated more than $100,000. Revenues from the fee demonstration program may not always be used to meet the highest-priority needs of the two agencies that generate almost all of the fee revenue. We found this to be the case in two prior reviews of the fee demonstration program. Furthermore, officials in the agencies participating in the fee demonstration program acknowledge that revenue from the program is not always spent on the highest-priority projects. This condition exists for two reasons. First, the National Park Service and the Forest Service do not maintain a centralized list of priority needs. As a result, the use of fee revenue is not based on an agencywide determination of priority needs. Second, 80 percent of fees collected must be used at the site where they were collected, and thus sites that collect most of the revenue use it to meet their local needs even if these needs are minor in comparison with those at other locations where funding is not as plentiful. In accordance with this requirement, each of the demonstration sites within the Park Service retains 80 percent of fee revenue collected. The Forest Service allows each site to retain 90 to 100 percent of revenue. Since these agencies are retaining 80 to 100 percent of fee revenue at a site, agency officials consider some sites to be “cash rich,” because they have high fee revenues to meet many needs while other sites have not been able to obtain sufficient revenue to meet their priority needs. 
We reviewed the use of fee revenues at high-revenue sites, at lower-revenue sites, and at sites not in the fee demonstration program to determine how the revenues are being used. For example, during fiscal year 1999, Grand Canyon National Park retained about $19.5 million in fee demonstration revenue that it used to fund many projects, including $4.3 million to construct, repair, and rehabilitate restrooms parkwide and $3.6 million to rehabilitate a park headquarters building and convert a visitor center into administrative offices. Also, Castillo de San Marcos National Monument in Florida retained about $1.1 million in fee revenue—almost doubling its $1.2 million operating budget in fiscal year 1999—for use in funding several projects, including $500,000 to replace deteriorated museum exhibits and $485,000 to construct a museum storage facility. We could not determine the extent to which these parks are using fee revenue to meet the agency’s highest-priority needs because the agency does not maintain a centralized list of priority needs. As a result, these parks have collected revenue to address many needs that may not always be the highest-priority needs within the national park system. According to the Park Service southeast regional coordinator, Castillo de San Marcos National Monument is currently using fee revenue to meet its deferred needs; however, in future years, given its high fee revenue, retaining 80 percent of fee collections would not result in the most effective use of revenue because of higher-priority needs at lower-revenue sites and at other sites not in the fee demonstration program. In contrast to the higher-revenue sites, some of the lower-revenue sites and sites not in the fee demonstration program have not been able to address their high-priority needs because of the limited availability of fee demonstration revenues. 
For example, since fiscal year 1999, two non-fee demonstration park units—Pipe Spring National Monument in Arizona and Fort Union National Monument in New Mexico—have been unable to obtain a sufficient amount of the 20-percent fee revenue to install fire suppression systems to protect their primary historic structures, museums, and valuable curatorial collections. Pipe Spring and Fort Union had requested $179,000 and $108,000, respectively, for these projects. According to officials in these two park units, they have received limited fee demonstration revenue to meet their priority needs. Officials from the four land management agencies in the fee demonstration program acknowledged that some sites with large fee revenues may eventually have more revenue than they need to meet their priority needs, while other lower-revenue sites in the program and sites not participating in the demonstration program may have limited or no fee revenues to meet their priority needs. For example, according to the January 1998 Interior and Agriculture report to the Congress on the fee demonstration program, “. . . it is possible that some key revenue-producing sites may quickly reduce their backlog projects and then be faced with accumulating large balances in their fee revenue accounts, funding projects that would rank low in priority compared to projects elsewhere in the agency, or searching for additional projects just to spend the money.” The report further states that “This could be a significant problem for an agency if, at the same time, there remain substantial backlogs at other agency sites that either have low visitation, or are not authorized to charge recreation fees.” The return of most of the revenue to the collecting sites for use in improving services and facilities is a key incentive for fee collection and for the high level of visitor support now enjoyed by the agencies. 
However, because a small percentage of sites generate a high percentage of the agencies’ total revenue, the agencies suggested, in the January 31, 2000, Recreational Fee Demonstration Program Progress Report to Congress that they needed increased flexibility in some situations to use more than 20 percent of the fees at sites other than where they were collected. They pointed out that this flexibility would result in a more efficient use of fee revenue to meet the highest-priority needs of the agencies. In commenting on a draft of this report, the National Park Service acknowledged that there is a need for flexibility in allocation formulas to ensure that fee revenue funding can be made available to parks with the greatest needs. It also stated that its revised project management system due in November 2001 will help to ensure that the priority needs of individual parks are identified and funded. Furthermore, the Forest Service as well as the Interior agencies stated back in 1998 that they would evaluate whether retaining 80 percent of fee revenue at the collecting sites would constitute a problem in the long run as the fee demonstration program progresses. Although the fee demonstration program has been in effect for over 5 years, such an evaluation has not been conducted. In our 1998 Fee Demonstration Program report, we stated that the Congress might wish to consider modifying the current requirement that 80 percent of fee revenue be used in the units generating the revenues to allow for greater flexibility in using fee revenues. If this requirement were changed, the agencies could consider various options that could result in a more equitable use of fee revenue, while at the same time maintaining incentives for collecting fees. For example, the agencies could allow sites to use an amount up to a specified maximum percentage amount of their operating budget (e.g., up to 60 percent of their operating budget). 
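One hypothetical way to implement the flexibility option mentioned above is to cap the revenue a site may retain at a fixed share of its operating budget (the report's example uses 60 percent) and pool the excess for redistribution. The sketch below is illustrative only; the function name and the use of retained-fee figures as inputs are our assumptions, not the agencies' actual policy.

```python
# Illustrative sketch of a budget-capped retention rule: a site keeps fee
# revenue only up to cap_share of its operating budget, and the excess is
# pooled for redistribution to higher-priority needs elsewhere.
# Figures and function names are illustrative, not actual agency policy.

def capped_retention(collected, operating_budget, cap_share=0.60):
    """Return (retained, excess) when retention is capped at
    cap_share of the site's operating budget."""
    retained = min(collected, cap_share * operating_budget)
    return retained, collected - retained

# Fiscal year 1999 examples from the report (dollars in millions):
sites = {
    "Arches":       (1.4, 0.911),   # retained fees were 156% of budget
    "Grand Canyon": (19.5, 16.8),   # retained fees were 116% of budget
}
for name, (fees, budget) in sites.items():
    kept, pooled = capped_retention(fees, budget)
    print(f"{name}: retains ${kept:.2f}M, ${pooled:.2f}M pooled for redistribution")
```

Under such a rule, a site whose collections fall below the cap would keep everything, preserving the collection incentive, while "cash rich" sites would contribute their excess to an agencywide pool.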
In commenting on a draft of this report, both the Park Service and the Forest Service agreed that the need exists for some flexibility in using fee revenues. However, they expressed concerns about our example of basing fee revenue allocations on operating budgets. The Park Service stated that it favors basing fee revenues on its proposed comprehensive approach that identifies the priority needs of parks, while the Forest Service favors retaining 60 percent or more of fee revenue at the collecting site if the intent were to redistribute funds. Basing fee revenues on operating budgets is only one example of providing flexibility in using fee revenues. We recognize that several alternative approaches may result in a more equitable means of distributing such revenues. Essentially, the fee demonstration program is about raising revenue for the participating sites and using it to maintain and improve the quality of visitor services and the protection of resources at federal recreation sites. So far, the program has successfully raised a significant amount of revenue. However, our analysis indicates that the agencies can do more to improve the quality of the visitor services they provide. Without greater effort to adopt more modern and convenient fee collection practices, such as credit card or Internet sales, visitors to many sites will continue to be faced with limited payment options. Furthermore, unless more is done to eliminate the inconsistent fee situations that now exist, many visitors will continue to be confused about the fees they are being asked to pay. Until these conditions are addressed, the overall quality of the services provided to visitors and the overall quality of a visitor’s experience are diminished. Because each of the four participating agencies manages the program on a decentralized basis, local site managers have considerable latitude in determining how to implement the program. 
Under these circumstances, holding individual site managers accountable for accomplishing the goals of the program is imperative. To do so, it is critical to establish performance expectations and measures that clarify what individual site managers are to accomplish. Yet, even though the program is now over 5 years old, this has not been done. Establishing performance expectations and measures on a site-by-site and agencywide basis would help improve the overall quality of visitor services by, among other things, making clear where improved collection practices should be used and where increased coordination should occur. Furthermore, the agencies have yet to complete systematic evaluations of the program to identify what types of fees and fee collection practices work best. Performing such evaluations and developing knowledge of what the best practices are will enable agency managers to identify the most effective fees and fee collection practices to use on a broader scale should the program be permanently authorized. Finally, although agency managers have been aware of a number of interagency issues for several years, little has been done to resolve them. The most obvious example of this involves the inconsistent application of entrance and user fees among the agencies. The effective resolution of these issues requires agreement, coordination, and consistency among the four participating agencies in two departments. However, no effective interagency mechanism is currently in place to ensure that this is accomplished. Concerning the revenue retention component of the demonstration program, the current legislation provides a financial incentive to establish and operate fee-collection programs, but it does not always provide the agencies with enough flexibility to address the high-priority needs of low-revenue recreation sites. 
In 1998, we suggested that the Congress might wish to consider modifying the current requirement that 80 percent of fee revenue be used in the units generating the revenues to allow for greater flexibility in addressing high-priority needs. We still believe that our earlier suggestion has merit. In order to improve the performance and effectiveness of the program, we recommend that the Secretaries of the Interior and Agriculture require the agency head for each of the participating agencies to develop specific program performance expectations and measurable performance criteria agencywide and for each participating site; develop and implement a process for conducting systematic evaluations of the program to identify which fee designs, collection methods, and coordination practices work best, and disseminate the information to all participating sites; and develop an effective interagency mechanism to oversee and coordinate the program among the four agencies and resolve such interagency issues as developing standard definitions of “entrance” versus “user” fees. If congressional authorization is needed to accomplish this, then the agencies should seek the necessary legislation. We provided the Department of the Interior and the Department of Agriculture copies of a draft of this report for their review and comment. The Department of the Interior, including the three Interior agencies that participate in the fee demonstration program, and the Department of Agriculture generally agreed with the findings and the recommendations in the report. In addition, both departments provided us with additional clarifying and technical comments that we incorporated into the report as appropriate. Comments from the Department of the Interior are included in appendix II, and comments from the Department of Agriculture are included in appendix III. 
As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Chairman of the Subcommittee on National Parks, Historic Preservation, and Recreation of the Senate Committee on Energy and Natural Resources; the Secretary of the Interior; the Secretary of Agriculture; the Director, National Park Service; the Director, Bureau of Land Management; the Director, Fish and Wildlife Service; the Chief of the Forest Service; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others upon request. This report will also be available on GAO’s home page at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report were Lew Adams, Brian Estes, Cliff Fowler, Frank Kovalak, Luann Moy, and Paul Staley. To determine the extent to which the National Park Service, Forest Service, Bureau of Land Management, and Fish and Wildlife Service used innovative fees and fee collection practices and coordinated their approaches in managing the Recreational Fee Demonstration Program, we developed an automated survey instrument that we posted on GAO’s Web site. We sent email messages to the managers at all 365 fee-demonstration program sites which were collecting fees as of September 30, 2000, asking them to fill out the survey to provide us with information about how they were implementing the program at their site. Table 3 shows the total number of demonstration sites contacted and the rate of response. See http://www.gao.gov/cgi-bin/getrpt?gao-02-88SP for the entire questionnaire and the responses from the four agencies. 
During our design of the survey, we conducted two pretests with officials from each of the four agencies, for a total of eight pretests, to ensure that the officials understood the questions and could easily access and complete the questionnaire via GAO’s Web site. After each pretest, we made the necessary revisions to the questionnaire. Once the pretests were completed, the electronic questionnaire was made available to all site managers from March 9 to April 18, 2001, via GAO’s Web site on the Internet. To ensure security and data integrity, we provided each manager with a password that would allow him or her to access and complete a questionnaire. To ensure the consistency and accuracy of our data, we performed edit checks to verify that the appropriate questions on the questionnaire had been answered. Because of the time and cost involved in doing so, we did not independently verify the data that the site managers provided. However, we did review the questionnaire responses from six of the sites we visited (at least one in each agency) to ensure they were consistent with the information we obtained on the fee demonstration program at the time of our visit. To determine what, if any, management improvements can be made to enhance program performance and results, we analyzed the questionnaire survey results relating to the implementation and management of the fee demonstration program and discussed these issues with officials at the four agencies’ headquarters offices and regional or state offices, as well as with individual demonstration site managers. Table 4 identifies the demonstration sites that we visited. We selected individual sites because they were (1) in a previous fee demonstration review and warranted follow-up, (2) identified by agency officials as potential sites to visit, (3) experimenting with new fees or fee collection practices, and/or (4) geographically dispersed. 
At each location we obtained, reviewed, and analyzed supporting documentation, such as laws, regulations, and reports, on the fee demonstration sites. We also discussed recreation fee pricing and related issues with officials of state park agencies in Colorado, Idaho, Ohio, and Washington state and the National Association of State Park Directors. Furthermore, to gain a better understanding of their perspectives on the fee demonstration program, we contacted the following natural resource/recreation interest groups: America Outdoors, American Recreation Coalition, The Mountaineers, the National Parks Conservation Association, the National Park Foundation, and the Natural Resources Defense Council. Finally, to determine whether revenues from the fee demonstration program were being used to meet the agencies’ highest-priority needs, we obtained documentation on the fee revenue collected by the demonstration sites and the types of projects funded with fee revenues. We also discussed with headquarters, regional, and site officials the extent to which fee revenues were being used to meet the highest-priority needs of the sites and agencies. We reviewed Park Service documents to identify sites where fiscal year 1999 retained fee revenues had increased operating budgets by 50 percent or more. We compared these high-revenue sites with other sites within the Park Service that either were not in the fee demonstration program or had retained revenues representing less than 20 percent of their operating budgets (lower-revenue sites). For the Forest Service, we reviewed the 17 fee demonstration sites that generated the most fee revenue. We limited our review of this objective to these two agencies because they generate most of the fee demonstration revenue. We conducted our work from November 2000 through September 2001 in accordance with generally accepted government auditing standards.
Congress authorized the Recreational Fee Demonstration Program to help federal land management agencies provide high-quality recreational opportunities to visitors and protect resources. The program focuses on recreational activities at the following four land management agencies: the National Park Service, the Fish and Wildlife Service, the Bureau of Land Management, and the Forest Service. Under the fee demonstration program, participating agencies can collect fees at several sites and use them to (1) enhance visitor services, (2) address a backlog of needs for repair and maintenance, and (3) manage and protect resources. The agencies applied "entrance fees" for basic admission to an area and "user fees" for specific activities such as camping or launching a boat. Under the law, 80 percent of program revenue must be used at the site where it was collected. The rest may be distributed to other sites that may or may not be participating in the demonstration program. Some of the sites GAO surveyed experimented with innovative fee designs and collection methods, such as reducing fees during off-peak seasons and allowing visitors to use credit cards, but room for additional innovation exists, particularly in the areas of fee collection and coordination. The agencies also need to make improvements in three program management areas: evaluating their managers' performance in administering the fee program, developing information on which fee-collection and coordination practices work best, and resolving interagency management issues.
From its origins as a research project sponsored by the U.S. government, the Internet has grown increasingly important to American businesses and consumers, serving as the host for hundreds of billions of dollars of commerce each year. It is also a critical resource supporting vital services, such as power distribution, health care, law enforcement, and national defense. Similar growth has taken place in other parts of the world. The Internet relies upon a set of functions, called the domain name system, to ensure the uniqueness of each e-mail and Web site address. The rules that govern the domain name system determine which top-level domains (the string of text following the right-most period, such as .gov) are recognized by most computers connected to the Internet. The heart of this system is a set of 13 computers called “root servers,” which are responsible for coordinating the translation of domain names into Internet addresses. Appendix I provides more background on how this system works. The U.S. government supported the implementation of the domain name system for nearly a decade, largely through a Department of Defense contract. Following a 1997 presidential directive, the Department of Commerce began a process for transitioning the technical responsibility for the domain name system to the private sector. After requesting and reviewing public comments on how to implement this goal, in June 1998 the Department issued a general statement of policy, known as the “White Paper.” In this document, the Department stated that because the Internet was rapidly becoming an international medium for commerce, education, and communication, the traditional means of managing its technical functions needed to evolve as well. Moreover, the White Paper stated the U.S. government was committed to a transition that would allow the private sector to take leadership for the management of the domain name system. Accordingly the Department stated that the U.S. 
government was prepared to enter into an agreement to transition the Internet’s name and number process to a new not-for-profit organization. At the same time, the White Paper said that it would be irresponsible for the U.S. government to withdraw from its existing management role without taking steps to ensure the stability of the Internet during the transition. According to Department officials, the Department sees its role as the responsible steward of the transition process. Subsequently, the Department entered into an MOU with ICANN to guide the transition. ICANN has made significant progress in carrying out MOU tasks related to one of the guiding principles of the transition effort—increasing competition. However, progress has been much slower on activities designed to address the other guiding principles: increasing the stability and security of the Internet; ensuring representation of the Internet community in domain name policy-making; and using private, bottom-up coordination. Earlier this year, ICANN’s president concluded that ICANN faced serious problems in accomplishing the transition and needed fundamental reform. In response, ICANN’s Board established an internal committee to recommend options for reform. ICANN made important progress on several of its assigned tasks related to promoting competition. At the time the transition began, only one company, Network Solutions, was authorized to register names under the three publicly available top-level domains (.com, .net, and .org). In response to an MOU task calling for increased competition, ICANN successfully developed and implemented procedures under which other companies, known as registrars, could carry out this function. As a result, by early 2001, more than 180 registrars were certified by ICANN. The cost of securing these names has now dropped from $50 to $10 or less per year. 
Another MOU task called on ICANN to expand the pool of available domain names through the selection of new top-level domains. To test the feasibility of this idea, ICANN’s Board selected seven new top-level domains from 44 applications; by March 2002, it had approved agreements with all seven of the organizations chosen to manage the new domains. At a February 2001 hearing before a Subcommittee of the U.S. House of Representatives, witnesses presented differing views on whether the selection process was transparent and based on clear criteria. ICANN’s internal evaluation of this test was still ongoing when we finished our audit work in May 2002. Several efforts to address the White Paper’s guiding principle for improving the security and stability of the Internet are behind schedule. These include developing operational requirements and security policies to enhance the stability and security of the domain name system root servers, and formalizing relationships with other entities involved in running the domain name system. Recent reports by federally sponsored organizations have highlighted the importance of the domain name system to the stability and security of the entire Internet. A presidential advisory committee reported in 1999 that the domain name system is the only aspect of the Internet where a single vulnerability could be exploited to disrupt the entire Internet. More recently, the federal National Infrastructure Protection Center issued several warnings in 2001 stating that multiple vulnerabilities in commonly used domain name software present a serious threat to the Internet infrastructure. In recognition of the critical role that the domain name system plays for the Internet, the White Paper designated the stability and security of the Internet as the top priority of the transition. 
The MOU tasked ICANN and the Department with developing operational requirements and security policies to enhance the stability and security of the root servers—the computers at the heart of the domain name system. In June 1999, ICANN and the Department entered into a cooperative research and development agreement to guide the development of these enhancements, with a final report expected by September 2000. This deadline was subsequently extended to December 2001 and the MOU between ICANN and the Department was amended to require the development of a proposed enhanced architecture (or system design) for root server security, as well as a transition plan, procedures, and implementation schedule. An ICANN advisory committee, made up of the operators of the 13 root servers and representatives of the Department, is coordinating research on this topic. Although the chairman of the committee stated at ICANN’s November 2001 meeting that it would finish its report by February or March 2002, it had not completed the report as of May 2002. To further enhance the stability of the Internet, the White Paper identified the need to formalize the traditionally informal relationships among the parties involved in running the domain name system. The White Paper pointed out that many commercial interests, staking their future on the successful growth of the Internet, were calling for a more formal and robust management structure. In response, the MOU and its amendments included several tasks that called on ICANN to enter into formal agreements with the parties that traditionally supported the domain name system through voluntary efforts. However, as of May 2002, few such agreements had been signed. ICANN’s Board has approved a model agreement to formalize the relationship between the root server operators and ICANN, but no agreements had been reached with any of the operators as of May 2002. 
Similarly, there are roughly 240 country-code domains (2-letter top-level domains reserved mainly for national governments), such as .us for the United States. As with the root servers, responsibility for these domains was originally given by the Internet’s developers to individuals who served as volunteers. Although the amended MOU tasked ICANN with reaching contractual agreements with these operators, it had reached agreements with only 2 domain operators as of May 2002. Finally, the amended MOU tasked ICANN with reaching formal agreements with the Regional Internet Registries, each of which is responsible for allocating Internet protocol numbers to users in one of three regions of the world. The registries reported that progress was being made on these agreements, though none had been reached as of May 2002. Progress has also been slow regarding the other two guiding principles outlined in the White Paper, which call for the creation of processes to represent the functional and geographic diversity of the Internet, and for the use of private, bottom-up coordination in preference to government control. In order for the private sector organization to derive legitimacy from the participation of key Internet stakeholders, the White Paper suggested the idea of a board of directors that would balance the interests of various Internet constituencies, such as Internet service providers, domain name managers, technical bodies, and individual Internet users. The White Paper also suggested the use of councils to develop, recommend, and review policies related to their areas of expertise, but added that the board should have the final authority for making policy decisions. The Department reinforced the importance of a representative board in a 1998 letter responding to ICANN’s initial proposal.
The Department’s letter cited public comments suggesting that without an open membership structure, ICANN would be unlikely to fulfill its goals of private, bottom-up coordination and representation. ICANN’s Board responded to the Department by amending its bylaws to make it clear that the Board has an “unconditional mandate” to create a membership structure that would elect at-large directors on the basis of nominations from Internet users and other participants. To implement these White Paper principles, the MOU between ICANN and the Department includes two tasks: one relating to developing mechanisms that ensure representation of the global and functional diversity of the Internet and its users, and one relating to allowing affected parties to participate in the formation of ICANN’s policies and procedures through a bottom-up coordination process. In response to these two tasks, ICANN adopted the overall structure suggested by the White Paper. First, ICANN created a policy-making Board of Directors. The initial Board consisted of ICANN’s president and 9 at-large members who were appointed at ICANN’s creation. ICANN planned to replace the appointed at-large Board members with 9 members elected by an open membership to reflect the diverse, worldwide Internet community. Second, ICANN organized a set of three supporting organizations to advise its Board on policies related to their areas of expertise. One supporting organization was created to address Internet numbering issues, one was created to address protocol development issues, and one was created to address domain name issues. Together these three supporting organizations selected 9 additional members of ICANN’s Board, 3 from each organization. Thus, ICANN’s Board was initially designed to reflect the balance of interests described in the White Paper.
Figure 1 illustrates the relationships among ICANN’s supporting organizations and its Board of Directors, as well as several advisory committees ICANN also created to provide input without formal representation on its Board. Despite considerable debate, ICANN has not resolved the question of how to fully implement this structure, especially the at-large Board members. Specifically, in March 2000, ICANN’s Board noted that extensive discussions had not produced a consensus regarding the appropriate method to select at-large representatives. The Board therefore approved a compromise under which 5 at-large members would be elected through regional, online elections. In October 2000, roughly 34,000 Internet users around the world voted in the at-large election. The 5 successful candidates joined ICANN’s Board in November 2000, replacing interim Board members. Four of the appointed interim Board members first nominated in ICANN’s initial proposal continue to serve on the Board. Parallel with the elections, the Board also initiated an internal study to evaluate options for selecting at-large Board members. In its November 2001 report, the committee formed to conduct this study recommended the creation of a new at-large supporting organization, which would select 6 Board members through regional elections. Overall, the number of at-large seats would be reduced from 9 to 6, and the seats designated for other supporting organizations would increase from 9 to 12. A competing, outside study by a committee made up of academic and nonprofit interests recommended continuing the initial policy of directly electing at-large Board members equal to the number selected by the supporting organizations.
This committee also recommended strengthening the at-large participation mechanisms through staff support and a membership council similar to those used by the existing supporting organizations. Because of ongoing disagreement among Internet stakeholders about how individuals should participate in ICANN’s efforts, ICANN’s Board referred the question to a new Committee on ICANN Evolution and Reform. Under the current bylaws, the 9 current at-large Board seats will cease to exist after ICANN’s 2002 annual meeting, to be held later this year. Although the MOU calls on ICANN to design, develop, and test its procedures, the two tasks involving the adoption of the at-large membership process were removed from the MOU when it was amended in August 2000. However, as we have noted, this process was not fully implemented at the time of the amendment because the election did not take place until October 2000, and the evaluation committee did not release its final report until November 2001. When we discussed this amendment with Department officials, they said that they agreed to the removal of the tasks in August 2000 because ICANN had a process in place to complete them. Nearly 2 years later, however, the issue of how to structure ICANN’s Board to achieve broad representation continues to be unresolved and has been a highly contentious issue at ICANN’s recent public meetings. In addition, the amended MOU tasked ICANN with developing and testing an independent review process to address claims by members of the Internet community who were adversely affected by ICANN Board decisions that conflicted with ICANN’s bylaws. However, ICANN was unable to find qualified individuals to serve on a committee charged with implementing this policy. In March 2002, ICANN’s Board referred this unresolved matter to the Committee on ICANN Evolution and Reform for further consideration.
In the summer of 2001, ICANN’s current president was generally optimistic about the corporation’s prospects for successfully completing the remaining transition tasks. However, in the face of continued slow progress on key aspects of the transition, such as reaching formal agreements with the root server and country-code domain operators, his assessment changed. In February 2002, he reported to ICANN’s Board that the corporation could not accomplish its assigned mission on its present course and needed a new and reformed structure. The president’s proposal for reform, which was presented to ICANN’s Board in February, focused on problems he perceived in three areas: (1) too little participation in ICANN by critical entities, such as national governments, business interests, and entities that share responsibility for the operation of the domain name system (such as root server operators and country-code domain operators); (2) too much focus on process and representation and not enough focus on achieving ICANN’s core mission; and (3) too little funding for ICANN to hire adequate staff and cover other expenditures. He added that in his opinion, there was little time left to make necessary reforms before the ICANN experiment came to “a grinding halt.” Several of his proposed reforms challenged some of the basic approaches for carrying out the transition. For example, the president concluded that a totally private sector management model had proved to be unworkable. He proposed instead a “well-balanced public-private partnership” that involved an increased role for national governments in ICANN, including having several voting members of ICANN’s Board selected by national governments.
The president also proposed changes that would eliminate global elections of at-large Board members by the Internet community, reduce the number of Board members selected by ICANN’s supporting organizations, and have about a third of the board members selected through a nominating committee composed of Board members and others selected by the Board. He also proposed that ICANN’s funding sources be broadened to include national governments, as well as entities that had agreements with ICANN or received services from ICANN. In response, ICANN’s Board instructed an internal Committee on ICANN Evolution and Reform (made up of four ICANN Board members) to consider the president’s proposals, along with reactions and suggestions from the Internet community, and develop recommendations for the Board’s consideration on how ICANN could be reformed. The Committee reported back on May 31, 2002, with recommendations reflecting its views on how the reform should be implemented. For example, the committee built on the ICANN president’s earlier proposal to change the composition of the Board and have some members be selected through a nominating committee process, and to create an ombudsman to review complaints and criticisms about ICANN and report the results of these reviews to the Board. In other cases, the committee agreed with conclusions reached by the president (such as the need for increasing the involvement of national governments in ICANN and improving its funding), but did not offer specific recommendations for addressing these areas. The committee’s report, which is posted on ICANN’s public Web site, invited further comment on the issues and recommendations raised in preparation for ICANN’s June 2002 meeting in Bucharest, Romania. The committee recommended that the Board act in Bucharest to adopt a reform plan that would establish the broad outline of a reformed ICANN, so that the focus could be shifted to the details of implementation.
The committee believed that this outline should then be filled in as much as possible between the Bucharest meeting and ICANN’s meeting in Shanghai in late October 2002. As mentioned previously, the Department is responsible for general oversight of work done under the MOU, as well as for determining when ICANN, the private sector entity chosen by the Department to carry out the transition, has demonstrated that it has the resources and capability to manage the domain name system. However, the Department’s public assessment of the status of the transition process has been limited in that its oversight of ICANN has been informal, it has not issued status reports, and it has not publicly commented on specific reform proposals being considered by ICANN. According to Department officials, the Department’s relationship with ICANN is limited to its agreements with the corporation, and its oversight is limited to determining whether the terms of these agreements are being met. They added that the Department does not involve itself in the internal governance of ICANN, is not involved in ICANN’s day-to-day operations, and would not intervene in ICANN’s activities unless the corporation’s actions were inconsistent with the terms of its agreements with the Department. Department officials emphasized that because the MOU defines a joint project, decisions regarding changes to the MOU are reached by mutual agreement between the Department and ICANN. In the event of a serious disagreement with ICANN, the Department would have recourse under the MOU to terminate the agreement. Department officials characterized the Department’s limited involvement in ICANN’s activities as being appropriate and consistent with the purpose of the project: to test ICANN’s ability to develop the resources and capability to manage the domain name system with minimal involvement of the U.S. government.
Department officials said that they carry out their oversight of ICANN’s MOU-related activities mainly through ongoing informal discussions with ICANN officials. They told us that there is no formal record of these discussions. The Department has also retained authority to approve certain activities under its agreements with ICANN, such as reviewing and approving certain documents related to root server operations. This would include, for example, agreements between ICANN and the root server operators. In addition, the Department retains policy control over the root zone file, the “master file” of top-level domains shared among the 13 root servers. Changes to this file, such as implementing a new top-level domain, must first be authorized by the Department. In addition, the Department sends officials to attend ICANN’s public forums and open Board of Directors meetings, as do other countries and Internet interest groups. According to the Department, it does not participate in ICANN decision-making at these meetings but merely acts as an observer. The Department also represents the United States on ICANN’s Governmental Advisory Committee, which is made up of representatives of about 70 national governments and intergovernmental bodies, such as treaty organizations. The Committee’s purpose is to provide ICANN with nonbinding advice on ICANN activities that may relate to concerns of governments, particularly where there may be an interaction between ICANN’s policies and national laws or international agreements. The Department made a considerable effort at the beginning of the transition to create an open process that solicited and incorporated input from the public in formulating the guiding principles of the 1998 White Paper. However, since the original MOU, the Department’s public comments on the progress of the transition have been general in nature and infrequent, even though the transition is taking much longer than anticipated. 
The only report specifically called for under the MOU is a final joint project report to document the outcome of ICANN’s test of the policies and procedures designed and developed under the MOU. This approach was established at a time when it was expected that the project would be completed by September 2000. So far, there has been only one instance when the Department provided ICANN with a formal written assessment of the corporation’s progress on specific transition tasks. This occurred in June 1999, after ICANN took the initiative to provide the Department and the general public with a status report characterizing its progress on MOU activities. In a letter to ICANN, the Department stated that while ICANN had made progress, there was still important work to be done. For example, the Department stated that ICANN’s “top priority” must be to complete the work necessary to put in place an elected Board of Directors on a timely basis, adding that the process of electing at-large directors should be complete by June 2000. ICANN made the Department’s letter, as well as its positive response, available to the Internet community on its public Web site. Although ICANN issued additional status reports in the summers of 2000 and 2001, the Department stated that it did not provide written views and recommendations regarding them, as it did in July 1999, because it agreed with ICANN’s belief that additional time was needed to complete the MOU tasks. Department officials added that they have been reluctant to comment on ICANN’s progress due to sensitivity to international concerns that the United States might be seen as directing ICANN’s actions. The officials stated that they did not plan to issue a status report at this time even though the transition is well behind schedule, but will revisit this decision as the September 2002 termination date for the MOU approaches.
When we met with Department officials in February 2002, they told us that substantial progress had been made on the project, but they would not speculate on ICANN’s ability to complete its tasks by September 2002. The following week, ICANN’s president released his report stating that ICANN could not succeed without fundamental reform. In response, Department officials said that they welcomed the call for the reform of ICANN and would follow ICANN’s reform activities and process closely. When we asked for their views on the reform effort, Department officials stated that they did not wish to comment on specifics that could change as the reform process proceeds. To develop the Department’s position on the effort, they said that they are gathering the views of U.S. business and public interest groups, as well as other executive branch agencies, such as the Department of State; the Office of Management and Budget; the Federal Communications Commission; and components of the Department of Commerce, such as the Patent and Trademark Office. They also said that they have consulted other members of ICANN’s Governmental Advisory Committee to discuss with other governments how best to support the reform process. They noted that the Department is free to adjust its relationship with ICANN in view of any new mission statement or restructuring that might result from the reform effort. Department officials said that they would assess the necessity for such adjustments, or for any legislative or executive action, depending on the results of the reform process. In conclusion, Mr. Chairman, the effort to privatize the domain name system has reached a critical juncture, as evidenced by slow progress on key tasks and ICANN’s current initiative to reevaluate its mission and consider options for reforming its structure and operations. 
Until these issues are resolved, the timing and eventual outcome of the transition effort remain highly uncertain, and ICANN’s legitimacy and effectiveness as the private sector manager of the domain name system remain in question. In September 2002, the current MOU between the Department and ICANN will expire. The Department will be faced with deciding whether the MOU should be extended for a third time, and if so, what amendments to the MOU are needed, or whether some new arrangement with ICANN or some other organization is necessary. The Department sees itself as the responsible steward of the transition, and is responsible for gaining assurance that ICANN has the resources and capability to assume technical management of the Internet domain name system. Given the limited progress made so far and the unsettled state of ICANN, Internet stakeholders have a need to understand the Department’s position on the transition and the prospects for a successful outcome. In view of the critical importance of a stable and secure Internet domain name system to governments, business, and other interests, we recommend that the Secretary of Commerce issue a status report detailing the Department’s assessment of the progress that has been made on transition tasks, the work that remains to be done on the joint project, and the estimated timeframe for completing the transition. In addition, the status report should discuss any changes to the transition tasks or the Department’s relationship with ICANN that result from ICANN’s reform initiative. Subsequent status reports should be issued periodically by the Department until the transition is completed and the final project report is issued. This concludes my statement, Mr. Chairman. I will be pleased to answer any questions that you and other Members of the Subcommittee may have. For questions regarding this testimony, please contact Peter Guerrero at (202) 512-8022. 
Individuals making key contributions to this testimony included John P. Finedore; James R. Sweetman, Jr.; Mindi Weisenbloom; Keith Rhodes; Alan Belkin; and John Shumann. Although the U.S. government supported the development of the Internet, no single entity controls the entire Internet. In fact, the Internet is not a single network at all. Rather, it is a collection of networks located around the world that communicate via standardized rules called protocols. These rules can be considered voluntary because there is no formal institutional or governmental mechanism for enforcing them. However, if any computer deviates from accepted standards, it risks losing the ability to communicate with other computers that follow the standards. Thus, the rules are essentially self-enforcing. One critical set of rules, collectively known as the domain name system, links names like www.senate.gov with the underlying numerical addresses that computers use to communicate with each other. Among other things, the rules describe what can appear at the end of a domain name. The letters that appear at the far right of a domain name are called top-level domains (TLDs) and include a small number of generic names such as .com and .gov, as well as country-codes such as .us and .jp (for Japan). The next string of text to the left (“senate” in the www.senate.gov example) is called a second-level domain and is a subset of the top-level domain. Each top-level domain has a designated administrator, called a registry, which is the entity responsible for managing and setting policy for that domain. Figure 2 illustrates the hierarchical organization of domain names with examples, including a number of the original top-level domains and the country-code domain for the United States. The domain name system translates names into addresses and back again in a process transparent to the end user. 
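The right-to-left hierarchy of domain names described above can be illustrated with a short, self-contained Python sketch. The function name and logic are ours, for illustration only; they are not part of any DNS software:

```python
def domain_hierarchy(name: str) -> list[str]:
    """Return the chain of zones from the top-level domain down to the full name.

    For "www.senate.gov" this yields ["gov", "senate.gov", "www.senate.gov"],
    mirroring how each level of a domain name is a subset of the level to its right.
    """
    labels = name.strip(".").split(".")
    # Build each zone by joining progressively more labels from the right.
    return [".".join(labels[i:]) for i in range(len(labels) - 1, -1, -1)]

print(domain_hierarchy("www.senate.gov"))
# -> ['gov', 'senate.gov', 'www.senate.gov']
```

As the sketch shows, "gov" is the top-level domain, "senate.gov" the second-level domain beneath it, and "www.senate.gov" the full name at the bottom of the hierarchy.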
This process relies on a system of servers, called domain name servers, which store data linking names with numbers. Each domain name server stores a limited set of names and numbers. They are linked by a series of 13 root servers, which coordinate the data and allow users to find the server that identifies the site they want to reach. They are referred to as root servers because they operate at the root level (also called the root zone), as depicted in figure 2. Domain name servers are organized into a hierarchy that parallels the organization of the domain names. For example, when someone wants to reach the Web site at www.senate.gov, his or her computer will ask one of the root servers for help. The root server will direct the query to a server that knows the location of names ending in the .gov top-level domain. If the address includes a sub-domain, the second server refers the query to a third server—in this case, one that knows the address for all names ending in senate.gov. This server will then respond to the request with a numerical address, which the original requester uses to establish a direct connection with the www.senate.gov site. Figure 3 illustrates this example. Within the root zone, one of the servers is designated the authoritative root (or the “A root” server). The authoritative root server maintains the master copy of the file that identifies all top-level domains, called the “root zone file,” and redistributes it to the other 12 servers. Currently, the authoritative root server is located in Herndon, Virginia. In total, 10 of the 13 root servers are located in the United States, including 3 operated by agencies of the U.S. government. ICANN does not fund the operation of the root servers. Instead, they are supported by the efforts of individual administrators and their sponsoring organizations. Table 1 lists the operator and location of each root server.
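The referral chain just described can be sketched as a toy resolver in Python. The server layout and the stored address below are illustrative stand-ins, not actual root zone data:

```python
# Toy model of the DNS hierarchy: each "server" knows only the zones
# one level below it, mirroring the referral chain described in the text.
ROOT = {
    "gov": {                      # server for the .gov top-level domain
        "senate.gov": {           # server for names ending in senate.gov
            "www.senate.gov": "156.33.195.33",  # illustrative address only
        },
    },
}

def resolve(name):
    """Follow referrals from the root down to a numerical address."""
    tld = name.rsplit(".", 1)[-1]
    tld_server = ROOT[tld]                  # root refers the query to the TLD server
    second_level = ".".join(name.split(".")[-2:])
    zone_server = tld_server[second_level]  # TLD server refers the query onward
    return zone_server[name]                # final server answers with the address

print(resolve("www.senate.gov"))  # prints the stored numerical address
```

A real resolver performs the same walk over the network, caching answers along the way; the nested dictionary here simply makes the hierarchy and the referral steps visible.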
Because much of the early research on internetworking was funded by the Department of Defense (DOD), many of the rules for connecting networks were developed and implemented under DOD sponsorship. For example, DOD funding supported the efforts of the late Dr. Jon Postel, an Internet pioneer working at the University of Southern California, to develop and coordinate the domain name system. Dr. Postel originally tracked the names and numbers assigned to each computer. He also oversaw the operation of the root servers, and edited and published the documents that tracked changes in Internet protocols. Collectively, these functions became known as the Internet Assigned Numbers Authority, commonly referred to as IANA. Federal support for the development of the Internet was also provided through the National Science Foundation, which funded a network designed for academic institutions. Two developments helped the Internet evolve from a small, text-based research network into the interactive medium we know today. First, in 1990, the development of the World Wide Web and associated programs called browsers made it easier to view text and graphics together, sparking the interest of users outside of academia. Then, in 1992, the Congress enacted legislation requiring the National Science Foundation to allow commercial traffic on its network. Following these developments, the number of computers connected to the Internet grew dramatically. In response to the growth of commercial sites on the Internet, the National Science Foundation entered into a 5-year cooperative agreement in January 1993 with Network Solutions, Inc., to take over the jobs of registering new, nonmilitary domain names, including those ending in .com, .net, and .org, and running the authoritative root server. At first, the Foundation provided the funding to support these functions. As demand for domain names grew, the Foundation allowed Network Solutions to charge an annual fee of $50 for each name registered.
Controversy surrounding this fee was one of the reasons the United States government began its efforts to privatize the management of the domain name system. Key events in this history include the following:

Working under funding provided by the Department of Defense, a group led by Drs. Paul Mockapetris and Jon Postel creates the domain name system for locating networked computers by name instead of by number.

Dr. Postel publishes specifications for the first six generic top-level domains (.com, .org, .edu, .mil, .gov, and .arpa). By July 1985, the .net domain was added.

President Bush signs into law an act requiring the National Science Foundation to allow commercial activity on the network that became the Internet.

Network Solutions, Inc., signs a 5-year cooperative agreement with the National Science Foundation to manage public registration of new, nonmilitary domain names, including those ending in .com, .net, or .org.

President Clinton issues a presidential directive on electronic commerce, making the Department of Commerce the agency responsible for managing the U.S. government’s role in the domain name system.

The Department of Commerce issues the “Green Paper,” which is a proposal to improve technical management of Internet names and addresses through privatization. Specifically, the Green Paper proposes a variety of issues for discussion, including the creation of a new nonprofit corporation to manage the domain name system.

In response to comments on the Green Paper, the Department of Commerce issues a policy statement known as the “White Paper,” which states that the U.S. government is prepared to transition domain name system management to a private, nonprofit corporation. The paper includes the four guiding principles of privatization: stability; competition; representation; and private, bottom-up coordination.

The Internet Corporation for Assigned Names and Numbers (ICANN) incorporates in California. ICANN’s by-laws call for a 19-member Board with 9 members elected “at-large.”

The Department of Commerce and ICANN enter into an MOU that states the parties will jointly design, develop, and test the methods and procedures necessary to transfer domain name system management to ICANN. The MOU is set to expire in September 2000.

ICANN issues its first status report, which lists ICANN’s progress to date and states that there are important issues that still must be addressed.

ICANN and the Department of Commerce enter into a cooperative research and development agreement to study root server stability and security. The study is intended to result in a final report by September 2000.

ICANN and the Department of Commerce approve MOU amendment 1 to reflect the roles of ICANN and Network Solutions, Inc.

The Department of Commerce contracts with ICANN to perform certain technical management functions related to the domain name system, such as address allocation and root zone coordination.

At a meeting in Cairo, Egypt, ICANN adopts a process for external review of its decisions that utilizes outside experts, who will be selected at an unspecified later date. ICANN also approves a compromise whereby 5 at-large Board members will be chosen in regional online elections.

ICANN issues its second status report, which states that several of the tasks have been completed, but work on other tasks was still under way.

At a meeting in Yokohama, Japan, ICANN’s Board approves a policy for the introduction of new top-level domains.

The Department of Commerce and ICANN approve MOU amendment 2, which deleted tasks related to membership mechanisms, public information, and registry competition and extended the MOU until September 2001. They also agree to extend the cooperative research and development agreement on root server stability and security through September 2001.

ICANN holds worldwide elections to replace 5 of the 9 interim Board members appointed at ICANN’s creation.

At a meeting in California, ICANN selects 7 new top-level domain names: .biz (for use by businesses), .info (for general use), .pro (for use by professionals), .name (for use by individuals), .aero (for use by the air transport industry), .coop (for use by cooperatives), and .museum (for use by museums).
This testimony discusses privatizing the management of the Internet domain name system. This system is a vital aspect of the Internet that works like an automated telephone directory, allowing users to reach Web sites using easy-to-understand domain names like www.senate.gov, instead of the string of numbers that computers use when communicating with each other. The U.S. government supported the development of the domain name system, and, in 1997, the President charged the Department of Commerce with transitioning it to private management. The Department issued a policy statement, called the "White Paper," that defined the four guiding principles for the privatization effort as stability, competition, representation, and private, bottom-up coordination. After reviewing several proposals from private sector organizations, the Department chose the Internet Corporation for Assigned Names and Numbers (ICANN), a not-for-profit corporation, to carry out the transition. In November 1998, the Department entered into an agreement with ICANN in the form of a Memorandum of Understanding (MOU) under which the two parties agreed to collaborate on a joint transition project. Progress on and completion of each task are assessed by the Department on a case-by-case basis, with input from ICANN. The timing and eventual outcome of the transition remain highly uncertain. ICANN has made significant progress in carrying out MOU tasks related to one of the guiding principles of the transition effort--increasing competition--but progress has been much slower in the areas of increasing the stability and security of the Internet; ensuring representation of the Internet community in domain name policy-making; and using private bottom-up coordination. Although the transition is well behind schedule, the Department's public assessment of the progress being made on the transition has been limited for several reasons.
First, the Department carries out its oversight of ICANN's MOU-related activities mainly through informal discussions with ICANN officials. Second, although the transition is past its original September 2000 completion date, the Department has not provided a written assessment of ICANN's progress since mid-1999. Third, although the Department stated that it welcomed the call for the reform of ICANN, it has not yet taken a public position on the reforms being proposed.
OGAC is responsible for establishing overall PEPFAR policy and program strategies and allocating funds from the Global Health and Child Survival account to PEPFAR implementing agencies, primarily CDC and USAID. These agencies execute PEPFAR program activities through agency headquarters offices and in-country interagency teams (PEPFAR country teams) and their implementing partners in the 33 countries and three regions with PEPFAR-funded programs as of fiscal year 2012. OGAC coordinates these activities through its approval of operational plans, which document work plans, budgets, and the anticipated results of HIV/AIDS-related programs. OGAC also provides annual guidance to PEPFAR country teams on how to develop and submit operational plans. For fiscal years 2009 through 2012, OGAC approved country operational plan budgets totaling over $16 billion. Country operational plan activities fall broadly into three areas: prevention, treatment, and care. Other program budget areas are laboratory infrastructure, strategic information, and health systems strengthening. To promote a more sustainable approach to combating HIV/AIDS, characterized by PEPFAR countries’ strengthened capacity, ownership, and leadership, the 2008 Leadership Act authorized the U.S. government to establish partnership frameworks with partner countries. These frameworks are 5-year joint strategic agreements for cooperation between the U.S. government and partner governments to combat HIV/AIDS in the partner country through technical assistance and support for service delivery, policy reform, and coordinated funding commitments. As of February 2013, the U.S. government had signed 22 PEPFAR partnership frameworks. According to OGAC guidance, a key expectation of the frameworks is that partner-country governments will become better prepared to assume primary responsibility for their responses to HIV/AIDS.
Moreover, PEPFAR’s 2012 “blueprint” defines country ownership as the end state in which partner countries lead, manage, and coordinate the efforts needed to ensure that the AIDS response is effective, efficient, and durable. In 2010, WHO revised its treatment guidelines, raising the CD4 (cluster of differentiation antigen 4) count threshold in its laboratory criteria and recommending treatment for all people coinfected with HIV and tuberculosis, thereby expanding the number of people eligible for treatment. (Treatment eligibility is typically measured by the CD4 count in a sample of blood. CD4 cells are a type of white blood cell that fights infection; along with other tests, the CD4 count helps determine the strength of the immune system, indicates the stage of the HIV disease, guides treatment, and predicts the disease’s progress. Prior to 2010, WHO’s guidelines recommended treatment for all people with CD4 counts of less than 200 cells/mm³.) Based on WHO guidelines, each country is expected to establish country-specific guidelines on when to initiate treatment for these groups. UNAIDS estimated at the end of 2011 that, on the basis of WHO’s 2010 guidelines, 15 million people in low- and middle-income countries needed treatment; of these, an estimated 8 million people were on treatment. People who are HIV positive but not yet eligible for treatment generally may seek access to care and support services as well as regular checkups and laboratory monitoring. People eligible for treatment should receive antiretroviral (ARV) drugs as well as checkups and monitoring to assess the effectiveness of treatment. People on treatment also receive various care and support services such as treatment of opportunistic infections including tuberculosis coinfection, nutritional support, and programs to promote adherence to treatment and remaining on treatment (patient retention). People on treatment are expected to take ARV drugs on a continuing, lifelong basis. OGAC guidance establishes a set of indicators for reporting on PEPFAR results.
According to the guidance, these indicators are intended to demonstrate progress in the fight against HIV/AIDS while also promoting responsible program management. In addition, among other things, the guidance establishes a distinction between national results and PEPFAR direct results. The guidance defines national results as achievements of all contributors to a partner country’s HIV/AIDS program and defines PEPFAR direct results as achievements of the PEPFAR program through its funded activities. (See app. II for a summary of OGAC criteria for assessing PEPFAR direct support.) With regard to treatment programs, the guidance instructs PEPFAR country teams providing direct support for treatment services to report to OGAC using the PEPFAR direct indicators. In addition, the guidance directed these country teams, as well as PEPFAR country teams providing technical assistance and other support to build partner- country capacity for managing treatment programs, to report on one national indicator. Table 1 summarizes these indicators. From fiscal year 2010 through 2012, OGAC reported PEPFAR results in terms of three primary indicators: (1) the number of people currently on treatment directly supported by PEPFAR (PEPFAR direct number of people on treatment), (2) the percentages of eligible people receiving treatment in partner countries (national treatment coverage rates), and (3) the percentage of adults and children known to be alive and on treatment 12 months after starting treatment (PEPFAR direct treatment retention rates). However, two of these indicators have limitations that could affect their usefulness. Regarding the first indicator, although the number of people on treatment directly supported by PEPFAR has increased significantly, this indicator alone does not provide complete information needed for assessing PEPFAR’s contributions to partner countries’ treatment programs. 
Regarding the third indicator, 10 PEPFAR country teams reported percentages of adults and children known to be alive and on treatment 12 months after starting treatment that exceeded 80 percent. However, the treatment retention data are not always complete and have other limitations, which OGAC acknowledged and is taking steps to address. In addition to these limitations, OGAC lacks a common set of indicators for monitoring quality assurance efforts. Although OGAC indicated in 2010 that it would establish a common set of indicators to monitor the results of PEPFAR’s efforts to improve the quality of treatment programs, it has not yet done so. Responding to treatment-related requirements in the 2008 Leadership Act, OGAC reports on the number of people currently on treatment directly supported by PEPFAR as a key indicator of program results. This number is calculated by determining the number of people who ever started treatment at facilities where PEPFAR directly supports treatment services, minus patients who died, stopped treatment, transferred out, or have unknown treatment outcomes. (See app. II for a summary of OGAC guidance on determining whether people can be counted as receiving direct services through PEPFAR.) PEPFAR met or exceeded annual targets for this indicator in fiscal years 2004 through 2012. Currently, this indicator is used to track treatment program expansion and to assess progress toward PEPFAR’s target of providing direct support for treatment for 6 million people by the end of fiscal year 2013. For fiscal year 2012, PEPFAR’s target for this indicator was 5 million people. According to data provided by OGAC, the number of people currently on treatment directly supported by PEPFAR has steadily increased from about 67,000 people in 11 countries in fiscal year 2004 to more than 5.1 million in 23 countries in fiscal year 2012. (See table 2.) 
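The arithmetic behind the "currently on treatment" indicator, as described above, can be sketched in Python; the function and parameter names are assumptions for illustration, not OGAC's actual reporting schema:

```python
def currently_on_treatment(ever_started, died, stopped, transferred_out, unknown):
    """PEPFAR direct 'currently on treatment' count: everyone who ever
    started treatment at directly supported facilities, minus patients
    who died, stopped treatment, transferred out, or have unknown
    treatment outcomes."""
    return ever_started - (died + stopped + transferred_out + unknown)

# Illustrative numbers only (not reported PEPFAR figures)
print(currently_on_treatment(1000, 50, 30, 40, 20))  # 860
```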
Furthermore, the number of people on treatment directly supported by PEPFAR was about half of the total number of people on treatment in all low- and middle-income countries, which UNAIDS estimated at 8 million in 2011 (see fig. 1). As PEPFAR has begun to shift resources toward providing technical assistance and other support to help partner countries build their capacity to manage treatment programs, PEPFAR’s direct treatment indicator has increasingly fallen short of reflecting results of PEPFAR’s contributions to partner countries’ treatment programs. Specifically, this indicator does not reflect the expansion of partner countries’ treatment programs that PEPFAR technical assistance and other support for building treatment program management capacity have made possible. These efforts include, among others, activities such as implementing revised treatment guidelines, assisting partner-country district and national health officials with treatment facility oversight, and training and mentoring treatment facility staff. In several PEPFAR partner countries, for example, PEPFAR implementing partners providing direct treatment services have begun transferring stable patients to other treatment providers, including an often expanding number of local public and private health clinics, many of which receive PEPFAR-funded technical assistance and other support. In part because of this PEPFAR assistance, these providers also have begun increasing the number of people they enroll in treatment. In such cases, the PEPFAR direct treatment indicator may not account for these people. In the past, PEPFAR’s direct results often were equivalent to the national number of people receiving services, including treatment, according to PEPFAR’s 2010 Next Generation Indicators Reference Guide. 
However, as PEPFAR increases its efforts to build partner-country capacity to manage treatment programs through technical assistance and other support, PEPFAR’s direct treatment indicator alone does not provide complete information for assessing PEPFAR’s contributions to partner countries’ treatment programs. PEPFAR’s 2010 Next Generation Indicators Reference Guide noted that OGAC was working on a method for deriving PEPFAR direct results from partner-country national-level indicators but had not yet devised one. The guidance stated that the new method would take into account the percentage of PEPFAR funding that contributes to partner-country programs. In addition, OGAC and PEPFAR country teams have considered other factors to determine PEPFAR’s contribution to partner-country treatment programs. For example, in their fiscal year 2011 annual reports to OGAC, seven PEPFAR country teams reported the proportion of all treatment facilities receiving PEPFAR support or the percentage of all patients on treatment directly supported by PEPFAR. Some country teams noted that neither method fully accounted for PEPFAR’s contributions in these countries. As of February 2013, according to a senior OGAC official, OGAC had drafted a method for representing PEPFAR contributions based on proportional financial support to partner-country program results but had not finalized the method or revised its guidance to PEPFAR country teams. Increases in the number of people on treatment have helped improve partner countries’ national treatment coverage rates—generally defined as the percentage of eligible people receiving treatment.
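As a minimal sketch, the coverage-rate definition reduces to a simple ratio; the function name is hypothetical, and the figures reuse the UNAIDS 2011 estimates cited earlier (roughly 8 million people on treatment of 15 million needing it in low- and middle-income countries):

```python
def coverage_rate(on_treatment, eligible):
    """National treatment coverage: share of treatment-eligible people
    who are actually receiving treatment, as a percentage."""
    return 100.0 * on_treatment / eligible

# UNAIDS 2011 global estimates from the text: ~8 million of ~15 million
print(round(coverage_rate(8_000_000, 15_000_000)))  # 53
```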
According to the most current UNAIDS and PEPFAR data, 8 of the 23 countries where PEPFAR directly supported treatment services in 2011 achieved estimated treatment coverage rates of 80 percent or more (see table 3). Although the remaining 15 countries fell short of this target, almost all of these countries have increased their estimated treatment coverage rates since 2009, according to our analysis of UNAIDS data. Since fiscal year 2010, OGAC has required PEPFAR country teams providing direct support for treatment services to track treatment retention rates, an indicator defined as the percentage of adults and children known to be alive and on treatment 12 months after starting treatment. In addition to being an essential indicator of treatment program outcomes—a higher retention rate indicates that more people on treatment are surviving—facilities’ retention rates are used by OGAC and PEPFAR country teams as a proxy indicator of treatment program quality. Of the 23 PEPFAR country teams directly providing treatment services, 20 provided data on this indicator in their fiscal year 2012 reports to OGAC. Ten of the 20 teams reported retention rates at or above 80 percent for facilities where PEPFAR implementing partners support direct treatment services. (See table 4.) However, PEPFAR patient retention data have several key limitations. The data are not always complete. PEPFAR’s reported retention rates reflect only the rates at facilities where PEPFAR directly supported treatment services and that were able to properly collect and report retention data. In addition, in their fiscal year 2012 reports to OGAC, three PEPFAR country teams reported that data for this indicator were not available. Several country teams noted problems in obtaining data from partner-country systems or from all sites where PEPFAR directly supports treatment services. Several country teams also reported concerns about data quality, including limited understanding of how to collect these data.
Methods and definitions vary. For example, PEPFAR country teams accounted for patients transferring to or from treatment facilities differently. In addition, country teams used different definitions to count numbers of patients lost to follow-up (i.e., those with unknown outcomes, including possible death, treatment cessation, or self-transfer to another treatment facility). Under the current WHO definition, a patient may be considered lost to follow-up 90 days after the last scheduled appointment, but the definition may be adjusted depending on the stage of a patient’s treatment. Data on treatment retention are rarely available for key populations, including children and adolescents, injecting drug users, men who have sex with men, and sex workers. These populations are at higher risk for HIV infection and may face specific challenges that make it more difficult to retain them in treatment programs. Few data on long-term retention (after 24 months from the start of treatment) are available. Although OGAC guidance encourages PEPFAR country teams to use data for cohorts of patients to track retention and survival at 24, 36, and 48 months, PEPFAR does not have a retention indicator that extends beyond 12 months. OGAC officials stated that OGAC has taken several steps to improve the fiscal year 2012 PEPFAR treatment retention data. First, OGAC clarified guidance to PEPFAR country teams regarding how to calculate and report on this indicator. Second, PEPFAR implementing agencies conducted data quality assessments in three PEPFAR countries. As a result, according to OGAC officials, three more PEPFAR country teams were able to report on the treatment retention indicator in fiscal year 2012 than in fiscal year 2011. Furthermore, OGAC officials stated that data completeness is a priority for fiscal year 2013 and that they will help PEPFAR country teams with reporting retention data.
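A minimal sketch of the 12-month retention indicator and the 90-day lost-to-follow-up rule described above; the record structure and function names are assumed for illustration:

```python
def retention_rate_12mo(cohort):
    """Percentage of a starting cohort known to be alive and on
    treatment 12 months after starting treatment."""
    retained = sum(1 for patient in cohort if patient["status"] == "on_treatment")
    return 100.0 * retained / len(cohort)

def is_lost_to_follow_up(days_since_last_scheduled_visit, threshold_days=90):
    """Current WHO definition: a patient may be considered lost to
    follow-up 90 days after the last scheduled appointment (the
    threshold may be adjusted depending on the stage of treatment)."""
    return days_since_last_scheduled_visit > threshold_days

# Illustrative four-patient cohort: two retained, one died, one lost
cohort = [
    {"status": "on_treatment"},
    {"status": "on_treatment"},
    {"status": "died"},
    {"status": "lost_to_follow_up"},
]
print(retention_rate_12mo(cohort))  # 50.0
```

Because country teams used different lost-to-follow-up definitions, the same cohort can yield different retention rates depending on the threshold chosen, which is one reason the reported rates are difficult to compare across countries.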
In addition to routinely reported information on treatment program results, various studies of treatment programs, although they may not represent national treatment program conditions and may be limited by incomplete data, provide information that can be useful for improving the results of treatment programs. Appendix III contains examples of information provided by studies we identified. PEPFAR country teams engage in a number of activities, often characterized as technical assistance or support, whose aim is to assure the quality of treatment programs. Seventeen of the 22 countries with PEPFAR partnership frameworks identified efforts to improve treatment program quality as a key goal shared by partner countries and PEPFAR. However, OGAC has not established a common set of indicators to assess results of these activities. We identified several examples of quality assurance activities, such as developing and implementing partner-country quality improvement strategies, including roles and responsibilities for health facility supervision; establishing treatment site-level performance improvement plans, including quality improvement council meetings to identify solutions to problems affecting service quality; and training health facility managers and staff to track and use facility-level performance data. OGAC’s 2010 Next Generation Indicators Reference Guide sought to emphasize program quality indicators to help strengthen partner countries’ HIV/AIDS programs. To this end, the guidance added patient retention rate to the list of essential, reported treatment program indicators. 
The same guidance also recommended tracking data for several indicators of treatment program quality—for example, the number of patients with a documented CD4 or viral load test, the number of patients who have attended the recommended number of clinical visits, and the percentage of health facilities providing treatment using CD4 monitoring in line with partner-country guidelines or policies. In the three countries we visited, we found that PEPFAR implementing partners were using a wide range of indicators to report on their quality assurance activities. Examples of such indicators include percentages of (1) HIV-positive patients assessed for treatment eligibility, (2) patients on treatment who adhere to the dosing instructions and other requirements for taking ARV medicines, and (3) patients with good clinical outcomes. However, even where indicators in the three countries were generally the same, definitions varied slightly. For example, in reporting on the indicator of appointments kept, three PEPFAR implementing partners—all providing quality assurance assistance to treatment facilities in the same country—used two different definitions. One implementing partner reported on the percentage of HIV-positive patients who kept their appointments in the previous month or quarter, while the other two implementing partners reported on the percentage of HIV-positive patients who missed their appointments. OGAC’s 2010 Next Generation Indicators Reference Guide stated that additional guidance on quality assurance indicators for PEPFAR implementing agencies would be forthcoming. However, as of February 2013, OGAC has not issued this additional guidance. 
The lack of PEPFAR-wide guidance on quality assurance indicators and definitions inhibits development of standardized measurement tools used by PEPFAR country teams to monitor treatment facilities supported by PEPFAR and ultimately track the results of PEPFAR’s quality assurance efforts, including technical assistance and other support. To track the results of partner-country treatment programs and to help ensure that they are effective, PEPFAR supports countries’ monitoring and evaluation (M&E) systems. While some progress has been made in expanding and upgrading them, these systems often are unable to produce timely and complete treatment data, limiting their usefulness for managing programs and reporting program results. PEPFAR country teams fulfill many M&E functions at facilities where PEPFAR supports direct treatment services, and they assist partner countries in carrying out their M&E responsibilities by providing staff, training, and technical assistance and other support. Nevertheless, partner countries’ M&E systems continue to face a number of weaknesses. Consequently, PEPFAR country teams primarily use data drawn from PEPFAR-specific systems to report on PEPFAR treatment program results. OGAC has not yet issued guidance needed to support PEPFAR’s continued progress in transitioning to using partner-country M&E systems for program management and results reporting. Among other activities, OGAC technical guidance to PEPFAR country teams calls for the development of partner countries’ M&E systems. Fully functioning M&E systems are essential for effective patient monitoring and patient management and also generate data that PEPFAR country teams and partner countries need to track treatment program results. All countries and regions with PEPFAR partnership frameworks identified strengthening M&E systems as a key goal shared by partner countries and PEPFAR.
To support this goal, according to fiscal year 2012 PEPFAR operational plans, PEPFAR country teams provide, among other things, technical assistance, training, and staff to treatment facilities, district health offices, and national ministries of health to collect, aggregate, and report treatment program information through partner countries’ M&E systems (see sidebar). This support included recruiting, mentoring, and training health facility staff and district health officials responsible for collecting, analyzing, aggregating, and reporting data through the country’s M&E system. PEPFAR country teams were also assisting partner countries with conducting surveys and surveillance, such as those needed to estimate the number of people with advanced HIV infection. In addition, according to OGAC officials, as of February 2013, 15 PEPFAR partner countries have expressed interest in a single, open-source data system, and several of these countries have started implementation. This health information software tool supports collecting, analyzing, and reporting national health data, including data on treatment programs. According to these officials, PEPFAR will support partner-country expansion of the software tool to include PEPFAR and Global Fund reporting in two additional PEPFAR partner countries. PEPFAR country teams have documented weaknesses in a number of partner countries’ M&E systems, which are at various stages of implementation. Our review of PEPFAR country teams’ fiscal year 2011 annual reports to OGAC and fiscal year 2012 operational plans identified two key challenges for partner countries’ M&E systems. Partner countries’ M&E systems often are unable to produce complete and timely data, thus limiting their usefulness for patient, clinic, or program management. 
In their 2011 annual reports to OGAC, 12 PEPFAR country teams cited timeliness of partner-country treatment program data as a challenge, often because partner-country reporting time frames differed from the U.S. government fiscal year. In addition, three PEPFAR country teams noted that data provided by partner countries’ M&E systems were incomplete—not all provinces or treatment facilities reported data into the system. Further, one PEPFAR country team was not able to collect data on the number of patients currently receiving treatment, because the partner country provided data only on the cumulative number of patients who had ever started treatment; the country team noted that this had the likely effect of inflating the partner country’s treatment coverage rate. Furthermore, partner-country health officials, often lacking technical capacity, do not always use available data for decision making. Our review of the PEPFAR country teams’ operational plans found that 23 teams cited the need to improve data use at treatment facilities or other levels of the health care system. For example, one country team reported that partner-country health officials tended to focus on data collection for reporting rather than for policy, planning, and program decision making. Another country team reported that lack of data reporting by treatment facilities limited analysis of treatment patients across facilities, and a third country team noted that human resource limitations and weak research capacity impeded use of M&E data. In addition, studies of PEPFAR partner countries’ M&E systems that we identified provide further detail; appendix III summarizes this information. Because of the limitations associated with the data from national M&E systems, PEPFAR country teams primarily use data drawn from systems created specifically for reporting PEPFAR treatment program results. 
OGAC’s 2010 Next Generation Indicators Reference Guide states that PEPFAR country teams may need to rely on these systems in the short term but should continue working to integrate these systems into partner countries’ M&E systems. Our review of PEPFAR country teams’ operational plans found that PEPFAR country teams maintained PEPFAR program performance management systems to routinely collect, compile, and analyze patient monitoring and management data from the health facilities where PEPFAR directly supports treatment services. In addition, PEPFAR country teams use these data to generate their semiannual and annual reports to OGAC. PEPFAR country teams supplement information from their own systems with data from partner countries’ M&E systems, including numbers of patients on treatment and rates of treatment coverage. OGAC’s 2010 Next Generation Indicators Reference Guide recommends several indicators for tracking partner-country outcomes related to strengthening health systems, such as the existence of M&E plans and the percentage of health facilities with record-keeping systems for monitoring HIV/AIDS programs. In addition, OGAC’s technical guidance to country teams for developing partner countries’ M&E systems identifies a number of key efforts, such as developing M&E leadership and organizations, improving the policy environment, and ensuring the advancement and sustainability of technical capacity in PEPFAR partner countries. The technical guidance states that these efforts should support national capacity building. However, OGAC has not issued guidance identifying minimum standards that data generated by partner countries’ M&E systems should meet—such as standards related to completeness and timeliness—in order for PEPFAR country teams to assess, together with partner countries and other donors, whether the systems are ready for use in PEPFAR program management and results reporting. 
The lack of such standards leaves uncertain the point at which partner-country M&E systems are mature enough for PEPFAR to rely on them. This uncertainty is likely to delay achievement of PEPFAR’s goal of using partner-country M&E systems to generate data for PEPFAR treatment program management and reporting. Ensuring that PEPFAR treatment programs continue to improve and to operate as effectively as possible requires careful, complex monitoring and evaluation (M&E), not only of program results but also of quality assurance efforts. Available data on PEPFAR program results show some progress. In particular, PEPFAR and UNAIDS data indicate a steady increase in the number of people on treatment, improved treatment coverage rates, and high rates of patient retention at many facilities. However, PEPFAR’s contributions to expansion of partner countries’ treatment programs are not fully reflected in its results data because OGAC’s current method for deriving PEPFAR’s direct treatment indicator does not fully account for PEPFAR’s efforts to improve partner countries’ capacity to manage their treatment programs. This limits the usefulness of PEPFAR’s direct treatment indicator for assessing progress toward expanding partner-country treatment programs. Furthermore, OGAC has not yet established a common set of indicators to measure the results of PEPFAR technical assistance and other support intended to improve the quality of treatment programs. Lacking a standard set of quality assurance indicators, PEPFAR is limited in its ability to track the results of PEPFAR’s quality assurance efforts. As PEPFAR continues to shift responsibility for managing treatment programs to partner countries, those countries will need robust M&E systems to generate the data that are indispensable for ensuring effective treatment and efficient program management. 
PEPFAR has dedicated resources specifically for these efforts, but problems with untimely and incomplete data collection, as well as with data use, persist. As a result, PEPFAR has to rely on the M&E systems its implementing partners have developed rather than on country-managed systems for collecting and reporting results data. In its guidance, OGAC has not yet established minimum standards that data generated by partner countries’ M&E systems should meet in order for PEPFAR country teams to assess these systems. Without such standards, uncertainty remains as to when partner-country M&E systems will be ready to be integrated with PEPFAR systems, thus delaying the achievement of PEPFAR’s goal of using partner-country M&E system data for PEPFAR treatment program management and reporting. To ensure the outcomes and quality of treatment programs supported by PEPFAR, we recommend that the Secretary of State direct the U.S. Global AIDS Coordinator to take the following three actions in collaboration with PEPFAR implementing agencies: develop a method that better accounts for PEPFAR’s contributions to partner-country treatment programs; establish a common set of indicators to measure the results of treatment program quality improvement efforts; and establish a set of minimum standards for data generated by partner countries’ M&E systems, to enable PEPFAR country teams to assess those systems’ readiness for use in treatment program management and reporting. We provided a draft of this report to State, USAID, and CDC. Responding jointly with CDC and USAID, State provided written comments (see app. IV for a copy of these comments). State and CDC also provided technical comments and supplementary information relating to our findings and recommendations. In response to the technical comments, we incorporated changes to the draft report, as appropriate. 
After reviewing the supplementary information, we clarified our findings and recommendations relating to PEPFAR’s direct treatment indicator and its support for partner countries’ M&E systems. In its written comments, State generally agreed with our three recommendations. First, State affirmed that it supports our recommendation to develop a method for fully accounting for PEPFAR’s contributions to partner-country treatment programs. Observing that PEPFAR’s direct treatment indicator was intended to capture only essential components of direct treatment services, State noted that PEPFAR has recently begun an effort to revise its monitoring, evaluation, and reporting framework, including an expansion of indicators that would allow for implementing partners to report on their efforts to help partner countries build capacity and develop sustainable treatment programs. Second, State also agreed with the finding leading to our recommendation regarding the development of indicators to measure the results of treatment program quality improvement efforts. State cited the need for a harmonized PEPFAR strategy on treatment quality, including key indicators, and noted steps it is taking to develop such a strategy. In addition, stressing that treatment retention indicators are relatively new and difficult to operationalize, State detailed steps PEPFAR is taking to help improve treatment retention measurement, evaluation, and performance. Third, State noted that PEPFAR supports the strengthening of partner country reporting systems and works with partner countries to help them develop such systems, both to support national programs as well as to provide data for PEPFAR and other donors. As part of these efforts, PEPFAR also works with WHO and the Global Fund on system standardization and standards for data exchange. 
State specifically identified an indicator developed by WHO for documenting data completeness and timeliness and stated that this indicator can be used to monitor efforts to develop these reporting systems. We agree that such an indicator could be useful for PEPFAR country teams trying to determine when partner-country M&E systems are ready to be integrated with PEPFAR systems. However, we note that PEPFAR guidance does not instruct country teams to use the WHO indicator or any other indicator for this purpose. We believe that specifying one or more indicators for PEPFAR country teams to use is important to ensure a consistent approach to systems integration across the program. Doing this would emphasize for country partners the importance of harmonizing M&E systems for mutually beneficial purposes. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of State and the U.S. Global AIDS Coordinator. The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or [email protected] or contact Marcia Crosse at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. In this report, we examine the President’s Emergency Plan for AIDS Relief’s (PEPFAR) (1) treatment program results and how the Department of State’s (State) Office of the U.S. Global AIDS Coordinator (OGAC) measures them and (2) assistance to improve partner countries’ monitoring and evaluation (M&E) systems. 
To address both of these objectives, we collected and analyzed information from the following sources: interviews and fieldwork; guidance documents and past GAO work; PEPFAR partnership frameworks, operational plans, and performance reports; and studies of treatment programs. We interviewed officials from OGAC, the Department of Health and Human Services’ (HHS) Centers for Disease Control and Prevention (CDC), and the U.S. Agency for International Development (USAID) in Washington, D.C., and Atlanta, Georgia. We also conducted fieldwork in three PEPFAR partner countries—Kenya, South Africa, and Uganda—in June 2012 to obtain information on PEPFAR efforts to support partner-country treatment program outcomes and quality as well as challenges faced by PEPFAR implementing agencies and stakeholders. We selected these countries based on program size, availability of cost data, travel logistics, and other factors. We interviewed U.S. agency officials, representatives of key implementing partners, and partner government health officials and conducted visits to selected treatment facilities in these countries. We reviewed guidance provided to PEPFAR country teams by OGAC in collaboration with CDC, USAID, and other PEPFAR implementing agencies, as well as previous GAO work, to identify criteria related to requirements for collecting, validating, and reporting treatment program results and their measures. These guidance documents also provided criteria for examining PEPFAR assistance to improve partner countries’ M&E systems. Guidance documents included OGAC’s 2010 Next Generation Indicators Reference Guide, issued in August 2009; fiscal years 2011 and 2012 operational plan guidance and associated technical considerations; and annual and semiannual performance reporting guidance. We also consulted relevant guidance and reports issued by international organizations such as the United Nations Joint Programme on HIV/AIDS (UNAIDS) and the World Health Organization (WHO). 
These included UNAIDS’s Global Report: UNAIDS Report on the Global AIDS Epidemic, 2012 and Organizing Framework for a Functional National HIV Monitoring and Evaluation System, as well as WHO’s 2010 Antiretroviral Therapy for HIV Infection in Adults and Adolescents: Recommendations for a Public Health Approach and 2006 Patient Monitoring Guidelines for HIV Care and Antiretroviral Therapy. We also reviewed WHO’s 2011 Retention in HIV Programmes: Defining the Challenges and Identifying Solutions. We examined PEPFAR operational plans, performance reports, and partnership frameworks. First, to identify key PEPFAR treatment program goals—including those related to treatment program supervision, M&E systems, and quality assurance—we reviewed partnership framework agreements between the United States and 22 PEPFAR partner countries and regions. As of February 2013, the United States had signed partnership frameworks with the following 22 partner countries and regions: Angola, Botswana, the Caribbean region, the Central American region, the Democratic Republic of the Congo, the Dominican Republic, Ethiopia, Ghana, Haiti, Kenya, Lesotho, Malawi, Mozambique, Namibia, Nigeria, Rwanda, South Africa, Swaziland, Tanzania, Ukraine, Vietnam, and Zambia. Next, to identify ongoing and planned PEPFAR activities, we examined PEPFAR country teams’ operational plans. We also reviewed the results data and narrative descriptions that country teams report to OGAC regarding (1) the number of adults and children with advanced HIV infection who are currently receiving treatment directly supported by PEPFAR (PEPFAR direct number of people currently on treatment); (2) the percentage of adults and children with advanced HIV infection who are receiving treatment in partner countries’ treatment programs (national coverage rates); and (3) the percentage of adults and children known to be alive and on treatment 12 months after starting treatment directly supported by PEPFAR (PEPFAR direct treatment retention rates). 
We reviewed and summarized the 23 country teams’ narrative descriptions accompanying each of these three PEPFAR indicators for information related to treatment program results and assistance to improve PEPFAR partner countries’ M&E systems. In the case of the number of people on treatment directly supported by PEPFAR, to show changes over time, we analyzed aggregate data for fiscal years 2004 through 2012 provided by OGAC, which it derived from PEPFAR country teams’ semiannual and annual reports. We previously have reviewed OGAC guidance and procedures for collecting, analyzing, and assessing these data. In addition, to identify factors considered by PEPFAR country teams when reporting on PEPFAR treatment program results, we reviewed the narrative descriptions provided in these PEPFAR country teams’ fiscal year 2011 annual reports. On the basis of these reviews, as well as interviews with OGAC officials, we determined that the PEPFAR data on number of people on treatment were sufficiently reliable for reporting totals rounded to the nearest hundred. In addition, to illustrate PEPFAR’s contribution to the number of people on treatment in low- and middle-income countries, we obtained data on the numbers of people on treatment in all low- and middle-income countries from the UNAIDS website (www.unaids.org) for calendar years 2004 through 2011. UNAIDS sets international standards for these data, which it collects from national governments and uses to report on estimated numbers of people on treatment; therefore, we deemed them sufficiently reliable for the purposes of our reporting. Data on the number of people on treatment in low- and middle-income countries were not available for 2012 at the time of this report’s publication. To be able to illustrate PEPFAR’s contribution to the number of people on treatment in low- and middle-income countries in 2012, we derived a rough estimate based on changes in PEPFAR’s and UNAIDS’s reported numbers of people on treatment from 2010 to 2011. 
We observed that the number of people on treatment directly supported by PEPFAR made up about half of the increase in the number of people on treatment in low- and middle-income countries in these years. For fiscal years 2011 to 2012, the number of people on treatment directly supported by PEPFAR increased by about 1.2 million; if we had assumed that this number continued to make up half of the increase in low- and middle-income countries in these years, the estimated increase would have been 2.4 million. Thus, our estimate of an increase of about 1.5 million people on treatment from 2011 to 2012—leading to an estimate of 9.5 million people on treatment in low- and middle-income countries in 2012—represents a conservative projection among the possible scenarios. In the case of partner countries’ treatment coverage rates, we analyzed data provided in UNAIDS’s 2012 Global Report: UNAIDS Report on the Global AIDS Epidemic as well as data available for 2009 through 2011 on UNAIDS’s website (www.unaids.org). UNAIDS sets international standards for these data, which it collects from national governments and uses to report on national treatment coverage rates. On the basis of review of treatment coverage rates reported by PEPFAR country teams to OGAC, as well as discussions with OGAC officials, we determined that UNAIDS’s treatment coverage data were the most complete and current data available and were thus sufficiently reliable for the purposes of our reporting. In addition to information provided by PEPFAR country teams and reported by OGAC, we also drew on information from a selected set of studies of treatment programs and M&E systems in PEPFAR partner countries. We used the information from these studies to identify illustrative examples of factors affecting treatment program outcomes and M&E system strengths and weaknesses. Appendix III contains examples of information provided by studies we reviewed. 
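The estimation described above amounts to a short calculation, sketched below in Python. The variable names are ours, and the 2011 baseline of 8.0 million is inferred from the figures in the text (9.5 million minus the 1.5 million estimated increase) rather than stated there directly.

```python
# Illustrative reconstruction of the 2012 estimate (all figures in millions,
# taken from the text; the 2011 baseline is implied, not stated).

pepfar_direct_increase = 1.2       # FY2011-FY2012 increase directly supported by PEPFAR
baseline_2011 = 9.5 - 1.5          # implied 2011 total in low- and middle-income countries

# Upper bound: assume PEPFAR again accounted for half of the total increase,
# as was observed for 2010 to 2011.
half_share_increase = pepfar_direct_increase / 0.5

# The conservative estimate of 1.5 million lies between the PEPFAR-only floor
# (1.2 million) and the half-share upper bound (2.4 million).
estimated_increase = 1.5
estimate_2012 = baseline_2011 + estimated_increase

assert pepfar_direct_increase <= estimated_increase <= half_share_increase
print(f"Estimated 2012 total: {estimate_2012:.1f} million")  # prints 9.5 million
```

The estimate is conservative in the sense that it assumes PEPFAR's share of the total increase grew above one-half rather than holding constant.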
First, although we did not intend to develop an exhaustive list of all available studies, we took a number of steps to identify relevant and up-to-date studies assessing treatment programs and M&E systems in PEPFAR countries. These studies included: (1) studies collected under our previous review of evaluations of PEPFAR programs; (2) studies provided by PEPFAR country teams in the three countries we visited; (3) articles published in special issues of Health Affairs and the Journal of Acquired Immune Deficiency Syndromes dedicated to PEPFAR programs, both published in 2012; and (4) a citations review for PEPFAR public health evaluations, evaluations provided by CDC headquarters, and relevant articles appearing in Health Affairs and the Journal of Acquired Immune Deficiency Syndromes from 2009 through 2012. We identified more than 200 studies addressing our objectives. To perform our review, we first reviewed the studies’ titles and abstracts to categorize each study according to one or more topics, such as M&E systems or treatment program retention. Focusing on studies that fell into categories related to M&E systems and to treatment program retention and patient-level outcomes, we then reviewed key sections—such as findings and conclusions—of each study to identify common themes. We also targeted studies that addressed topics covered in the body of the report, such as factors affecting treatment program retention and loss to follow-up and strengths and weaknesses of M&E systems. Having identified subsets of relevant studies, we reviewed them in more depth to verify our initial judgments about the studies’ findings and to select illustrative examples of the themes we had identified. This additional review included the development of narrative work papers synthesizing and categorizing more detailed findings and results from the selected studies. We then presented these findings and examples in appendix III. 
Our analysis is not a summary of the full set of studies we identified in the initial phase but rather a presentation of several key, high-level results derived from a select set of studies in both areas, supported with citations to illustrative studies. We conducted this performance audit from May 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Department of State’s (State) Office of the U.S. Global AIDS Coordinator’s (OGAC) Next Generation Indicators Reference Guide, effective beginning in fiscal year 2010 and updated in February 2013, provides a list of indicators for setting targets, monitoring results, and reporting to OGAC. The guidance distinguishes between President’s Emergency Plan for AIDS Relief (PEPFAR) direct indicators and national indicators. PEPFAR direct indicators describe the results of PEPFAR programs through its funded activities. National indicators describe the achievements of all contributors—including public and private sector organizations and other donors—to a partner country’s HIV/AIDS programs. Furthermore, OGAC’s guidance provides a checklist for determining whether a site-specific service supported by PEPFAR can be counted in reporting on PEPFAR direct indicators. According to the guidance, to be characterized as PEPFAR direct, an activity must fulfill at least one criterion in each of the two panels shown in table 5. If an activity meets at least one of each set of criteria, PEPFAR support is assumed to be direct and to likely provide sufficient support for claiming 100 percent of the site-specific results. 
If an activity meets a criterion in only one of the panels, PEPFAR support may be insufficient for claiming 100 percent of site-specific results. In that case, country teams must (1) determine whether there is sufficient justification to claim the results as direct and, if there is, (2) justify the method used to estimate the appropriate fraction of the total commensurate with PEPFAR support to the site and (3) document the estimation procedures used. In addition to analyzing information provided by President’s Emergency Plan for AIDS Relief (PEPFAR) country teams and reported by the Department of State’s (State) Office of the U.S. Global AIDS Coordinator (OGAC), we also performed reviews of a selected set of studies. Although these studies may not represent national treatment program conditions and may be limited by incomplete data, they provide additional information related to treatment retention and patient-level health outcomes, as well as strengths and weaknesses of partner-country monitoring and evaluation (M&E) systems. (See app. I for information on how we identified and used these studies for the purposes of our reporting.) Treatment retention rates reported by PEPFAR country teams indicate some progress but have certain limitations related to data completeness and varying methods and definitions. Treatment program studies—which use treatment facility data not routinely reported by PEPFAR country teams to OGAC—also provide information about treatment retention, adherence, and patient loss-to-follow-up. Several studies, although not representative of national program conditions, identified factors associated with patient loss-to-follow-up, such as advanced illness and personal economics. 
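The two-panel checklist for PEPFAR direct indicators, described earlier in this appendix, amounts to a simple decision rule. A minimal sketch follows; the function name and boolean inputs are our own shorthand, since the actual panel criteria appear in table 5 of the report and are not reproduced here.

```python
def classify_pepfar_support(meets_panel_one: bool, meets_panel_two: bool) -> str:
    """Illustrative sketch of OGAC's two-panel checklist for one site-specific
    activity; the inputs stand in for 'at least one criterion met in panel N'."""
    if meets_panel_one and meets_panel_two:
        # At least one criterion met in each panel: support is assumed direct
        # and likely sufficient to claim 100 percent of site-specific results.
        return "direct: claim 100 percent of site-specific results"
    if meets_panel_one or meets_panel_two:
        # Only one panel satisfied: the country team must justify claiming the
        # results as direct, estimate the fraction commensurate with PEPFAR
        # support to the site, and document the estimation procedure.
        return "indeterminate: justify, estimate fraction, and document"
    return "not direct: results may not be claimed"
```

In the middle case the function only flags the activity; the justification, estimation, and documentation steps remain a country-team judgment.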
They also noted measures that could potentially reduce the number of patients lost to follow-up and thus increase retention rates, such as starting treatment earlier, shortening the distance to health care facilities, expanding personal outreach by community health volunteers, and using mobile phone messaging for patient follow-up. In addition, several studies that we reviewed identified interventions found to be successful for reducing the occurrence of drug resistance and treatment failure, including (1) increased outreach to patients to increase adherence to medications and (2) effective supervision and training of lower-level health facility staff. Other studies identified factors that may limit the positive health effects of treatment, such as poor access to nutrition and clean water. Such findings can serve as the basis for adopting new treatment program policies and management approaches. Incomplete data may limit studies’ ability to fully examine patient-level health outcomes and the factors that affect them. For example, we identified one study that reported some positive adult treatment program outcomes, based on a nationally representative sample of treatment facilities in Rwanda, but noted incomplete data on patient weight and CD4 cell count, among other limitations. Another study on the prevalence of drug-resistant HIV strains noted that very few published data on drug resistance are available, particularly for the HIV strains that tend to be prevalent in low- and middle-income countries. Devising mitigation strategies for drug resistance can be difficult without these data. Likewise, one study noted a lack of data on potential adverse effects of treatment on the growth and development of pediatric patients. Studies of partner countries’ M&E systems note that some progress has been made in expanding and upgrading these systems. The studies cite increased use of electronic systems for health information management and reporting. 
For example, one study found that electronic systems, where in use, enabled health facilities to report on health indicators more easily as well as to support patient and facility management. Nevertheless, these studies also found that partner countries’ M&E systems are unable to produce timely and complete treatment data, thus limiting their usefulness for patient, clinic, or program management. These studies cited several challenges, including the following: Human resource limitations can hinder the full use of M&E system data. For example, one study noted that health facility staff lacked trained data managers to regularly analyze basic data. Data may not always be analyzed when reported from the facility to the district, may not be disseminated properly, or may not be used in decision making. For example, one study noted limited access to data and lack of capacity as factors negatively affecting data use. Health facility staff may prioritize data reporting to districts, national ministries, and donors over data use, including use for improving the quality of treatment services. For example, one study found that data were used for decision making at 38 percent of health facilities reviewed and in 44 percent of districts reviewed. In addition to the contacts named above, Jim Michels (Assistant Director), Todd M. Anderson, David Dayton, Brian Hackney, and Grace Lui made key contributions to this report. In addition, the following GAO staff provided technical assistance and other support: Sada Aksartova, Chad Davenport, David Dornisch, Lorraine Ettaro, Katherine Forsyth, Kay Halpern, Erika Navarro, and Jane Whipple. President’s Emergency Plan for AIDS Relief: Per-Patient Costs Have Declined Substantially, but Better Cost Data Would Help Efforts to Expand Treatment. GAO-13-345. Washington, D.C.: March 15, 2013. President’s Emergency Plan for AIDS Relief: Agencies Can Enhance Evaluation Quality, Planning, and Dissemination. GAO-12-673. 
Washington, D.C.: May 31, 2012. President’s Emergency Plan for AIDS Relief: Program Planning and Reporting. GAO-11-785. Washington, D.C.: July 29, 2011. Global Health: Trends in U.S. Spending for Global HIV/AIDS and Other Health Assistance in Fiscal Years 2001-2008. GAO-11-64. Washington, D.C.: October 8, 2010. President’s Emergency Plan for AIDS Relief: Efforts to Align Programs with Partner Countries’ HIV/AIDS Strategies and Promote Partner Country Ownership. GAO-10-836. Washington, D.C.: September 20, 2010. President’s Emergency Plan for AIDS Relief: Partner Selection and Oversight Follow Accepted Practices but Would Benefit from Enhanced Planning and Accountability. GAO-09-666. Washington, D.C.: July 15, 2009. Global HIV/AIDS: A More Country-Based Approach Could Improve Allocation of PEPFAR Funding. GAO-08-480. Washington, D.C.: April 2, 2008. Global Health: Global Fund to Fight AIDS, TB and Malaria Has Improved Its Documentation of Funding Decisions but Needs Standardized Oversight Expectations and Assessments. GAO-07-627. Washington, D.C.: May 7, 2007. Global Health: Spending Requirement Presents Challenges for Allocating Prevention Funding under the President’s Emergency Plan for AIDS Relief. GAO-06-395. Washington, D.C.: April 4, 2006. Global Health: The Global Fund to Fight AIDS, TB and Malaria Is Responding to Challenges but Needs Better Information and Documentation for Performance-Based Funding. GAO-05-639. Washington, D.C.: June 10, 2005. Global HIV/AIDS Epidemic: Selection of Antiretroviral Medications Provided under U.S. Emergency Plan Is Limited. GAO-05-133. Washington, D.C.: January 11, 2005. Global Health: U.S. AIDS Coordinator Addressing Some Key Challenges to Expanding Treatment, but Others Remain. GAO-04-784. Washington, D.C.: June 12, 2004. Global Health: Global Fund to Fight AIDS, TB and Malaria Has Advanced in Key Areas, but Difficult Challenges Remain. GAO-03-601. Washington, D.C.: May 7, 2003.
PEPFAR, first authorized in 2003, has supported significant advances in HIV/AIDS prevention, treatment, and care in more than 30 countries. In reauthorizing the program in 2008, Congress directed OGAC to continue to expand the number of people receiving care and treatment through PEPFAR while also making it a major policy goal to help partner countries develop independent, sustainable HIV programs. As a result, PEPFAR began shifting efforts from directly providing treatment services toward support for treatment programs managed by partner countries. GAO was asked to review PEPFAR treatment programs. GAO examined (1) PEPFAR treatment program results and how OGAC measures them and (2) PEPFAR assistance to improve partner countries' M&E systems. GAO reviewed PEPFAR plans, performance reports, and guidance and interviewed officials from OGAC, the Centers for Disease Control and Prevention (CDC), and the U.S. Agency for International Development (USAID). GAO also synthesized findings of treatment program studies and conducted fieldwork in three countries. The Department of State's (State) Office of the U.S. Global AIDS Coordinator (OGAC) has reported on President's Emergency Plan for AIDS Relief (PEPFAR) treatment program results primarily in terms of (1) numbers of people on treatment directly supported by PEPFAR, (2) percentages of eligible people receiving treatment, and (3) percentages of people alive and on treatment 12 months after starting treatment. However, these indicators do not reflect some key PEPFAR results. First, although the number of people on treatment directly supported by PEPFAR grew from about 1.7 million to 5.1 million in fiscal years 2008 through 2012, this indicator alone does not provide complete information needed for assessing PEPFAR's contributions to partner countries' treatment programs. 
Second, although 10 PEPFAR country teams reported that percentages of people alive and on treatment after 12 months exceeded 80 percent, data for this indicator are not always complete and have other limitations. To improve these data, according to OGAC officials, OGAC clarified its guidance and conducted data quality assessments. However, OGAC has not yet established a common set of indicators to monitor the results of PEPFAR's efforts to improve the quality of treatment programs. As PEPFAR partner countries assume greater responsibility for managing their treatment programs, fully functioning monitoring and evaluation (M&E) systems are critical for tracking results and ensuring treatment program effectiveness. PEPFAR country teams assist partner countries in carrying out their M&E responsibilities by providing staff, training, technical assistance, and other support. With this assistance, partner countries have made some progress in expanding and upgrading these M&E systems. Nevertheless, partner countries' M&E systems often are unable to produce complete and timely data, thus limiting the usefulness of these data for patient, clinic, or program management. OGAC has not yet established minimum standards for partner countries' M&E systems, particularly relating to data completeness and timeliness, in order for PEPFAR country teams to assess those systems' readiness for use in treatment program management and results reporting. The Secretary of State should direct OGAC to (1) develop a method that better accounts for PEPFAR's contributions to partner-country treatment programs, (2) establish a common set of indicators to measure the results of treatment program quality improvement efforts, and (3) establish a set of minimum standards for data generated by partner countries' M&E systems. Commenting jointly with CDC and USAID, State generally agreed with the report's recommendations.
The Border Patrol developed its 2004 Strategy following the terrorist attacks of September 11, 2001, as a framework for the agency’s new priority mission of preventing terrorists and terrorist weapons from entering the United States and to support its traditional mission of preventing aliens, smugglers, narcotics, and other contraband from crossing U.S. borders illegally. The 2004 Strategy was designed to facilitate the buildup and deployment of agency and border resources and to consolidate the agency into a more centralized organization. Border Patrol headquarters officials stated that the 2012-2016 Strategic Plan will rely on Border Patrol and federal, state, local, tribal, and international partners working together to use a risk-based approach to secure the border that uses the key elements of “Information, Integration, and Rapid Response” to achieve Border Patrol strategic objectives. Our past reviews of border security programs contained information on the progress and challenges related to implementing these key elements. Our observations are as follows. Obtaining Information Necessary for Border Security. Critical to implementation of the 2004 Strategy was the use of intelligence to assess risk, target enforcement efforts, and drive operations, according to the strategy. As part of their intelligence efforts, CBP and Border Patrol worked to develop and deploy the next generation of border surveillance and sensor platforms to maximize the Border Patrol’s ability to detect, respond to, and interdict cross-border illegal activity. Border Patrol headquarters officials reported that the new 2012-2016 Strategic Plan also has a focus on information that provides situational awareness and intelligence, developed by blending technology, reconnaissance, and sign-cutting and tracking, to understand the threats faced along the nation’s borders.
Our prior work reviewing CBP’s efforts to deploy capabilities to, among other things, provide situational awareness along U.S. borders provides insights that could inform Border Patrol considerations in implementing its new strategic plan. As of the end of fiscal year 2010, Border Patrol reported having substantial detection resources in place across 45 percent of the nation’s border miles. The remaining 55 percent of border miles—primarily on the northern and coastal borders—were considered vulnerable due to limited resource availability or inaccessibility, with some knowledge available to develop a rudimentary border control strategy. Our review of Border Patrol 2012 operational assessments also showed concerns about resource availability to provide the information necessary to secure the border. Across Border Patrol’s 20 sectors located on the northern, southwest, and southeast coastal borders, all sectors reported a need for new or replacement technology used to detect and track illegal activity, and the majority (19) reported a need for additional agents to maintain or attain an acceptable level of border security. Additionally, 12 sectors reported a need for additional infrastructure. DHS, CBP, and Border Patrol are continuing to focus attention on development, acquisition, and deployment of technology and infrastructure needed to provide the information necessary to secure the borders, with priority for the southwest border. Our past work highlighted the continuing challenges the agency faced implementing technology and infrastructure at the U.S. land borders. Technology. We previously reported that in January 2011, after 5 years and a cost of nearly $1 billion, DHS ended the Secure Border Initiative Network (SBInet), a multi-year, multi-billion-dollar technology effort aimed at securing U.S. borders, because it did not meet cost-effectiveness and viability standards.
DHS developed a successor plan to secure the border—the Alternative (Southwest) Border Technology plan—where CBP is to focus on developing terrain- and population-based solutions utilizing existing, proven technology, such as camera-based surveillance systems, for each border region, beginning with high-risk areas in Arizona. In November 2011, we reported that CBP’s planned technology deployment plan for the Arizona border, the Arizona Border Surveillance Technology Plan, was expected to cost approximately $1.5 billion over 10 years. However, we also reported that CBP did not have the information needed to fully support and implement the technology deployment plan in accordance with DHS and Office of Management and Budget guidance. We recommended, among other things, that DHS determine the mission benefits to be derived from implementation of the plan and develop and apply key attributes for metrics to assess program implementation. DHS concurred with our recommendation and reported that it planned to develop a set of measures to assess the effectiveness and benefits of future technology investments. Infrastructure. In May 2010, we testified that CBP had not accounted for the effect of its investment in border fencing and infrastructure on border security. Border fencing was designed to impede people on foot and vehicles from crossing the border and to enhance Border Patrol’s ability to detect and interdict violators. CBP estimated that border fencing and other infrastructure had a life-cycle cost of about $6.5 billion for deployment, operations, and maintenance. CBP reported a resulting increase in control of southwest border miles, but could not account separately for the effect of the border fencing and other infrastructure.
In a September 2009 report, we recommended that CBP conduct an analysis of the effect of tactical infrastructure on border security. CBP concurred and reported that it had contracted with the Homeland Security Institute (HSI)—a federally funded research and development center—to analyze the effect of tactical infrastructure on the security of the border. As of May 2012, CBP had not provided an update on this effort. Integrating Border Security Operations with Federal, State, Local, Tribal, and International Partners. Leveraging the law enforcement resources of federal, state, local, tribal, and international partners was a key element of Border Patrol’s 2004 Strategy and of Border Patrol’s implementation of the strategy, both on the northern and coastal borders, where Border Patrol had fewer resources relative to the size of the geographic area, and on the southwest border, where Border Patrol used the assistance of law enforcement partners to conduct surge operations in high-priority areas. Border Patrol headquarters officials stated that integration of border security operations will be a key element of the 2012-2016 Strategic Plan across all borders. Our prior work reviewing coordination among various stakeholders with responsibilities for helping to secure the border provides insights for consideration as Border Patrol transitions to its new strategic plan. We previously reviewed Border Patrol efforts to coordinate law enforcement resources across partners on the northern border and on federal border lands. On the northern border, we reported in December 2010 that federal, state, local, tribal, and Canadian partners operating in four Border Patrol sectors we visited stated that efforts to establish interagency forums were beneficial in establishing a common understanding of border security status and threats, and that joint operations helped to achieve an integrated and effective law enforcement response.
However, numerous partners cited challenges related to the inability to resource the increasing number of interagency forums and raised concerns that some efforts may be overlapping. We found that DHS did not oversee the interagency forums established by its components. Further, we also reported that while Border Patrol and other federal partners stated that federal agency coordination to secure the northern border had improved, partners in all four sectors we visited cited long-standing and ongoing challenges in sharing information and resources for daily border security operations and investigations. Challenges were attributed to continued disagreement on roles and responsibilities and competition for performance statistics used to inform resource allocation decisions. DHS established and updated interagency agreements designed to clarify roles and responsibilities for agencies with overlapping missions or geographic areas of responsibility, but oversight by management at the component and local levels had not ensured consistent compliance with provisions of these agreements. We previously reported that governmentwide efforts to strengthen interagency collaboration have been hindered by the lack of agreement on roles and responsibilities and agency performance management systems that do not recognize or reward interagency collaboration. Thus, we recommended, among other things, that DHS provide guidance and oversight for interagency forums established or sponsored by its components and provide regular oversight of component compliance with the provisions of interagency memorandums of understanding. DHS concurred with our recommendation and stated that the structure of the department precluded DHS-level oversight, but that it would review the inventory of interagency forums through its strategic and operational planning efforts to assess efficiency.
DHS officials stated that in January 2012 the department established an intercomponent Advisory Council to address our recommendation that DHS provide oversight of compliance with interagency agreements. We also reported in December 2010 that while there is a high reliance on law enforcement support from partners on the northern border, the extent of law enforcement resources available to address border security vulnerabilities was not reflected in Border Patrol’s processes for assessing border security and resource requirements. We previously reported that federal agencies should identify resources among collaborating agencies to deliver results more efficiently and that DHS had not fully responded to a legislative requirement to link initiatives—including partnerships—to existing border vulnerabilities to inform federal resource allocation decisions. Development of policy and guidance to integrate available partner resources in northern border security assessments and resource planning documents could provide the agency and Congress with more complete information necessary to make resource allocation decisions in mitigating existing border vulnerabilities. Thus, we recommended that DHS direct CBP to develop policy and guidance necessary to identify, assess, and integrate the available partner resources in northern border sector security assessments and resource planning documents. DHS concurred with our recommendation and has taken action to formulate new policy and guidance in associated strategic planning efforts. In our November 2010 report on interagency coordination on northern federal borderlands in Border Patrol’s Spokane sector and southwest federal borderlands in Border Patrol’s Tucson sector, we reported, among other things, that Border Patrol, DOI, and USDA had established forums and liaisons to exchange information.
However, while information sharing and communication among these agencies had increased in recent years, critical gaps remained in implementing interagency agreements to share intelligence information and compatible secure radio communications for daily border security operations. We reported that coordination in these areas could better ensure officer safety and an efficient law enforcement response to illegal activity. In addition, there was little interagency coordination to share intelligence assessments of border security threats to federal lands and develop budget requests, strategies, and joint operations to address these threats. We reported that interagency efforts to implement provisions of existing agreements in these areas could better leverage law enforcement partner resources and knowledge for more effective border security operations on federal lands. Thus, we recommended that DHS, DOI, and USDA take the necessary action to further implement interagency agreements. The departments concurred with our recommendation. In response, Border Patrol issued a memorandum to all Border Patrol sectors emphasizing the importance of USDA and DOI partnerships to address border security threats on federal lands. While this action is a positive step toward implementing our recommendation, we continue to believe that DHS should take additional steps necessary to monitor and uphold implementation of the existing interagency agreements, including provisions to share intelligence and resource requirements for enhancing border security on federal lands. Mobilizing a Rapid Response to Border Security Threats. One of the elements of Border Patrol’s 2004 National Strategy was to improve the mobility and rapid deployment of personnel and resources to quickly counter and interdict threats based on shifts in smuggling routes and tactical intelligence. 
CBP reported expanding the training and response capabilities of the Border Patrol’s specialized response teams to support domestic and international intelligence-driven and antiterrorism efforts as well as other special operations. Border Patrol headquarters officials stated that “Rapid Response,” defined as the ability of Border Patrol and its partners to quickly and appropriately respond to changing threats, will also be a key element of the 2012-2016 Strategic Plan; and in fiscal year 2011, Border Patrol allocated agent positions to provide a national group of organized, trained, and equipped Border Patrol agents who are capable of rapid movement to regional and national incidents in support of priority CBP missions. Our prior work and review of Border Patrol’s 2012 operational assessments provide observations that could inform Border Patrol’s transition to and implementation of its new strategic plan. Our review of Border Patrol 2012 operational assessments showed that Border Patrol sectors had used resources mobilized from other Border Patrol sectors or provided by law enforcement partners to maintain or increase border security. Border Patrol, for example, mobilized personnel and air assets from Yuma sector to neighboring Tucson sector, which cited that the coordination of operational activities was critical to the overall success of operations. Similarly, National Guard personnel and resources have been used to bridge or augment Border Patrol staffing until new agents are trained and deployed. The Department of Defense (DOD) estimated costs of about $1.35 billion for National Guard support of DHS’s border security mission in the four southwest border states (California, Arizona, New Mexico, and Texas) from June 2006 through September 30, 2011. 
However, Border Patrol headquarters officials stated that they had not fully assessed the extent to which the augmented mobile response resources would be sufficient to preclude the need to redeploy personnel and resources to secure higher-priority border locations at the expense of lower-priority locations, or whether the type of resources needed from its law enforcement partners, or the continued need for those resources, had changed. Within Border Patrol, for example, our review of the 2012 operational assessments showed that Border Patrol reported difficulty maintaining border control in areas from which resources had been redeployed. Border Patrol stations within six of the nine southwest border sectors have reported that agent deployments to other stations have affected their own deployment and enforcement activities. The DHS goal and measure of operational control used in conjunction with the 2004 Strategy provided oversight of five levels of border control that were based on the increasing availability of information and resources, which Border Patrol used to detect, respond to, and interdict illegal cross-border activity either at the border or after entry into the United States (see table 1). The top two levels—”controlled” and “managed”—reflect Border Patrol’s reported achievement of “operational control,” in that resources were in place and sufficient to detect, respond to, and interdict illegal activity either at the immediate border (controlled level) or after the illegal entry occurs (managed level), sometimes up to 100 miles away. The remaining three levels reflected lower levels of border control, where Border Patrol has less ability to detect, respond to, or interdict illegal activity due to insufficient resources or inaccessibility. DHS reported achieving operational control for 1,107 (13 percent) of 8,607 miles across U.S. northern, southwest, and coastal borders at the time it discontinued use of this performance goal at the end of fiscal year 2010 (see fig. 1).
Nearly 80 percent of border miles Border Patrol reported to be under operational control were on the U.S. southwest border with Mexico. Border Patrol sector officials assessed the miles under operational control using factors such as operational statistics, third-party indicators, intelligence and operational reports, resource deployments, and discussions with senior Border Patrol agents. Our analysis of the 1,107 border miles Border Patrol reported to be under operational control showed that about 12 percent were classified as “controlled,” which was the highest sustainable level for both detection and interdiction at the immediate border. The remaining 88 percent of these 1,107 border miles were classified as “managed,” in that interdictions may be achieved after illegal entry by multi-tiered enforcement operations. Across the 20 Border Patrol sectors on the national borders, Yuma sector on the southwest border reported achieving operational control for all of its border miles as of the end of fiscal year 2010. In contrast, the other 19 sectors reported achieving operational control ranging from 0 to 86 percent of their border miles (see fig. 2). Border Patrol officials attributed the uneven progress across sectors to multiple factors, including a need to prioritize resource deployment to sectors deemed to have greater risk of illegal activity as well as terrain and transportation infrastructure on both sides of the border. Our analysis of the remaining 7,500 national border miles that Border Patrol reported as not under operational control at the end of fiscal year 2010 showed that nearly two-thirds of these border miles were considered at the level of “low-level monitored,” meaning that some knowledge was available to develop a rudimentary border control strategy, but border security was vulnerable due to limited resources or inaccessibility (see fig. 3).
The remaining approximately one-third of these border miles, at the higher “monitored” level, were judged to have substantial detection resources in place, but accessibility and resource constraints continued to affect Border Patrol’s ability to respond. Border Patrol reported that these two levels of control were not acceptable for border security. No border miles were classified at the lowest level, “remote/low activity,” in which insufficient information exists to develop a meaningful border control strategy. DHS transitioned away from using operational control as its goal and outcome measure for border security in its Fiscal Year 2010-2012 Annual Performance Report; since September 30, 2010, this change has reduced the information provided to Congress and the public on program results. Citing a need to establish a new border security goal and measure that reflect a more quantitative methodology as well as the department’s evolving vision for border control, DHS established an interim performance measure until a new border control goal and measure could be developed. As we previously testified, this interim GPRA measure—the number of apprehensions on the southwest border between the ports of entry (POE)—is an output measure, which, while providing useful information on activity levels, does not inform on program results and therefore could reduce oversight and DHS accountability. Studies commissioned by CBP have documented that the number of apprehensions bears little relationship to effectiveness because agency officials do not compare these numbers to the amount of illegal activity that crosses the border. CBP officials told us they would continue to use interim measures for GPRA reporting purposes until new outcome measures are implemented; as of April 2012, CBP officials did not have an estimated implementation date for a new border security goal and measure.
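The mile counts cited above imply straightforward arithmetic, sketched here for illustration using the figures reported in this statement (the rounding conventions are our assumptions, not DHS's):

```python
# Illustrative arithmetic for the operational-control figures cited above.
# Mile counts are those DHS reported for the end of fiscal year 2010;
# rounding to whole miles/percents is an assumption for illustration.
total_border_miles = 8607
operational_control_miles = 1107

share_under_control = operational_control_miles / total_border_miles
print(f"{share_under_control:.0%}")  # 13%, matching the reported 13 percent

# Split of the 1,107 operational-control miles between the top two levels:
controlled_share = 0.12  # "controlled": detection and interdiction at the immediate border
managed_share = 0.88     # "managed": interdiction after entry via multi-tiered operations
controlled_miles = round(operational_control_miles * controlled_share)
managed_miles = round(operational_control_miles * managed_share)
print(controlled_miles, managed_miles)  # 133 974
```

The same arithmetic confirms the roughly 7,500 miles reported as not under operational control (8,607 minus 1,107).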
DHS stated that it had three efforts underway to improve the measures used to assess its programs and activities to secure the border. However, as these measures have not yet been implemented, it is too early to assess them and determine how they will be used to provide oversight of border security efforts. One of two efforts, led by CBP with assistance from the Homeland Security Institute (HSI), is to develop a Border Condition Index (BCI) that is intended to be a new outcome-based measure that will be used to publicly report progress in meeting a new border security goal in support of GPRA. The BCI methodology would consider various factors, such as the percentage of illegal entries apprehended and community well-being. CBP is in the process of finalizing the BCI measure and did not provide us with a time frame for its implementation. The second CBP effort is to create a measure of the change in illegal flow of persons across the southwest border using a statistical model developed by HSI, which uses data on apprehensions and recidivism rates for persons illegally crossing the border. DHS officials said that they had not yet determined whether results from this model would be used for GPRA reporting in the Fiscal Year 2012 DHS Annual Performance Plan, or for internal management purposes and reported to Congress in support of the annual budget request. The third effort, led by Border Patrol, is to standardize and strengthen the metrics that had formerly supported the measure of “border miles under effective (operational) control” that DHS removed as a GPRA goal and measure beginning in fiscal year 2011. As of April 2012, Border Patrol headquarters officials were working to develop border security goals and measures, but did not yet have a target time frame for implementation. 
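The apprehension-and-recidivism approach can be illustrated with a simple repeat-trials model. This is a sketch of the general idea only, not HSI's actual model, whose specification is not described in this statement; the function, parameter values, and the geometric-retry assumption are all ours:

```python
# Sketch of a repeat-trials flow model (NOT HSI's actual model).
# Assumption: each crosser retries until a successful entry, and the
# per-attempt apprehension probability p can be inferred from repeat-
# apprehension (recidivism) data.
def estimated_successful_entries(apprehensions: int, p_apprehend: float) -> float:
    """If each attempt is apprehended with probability p, a crosser who keeps
    trying is caught an expected p/(1-p) times before succeeding, so A total
    apprehensions imply roughly A * (1-p)/p successful entrants."""
    return apprehensions * (1 - p_apprehend) / p_apprehend

# Hypothetical inputs: 300,000 apprehensions in a period, with a per-attempt
# apprehension probability of 0.6 inferred from recidivism rates.
flow = estimated_successful_entries(300_000, 0.6)
print(round(flow))  # 200000
```

Tracking such a model's output over time, rather than raw apprehension counts, is what would let the measure speak to changes in illegal flow rather than enforcement activity alone.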
While these new metrics are in development, Border Patrol operational assessments from fiscal years 2010 and 2012 show that field agents continued to use a different and evolving mix of performance indicators across Border Patrol sectors to inform the status of border security. These performance indicators generally included a mix of enforcement measures related to changes in the number of estimated known illegal entries and apprehensions, as well as changes in third-party indicators such as crime rates in border communities. Border Patrol officials said that the differences in the mix of performance indicators across sectors and time reflected differences in sector officials’ judgment of what indicators best reflect border security, given each sector’s unique circumstance. Border Patrol headquarters officials said that they were moving to standardize the indicators used by sectors on each border but did not yet have a time frame for completing this effort. Chairwoman Miller and Ranking Member Cuellar, this completes my prepared statement. I would be happy to respond to any questions you or the members of the subcommittee may have. For questions about this statement, please contact Rebecca Gambler at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement included David Alexander, Cindy Ayers, Charles Bausell, Jr., Frances Cook, Michele Fejfar, Barbara Guffy, Brian Lipman, Jessica Orr, and Susan Sachs. We have previously reported on desirable characteristics of effective security strategies through our prior work on national security planning. These six characteristics and their elements could assist Border Patrol in its efforts to ensure that the 2012-2016 Border Patrol Strategic Plan (2012-2016 Strategic Plan) is an effective mechanism for achieving results. Purpose, scope, and methodology.
This characteristic addresses why the strategy was produced, the scope of its coverage, and the process by which it was developed. Border Patrol could discuss the specific impetus that led to the new strategic plan, for example, a terrorist event or changes in the external environment such as decreases in illegal activity or changes in organizational makeup such as significant increases in resources and capabilities. In addition to describing what the strategy is meant to do and the major functions, mission areas, or activities it covers, a national strategy would address its methodology, such as which organizations drafted or provided input to the document. For example, Border Patrol could identify parties or stakeholders who were consulted in the development of the strategy, such as federal law enforcement partners, relevant state and local agencies, and tribal organizations. Problem definition and risk assessment. This characteristic addresses the particular national problems and threats the strategy is directed towards. Border Patrol could develop a detailed discussion of primary threats—such as the illegal flow of migrants, smugglers, and other criminals or persons linked with terrorism across the border—as well as their causes and operating environment. This characteristic also entails a risk assessment, including an analysis of the threat to, and vulnerabilities of, critical assets and operations. Border Patrol could ensure that the strategic plan is informed by a national risk assessment that includes a comprehensive examination of threats and vulnerabilities across all U.S. borders, to include key infrastructures and assets. A discussion of the quality of data available for this assessment, such as known constraints or deficiencies in key data on estimated volume of persons illegally crossing the border, could also be pertinent. Goals, subordinate objectives, activities, and performance measures. 
This characteristic addresses what the strategy is trying to achieve, steps to achieve those results, and priorities, milestones, and performance measures to gauge results. For example, Border Patrol could identify what the strategic plan is attempting to achieve—a specific end state such as securing the nation’s borders—and identify and prioritize the specific steps and activities needed to achieve that end state, such as prioritizing the resourcing of sectors and stations in high-risk border areas. Identifying milestones and performance measures for achieving results according to specific time frames could help to ensure effective oversight and accountability. Border Patrol could, for example, identify milestones for developing an implementation plan, with time frames, which would guide the execution of the strategy and ensure that key steps such as completing a comprehensive risk assessment or developing appropriate outcome measures are achieved. This characteristic also emphasizes the importance of establishing outcome-related performance measures that link back to goals and objectives. For example, Border Patrol could develop outcome measures that show to what extent it has met its goal for securing the nation’s borders. Resources, investments, and risk management. This characteristic addresses what the strategy will cost, the sources and types of resources and investments needed, and where resources and investments should be targeted based on balancing risk reductions with costs. A national strategy could include criteria and appropriate mechanisms to allocate resources based on identified needs. Border Patrol could develop information on the costs of fully implementing the strategic plan, as well as a comprehensive baseline of resources and investments needed by sectors and stations to achieve the mission of securing the nation’s borders. 
According to our previous work, risk management focuses security efforts on those activities that bring about the greatest reduction in risk given the resources used. The strategic plan could elaborate on the risk assessment mentioned previously and provide guidance on how to manage resources and investments. Organizational roles, responsibilities, and coordination. This characteristic addresses who will be implementing the strategy, what their roles will be compared to others, and mechanisms for them to coordinate their efforts. A strategy could clarify organizations’ relationships in terms of partnering and might also identify specific processes for coordination between entities. For example, Border Patrol could build upon relations with federal, state, local, and tribal law enforcement organizations by further clarifying how these relationships can be organized to further leverage resources. Integration and implementation. This characteristic addresses how a national strategy relates to other strategies’ goals, objectives, and activities, and to subordinate levels of government and their plans to implement the strategy. For example, a national strategy could discuss how its scope complements, expands upon, or overlaps with other national strategies. Border Patrol could ensure that its 2012-2016 Strategic Plan explains how it complements the strategies of other CBP offices, such as the Office of Air and Marine and the Office of Field Operations, which oversees the nation’s ports of entry, as well as U.S. Customs and Border Protection’s overall strategy. Under the Government Performance and Results Act (GPRA), Border Patrol performance measures should be developed in the context of the Department of Homeland Security (DHS) mission and objectives for securing the U.S. border. In its Annual Performance Report for fiscal years 2010-2012, DHS discussed border security under Mission 2: Securing and Managing Our Borders.
Under this mission, there were interim Border Patrol performance measures supporting Goal 2.1: Secure U.S. Air, Land, and Sea Borders, defined as preventing the illegal flow of people and goods across U.S. air, land, and sea borders. There were two objectives supporting this goal:

Objective 2.1.1: Prevent illegal entry of people, weapons, dangerous goods and contraband, and protect against cross-border threats to health, the environment, and agriculture, while facilitating the safe flow of lawful travel and commerce.

Objective 2.1.2: Prevent illegal export and exit of weapons, proceeds of crime, and other dangerous goods, and the exit of malicious actors.

We have previously reported on key attributes of successful performance measures consistent with GPRA. Some of these attributes suggest that U.S. Customs and Border Protection (CBP) and Border Patrol consider the following in efforts to develop and standardize performance indicators and metrics:

Measures should cover the core program activities that Border Patrol is expected to perform. At the broadest level, the DHS goal suggests measuring Border Patrol outcomes for preventing the illegal flow of people across the border between the ports of entry, as well as the illegal flow of goods. Border Patrol metrics comparing estimated illegal entries to apprehensions could serve to show how its efforts contribute to stemming the illegal flow of people across the border. As of April 2012, Border Patrol did not have a metric for performance related to stemming the illegal flow of goods, such as drugs, between the ports of entry in support of the border security goal. Border Patrol headquarters officials stated that they were not likely to develop a measure, per se, on contraband seizures that would apply across all sectors.
According to these officials, although the Border Patrol plays a vital role in seizing contraband at the borders, it views this role as part of the larger security function played by many different agencies at all government levels.

Measures should be balanced to cover CBP and DHS priorities. Border Patrol could establish specific performance measures that support CBP and DHS priorities, such as those listed in the objectives supporting the overall DHS goal. For example, in measuring the ability to prevent the illegal flow of persons, Border Patrol, in consultation with CBP and DHS, could choose to separately measure the illegal flow of migrants, smugglers, and other criminals, or persons linked with terrorism, crossing the border between the ports of entry. Similarly, in measuring the ability to prevent the flow of dangerous goods, Border Patrol could choose to separately measure the flow of weapons, illegal drugs, or proceeds of crime, such as bulk cash. Border Patrol could also establish separate performance measures for its ability to prevent the entry and exit of persons and goods across the border.

Measures should link and align with measures of other components and at successive levels of the organization. DHS could ensure that performance measures established by Border Patrol align with measures at the CBP and departmental level, as well as those established by other components that contribute toward the goal to secure our borders, such as Customs and Border Protection’s Office of Field Operations (OFO), which has responsibility for securing the border at the ports of entry. For example, Border Patrol metrics estimating the flow of illegal entries between the ports of entry align with OFO metrics measuring the illegal flow of persons through the ports of entry, and metrics of both components could be aligned with an overall effort by CBP to measure the overall flow of persons illegally crossing the southwest border.
DHS could also choose to establish a performance measure informing on the flow of persons into the United States who overstay their authorized period of admission or other means that could similarly link to the overall DHS estimate of persons illegally residing in the United States. Linking performance measures such as these across the organization informs on how well each program or activity is contributing toward the overall goal to prevent illegal entry of persons, reinforces accountability, and ensures that day-to-day activities contribute to the results the organization is trying to achieve.

Measures should reflect governmentwide priorities, such as quality, timeliness, and cost of service. Border Patrol could establish performance measures that are consistent with any measures developed by CBP and DHS to reflect the time frames and cost efficiencies in securing the border across locations. For example, CBP and DHS could establish measures that reflect the overall cost or time frame to secure the border as indicated by changes in the illegal flow of persons or goods relative to its investment across components and programs. At the Border Patrol level, such a measure could compare the relative cost efficiencies achieved across border locations that use a different mix of personnel, technology, or strategies to secure the border.

Measures should have a numerical goal, be reasonably free from significant bias or manipulation, and be reliable in producing the same result under similar conditions. As of April 2012, Border Patrol was working to improve the quality of its border security measures to reflect a more quantitative methodology to estimate the number of illegal entries across the border compared to apprehensions, and other metrics. However, Border Patrol officials said that comparable performance measures should not be applied to the northern or coastal borders, providing an inconsistent picture of security for the majority of U.S. border miles.
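The kind of measure discussed above, comparing apprehensions with estimated illegal entries, reduces to a simple ratio. The sketch below is purely illustrative; the function name and the figures are hypothetical assumptions, not Border Patrol data or methodology:

```python
def interdiction_rate(apprehensions: int, estimated_entries: int) -> float:
    """Fraction of estimated illegal entries that resulted in apprehension.

    Hypothetical illustration only; a real measure would rest on a
    validated estimate of total entries, which most border locations
    lacked the detection capability to produce as of April 2012.
    """
    if estimated_entries <= 0:
        raise ValueError("estimated_entries must be positive")
    return apprehensions / estimated_entries


# Hypothetical sector figures for a single fiscal year.
print(f"{interdiction_rate(8_500, 10_000):.0%}")  # prints "85%"
```

Such a ratio only informs on outcomes where the denominator can be estimated, which is why intermediate measures, discussed next, matter for locations without that detection capability.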
We reported that in circumstances where complete information is not available to measure performance outcomes, agencies could use intermediate goals and measures to show progress or contribution to intended results. For example, Border Patrol could lack the detection capability necessary as a first step to estimate illegal entries across most of the northern border and some other border locations. In these circumstances, Border Patrol could choose to establish performance measures tracking progress in establishing this detection capability. Once Border Patrol achieves the ability to detect illegal activity across its borders, it could then transition to measures for reducing the flow of illegal activity and for interdiction. On the southwest border, Border Patrol could also choose to establish intermediate measures in reaching southwest border security goals. Such intermediate performance measures could include those that use Global Positioning System data for each apprehension to show Border Patrol progress in apprehending persons at or close to the border compared to enforcement tiers located miles away.

Border Security: Observations on Costs, Benefits, and Challenges of a Department of Defense Role in Helping to Secure the Southwest Land Border. GAO-12-657T. Washington, D.C.: April 17, 2012.
Border Security: Opportunities Exist to Ensure More Effective Use of DHS’s Air and Marine Assets. GAO-12-518. Washington, D.C.: March 30, 2012.
Homeland Security: U.S. Customs and Border Protection’s Border Security Fencing, Infrastructure and Technology Fiscal Year 2011 Expenditure Plan. GAO-12-106R. Washington, D.C.: November 17, 2011.
Arizona Border Surveillance Technology: More Information on Plans and Costs Is Needed before Proceeding. GAO-12-22. Washington, D.C.: November 4, 2011.
Border Security: Observations on the Costs and Benefits of an Increased Department of Defense Role in Helping to Secure the Southwest Land Border. GAO-11-856R. Washington, D.C.: September 12, 2011.
Border Security: Preliminary Observations on the Status of Key Southwest Border Technology Programs. GAO-11-448T. Washington, D.C.: March 15, 2011.
Moving Illegal Proceeds: Opportunities Exist for Strengthening the Federal Government’s Efforts to Stem Cross-Border Currency Smuggling. GAO-11-407T. Washington, D.C.: March 9, 2011.
Border Security: Preliminary Observations on Border Control Measures for the Southwest Border. GAO-11-374T. Washington, D.C.: February 15, 2011.
Border Security: Enhanced DHS Oversight and Assessment of Interagency Coordination Is Needed for the Northern Border. GAO-11-97. Washington, D.C.: December 17, 2010.
Border Security: Additional Actions Needed to Better Ensure a Coordinated Federal Response to Illegal Activity on Federal Lands. GAO-11-177. Washington, D.C.: November 18, 2010.
Moving Illegal Proceeds: Challenges Exist in the Federal Government’s Effort to Stem Cross-Border Currency Smuggling. GAO-11-73. Washington, D.C.: October 25, 2010.
Secure Border Initiative: DHS Needs to Strengthen Management and Oversight of Its Prime Contractor. GAO-11-6. Washington, D.C.: October 18, 2010.
Homeland Security: US-VISIT Pilot Evaluations Offer Limited Understanding of Air Exit Options. GAO-10-860. Washington, D.C.: August 10, 2010.
U.S. Customs and Border Protection: Border Security Fencing, Infrastructure and Technology Fiscal Year 2010 Expenditure Plan. GAO-10-877R. Washington, D.C.: July 30, 2010.
Alien Smuggling: DHS Could Better Address Alien Smuggling along the Southwest Border by Leveraging Investigative Resources and Measuring Program Performance. GAO-10-919T. Washington, D.C.: July 22, 2010.
National Security: Key Challenges and Solutions to Strengthen Interagency Collaboration. GAO-10-822T. Washington, D.C.: June 9, 2010.
Border Security: Improvements in the Department of State’s Development Process Could Increase the Security of Passport Cards and Border Crossing Cards. GAO-10-589. Washington, D.C.: June 1, 2010.
Alien Smuggling: DHS Needs to Better Leverage Investigative Resources and Measure Program Performance along the Southwest Border. GAO-10-328. Washington, D.C.: May 24, 2010.
Secure Border Initiative: DHS Needs to Reconsider Its Proposed Investment in Key Technology Program. GAO-10-340. Washington, D.C.: May 5, 2010.
Secure Border Initiative: DHS Has Faced Challenges Deploying Technology and Fencing Along the Southwest Border. GAO-10-651T. Washington, D.C.: May 4, 2010.
Information Sharing: Federal Agencies Are Sharing Border and Terrorism Information with Local and Tribal Law Enforcement, but Additional Efforts are Needed. GAO-10-41. Washington, D.C.: December 18, 2009.
Homeland Security: Key US-VISIT Components at Varying Stages of Completion, but Integrated and Reliable Schedule Needed. GAO-10-13. Washington, D.C.: November 19, 2009.
Interagency Collaboration: Key Issues for Congressional Oversight of National Security Strategies, Organizations, Workforce, and Information Sharing. GAO-09-904SP. Washington, D.C.: September 25, 2009.
Secure Border Initiative: Technology Deployment Delays Persist and the Impact of Border Fencing Has Not Been Assessed. GAO-09-896. Washington, D.C.: September 9, 2009.
Border Patrol: Checkpoints Contribute to Border Patrol’s Mission, but More Consistent Data Collection and Performance Measurement Could Improve Effectiveness. GAO-09-824. Washington, D.C.: August 31, 2009.
Firearms Trafficking: U.S. Efforts to Combat Arms Trafficking to Mexico Face Planning and Coordination Challenges. GAO-09-709. Washington, D.C.: June 18, 2009.
Northern Border Security: DHS’s Report Could Better Inform Congress by Identifying Actions, Resources, and Time Frames Needed to Address Vulnerabilities. GAO-09-93. Washington, D.C.: November 25, 2008.
Secure Border Initiative: DHS Needs to Address Significant Risks in Delivering Key Technology Investments. GAO-08-1086. Washington, D.C.: September 22, 2008.
Secure Border Initiative: Observations on Deployment Challenges. GAO-08-1141T. Washington, D.C.: September 10, 2008.
Secure Border Initiative: Observations on the Importance of Applying Lessons Learned to Future Projects. GAO-08-508T. Washington, D.C.: February 27, 2008.
Border Security: Despite Progress, Weaknesses in Traveler Inspections Exist at Our Nation’s Ports of Entry. GAO-08-329T. Washington, D.C.: January 3, 2008.
Border Security: Despite Progress, Weaknesses in Traveler Inspections Exist at Our Nation’s Ports of Entry. GAO-08-219. Washington, D.C.: November 5, 2007.
Secure Border Initiative: Observations on Selected Aspects of SBInet Program Implementation. GAO-08-131T. Washington, D.C.: October 24, 2007.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Border Patrol, within DHS’s CBP, is the federal agency with primary responsibility for securing the national borders between the U.S. ports of entry (POE). DHS has completed a new 2012-2016 Border Patrol Strategic Plan (2012-2016 Strategic Plan) that Border Patrol officials stated will emphasize risk management instead of increased resources to achieve border security and continue to build on the foundation of the 2004 National Border Patrol Strategy (2004 Strategy). This statement highlights key issues from prior GAO reports that discuss Border Patrol’s progress and challenges in (1) implementing key elements of the 2004 Strategy and (2) achieving the 2004 strategic goal to gain operational control of the border. This statement is based on GAO reports issued since 2007 on border security, with selected updates from April and May 2012 on Border Patrol resource needs, actions taken to address prior GAO recommendations, and efforts to develop performance measures. To conduct these updates, GAO reviewed agency documents such as operational assessments and interviewed DHS officials. GAO’s prior work has highlighted progress and challenges in various areas related to Border Patrol’s implementation of its 2004 National Strategy, which could provide insights as Border Patrol transitions to its 2012 Strategic Plan. Border Patrol officials stated that the 2012 Strategic Plan will rely on Border Patrol and federal, state, local, tribal, and international partners working together to use a risk-based approach to secure the border, and include the key elements of “Information, Integration, and Rapid Response” to achieve objectives. These elements were similar to those in the 2004 Strategy and GAO’s past work highlighted the progress and challenges the agency faced obtaining information necessary for border security; integrating security operations with partners; and mobilizing a rapid response to security threats. 
Border Patrol successfully used interagency forums and joint operations to counter threats, but challenges included assessing the benefits of border technology and infrastructure to, among other things, provide information on situational awareness. For example, in May 2010 GAO reported that the Department of Homeland Security’s (DHS) U.S. Customs and Border Protection (CBP) had not accounted for the effect of its investment in border fencing and infrastructure on security. GAO recommended that CBP conduct an analysis of the effect of tactical infrastructure on border security, with which CBP concurred. Further, GAO identified challenges in DHS efforts to coordinate with partners that help to secure the border. For example, in December 2010 GAO reported that various northern border security partners cited ongoing challenges sharing information and resources for border security operations and investigations, and that DHS did not have mechanisms for providing oversight. GAO recommended that DHS provide oversight, to which DHS concurred and stated that in January 2012 the department established an intercomponent Advisory Council to provide oversight of compliance with interagency agreements. GAO’s prior work showed that as of September 30, 2010, Border Patrol reported achieving its 2004 goal of operational control—where Border Patrol has the ability to detect and interdict illegal activity—for 1,107 (13 percent) of 8,607 miles across U.S. northern, southwest, and coastal borders. DHS transitioned at the end of fiscal year 2010 from using operational control as its goal and outcome measure for border security to using an interim measure of apprehensions on the southwest border. DHS reported that this interim measure would be used until such time as DHS developed a new goal and measure for border security that will reflect a more quantitative methodology across border locations and the agency’s evolving view of border security. 
As GAO previously testified, this interim measure, while providing useful information on activity levels, is an output measure that does not inform on program results. Therefore, it limits oversight and accountability and has reduced information provided to Congress and the public on program results. DHS stated that it had several efforts underway to establish a new measure for assessing efforts to secure the border, but because this measure is still under development, it is too early to assess it. In prior reports, GAO made recommendations to, among other things, strengthen border security technology, infrastructure, and partnerships. DHS concurred with the recommendations and has reported actions planned or underway to address them. CBP reviewed a draft of information contained in this statement and provided comments that GAO incorporated as appropriate.
ICE has designed some management controls to govern 287(g) program implementation, such as MOAs with participating agencies that identify the roles and responsibilities of each party, background checks of officers applying to participate in the program, and a 4-week training course with mandatory course examinations for participating officers. However, the program lacks several other key controls. For example:

Program Objectives: While ICE officials have stated that the main objective of the 287(g) program is to enhance the safety and security of communities by addressing serious criminal activity committed by removable aliens, they have not documented this objective in program-related materials consistent with internal control standards. As a result, some participating agencies are using their 287(g) authority to process for removal aliens who have committed minor offenses, such as speeding, carrying an open container of alcohol, and urinating in public. None of these crimes fall into the category of serious criminal activity that ICE officials described to us as the type of crime the 287(g) program is expected to pursue. While participating agencies are not prohibited from seeking the assistance of ICE for aliens arrested for minor offenses, if all the participating agencies sought assistance to remove aliens for such minor offenses, ICE would not have detention space to detain all of the aliens referred to them. ICE’s Office of Detention and Removal strategic plan calls for using the limited detention bed space available for those aliens that pose the greatest threat to the public until more alternative detention methods are available.

Use of Program Authority: ICE has not consistently articulated in program-related documents how participating agencies are to use their 287(g) authority.
For example, according to ICE officials and other ICE documentation, 287(g) authority is to be used in connection with an arrest for a state offense; however, the signed agreement that lays out the 287(g) authority for participating agencies does not address when the authority is to be used. While all 29 MOAs we reviewed contained language that authorizes a state or local officer to interrogate any person believed to be an alien as to his right to be or remain in the United States, none of them mentioned that an arrest should precede use of 287(g) program authority. Furthermore, the processing of individuals for possible removal is to be in connection with a conviction of a state or federal felony offense. However, this circumstance is not mentioned in 7 of the 29 MOAs we reviewed, resulting in implementation guidance that is not consistent across the 29 participating agencies. A potential consequence of not having documented program objectives is misuse of authority. Internal control standards state that government programs should ensure that significant events are authorized and executed only by persons acting within the scope of their authority. Defining and consistently communicating how this authority is to be used would help ICE ensure that immigration enforcement activities undertaken by participating agencies are in accordance with ICE policies and program objectives.

Supervision of Participating Agencies: Although the law requires that state and local officials use 287(g) authority under the supervision of ICE officials, ICE has not described in internal or external guidance the nature and extent of supervision it is to exercise over participating agencies’ implementation of the program. This has led to wide variation in the perception of the nature and extent of supervisory responsibility among ICE field officials and officials from 23 of the 29 participating agencies that had implemented the program and provided information to us on ICE supervision.
For example, one ICE official said ICE provides no direct supervision over the local law enforcement officers in the 287(g) program in their area of responsibility. Conversely, another ICE official characterized ICE supervisors as providing frontline support for the 287(g) program. ICE officials at two additional offices described their supervisory activities as overseeing training and ensuring that computer systems are working properly. ICE officials at another field office described their supervisory activities as reviewing files for completeness and accuracy. Officials from 14 of the 23 agencies that had implemented the program were pleased with ICE’s supervision of the 287(g) trained officers. Officials from another four law enforcement agencies characterized ICE’s supervision as fair, adequate, or provided on an as-needed basis. Officials from three agencies said they did not receive direct ICE supervision or that supervision was not provided daily, which an official from one of these agencies felt was necessary to assist with the constant changes in requirements for processing of paperwork. Officials from two law enforcement agencies said ICE supervisors were either unresponsive or not available. ICE officials in headquarters noted that the level of ICE supervision provided to participating agencies has varied due to a shortage of supervisory resources. Internal control standards require an agency’s organizational structure to define key areas of authority and responsibility. Given the rapid growth of the program, defining the nature and extent of ICE’s supervision would strengthen ICE’s assurance that management’s directives are being carried out.

Tracking and Reporting Data: MOAs that were signed before 2007 did not contain a requirement to track and report data on program implementation. For the MOAs signed in 2007 and after, ICE included a provision stating that participating agencies are responsible for tracking and reporting data to ICE.
However, in these MOAs, ICE did not define what data should be tracked or how it should be collected and reported. Of the 29 jurisdictions we reviewed, 9 MOAs were signed prior to 2007 and 20 were signed in 2007 or later. Regardless of when the MOAs were signed, our interviews with officials from the 29 participating jurisdictions indicated confusion regarding whether they had a data tracking and reporting requirement, what type of data should be tracked and reported, and what format they should use in reporting data to ICE. Internal control standards call for pertinent information to be recorded and communicated to management in a form and within a time frame that enables management to carry out internal control and other responsibilities. Communicating to participating agencies what data is to be collected and how it should be gathered and reported would help ensure that ICE management has the information needed to determine whether the program is achieving its objectives.

Performance Measures: ICE has not developed performance measures for the 287(g) program to track and evaluate the progress toward attaining the program’s objectives. GPRA requires that agencies clearly define their missions, measure their performance against the goals they have set, and report on how well they are doing in attaining those goals. Measuring performance allows organizations to track the progress they are making toward their goals and gives managers critical information on which to base decisions for improving their programs. ICE officials stated that they are in the process of developing performance measures, but have not provided any documentation or a time frame for when they expect to complete the development of these measures. ICE officials also stated that developing measures for the program will be difficult because each state and local partnership agreement is unique, making it challenging to develop measures that would be applicable for all participating agencies.
Nonetheless, standard practices for program and project management call for specific desired outcomes or results to be conceptualized and defined in the planning process as part of a road map, along with the appropriate projects needed to achieve those results and milestones. Without a plan for the development of performance measures, including milestones for their completion, ICE lacks a roadmap for how this project will be achieved. ICE and participating agencies used program resources mainly for personnel, training, and equipment, and participating agencies reported activities, benefits, and concerns stemming from the program. For fiscal years 2006 through 2008, ICE received about $60 million to provide training, supervision, computers, and other equipment for participating agencies. State and local participants provided officers, office space, and other expenses not reimbursed by ICE, such as office supplies and vehicles. ICE and state and local participating agencies cite a range of benefits associated with the 287(g) partnership. For example, as of February 2009, ICE reported enrolling 67 agencies and training 951 state and local law enforcement officers. At that time, ICE had 42 additional requests for participation in the 287(g) program, and 6 of the 42 have been approved pending approval of an MOA. According to data provided by ICE for 25 of the 29 program participants we reviewed, during fiscal year 2008, about 43,000 aliens had been arrested pursuant to the program. Based on the data provided, individual agency participant results ranged from about 13,000 arrests in one location, to no arrests in two locations. Of those 43,000 aliens arrested pursuant to the 287(g) authority, ICE detained about 34,000, placed about 14,000 of those detained (41 percent) in removal proceedings, and arranged for about 15,000 of those detained (44 percent) to be voluntarily removed. 
The remaining 5,000 (15 percent) arrested aliens detained by ICE were either given a humanitarian release, sent to a federal or state prison to serve a sentence for a felony offense, or not taken into ICE custody given the minor nature of the underlying offense and limited availability of the federal government’s detention space. Participating agencies cited benefits of the program including a reduction in crime and the removal of repeat offenders. However, more than half of the 29 state and local law enforcement agencies we reviewed reported concerns community members expressed about the 287(g) program, including concerns that law enforcement officers in the 287(g) program would be deporting removable aliens pursuant to minor traffic violations (e.g., speeding) and concerns about racial profiling. We made several recommendations to strengthen internal controls for the 287(g) program to help ensure the program operates as intended. Specifically, we recommended that ICE (1) document the objective of the 287(g) program for participants, (2) clarify when the 287(g) authority is authorized for use by state and local law enforcement officers, (3) document the nature and extent of supervisory activities ICE officers are expected to carry out as part of their responsibilities in overseeing the implementation of the 287(g) program, (4) specify the program information or data that each agency is expected to collect regarding their implementation of the 287(g) program and how this information is to be reported, and (5) establish a plan, including a time frame, for the development of performance measures for the 287(g) program. DHS concurred with each of our recommendations and reported plans and steps taken to address them. Mr. Chairman and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you or other Members of the Committee may have. 
For questions about this statement, please contact Richard Stana at 202-512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Bill Crocker, Lori Kmetz, Susanna Kuebler, and Adam Vogt.
This testimony discusses the Department of Homeland Security's (DHS) U.S. Immigration and Customs Enforcement's (ICE) management of the 287(g) program. Recent reports indicate that the total population of unauthorized aliens residing in the United States is about 12 million. Some of these aliens have committed one or more crimes, although the exact number of aliens that have committed crimes is unknown. Some crimes are serious and pose a threat to the security and safety of communities. ICE does not have the agents or the detention space that would be required to address all criminal activity committed by unauthorized aliens. Thus, state and local law enforcement officers play a critical role in protecting our homeland because, during the course of their daily duties, they may encounter foreign-national criminals and immigration violators who pose a threat to national security or public safety. On September 30, 1996, the Illegal Immigration Reform and Immigrant Responsibility Act was enacted and added section 287(g) to the Immigration and Nationality Act. This section authorizes the federal government to enter into agreements with state and local law enforcement agencies, and to train selected state and local officers to perform certain functions of an immigration officer--under the supervision of ICE officers--including searching selected federal databases and conducting interviews to assist in the identification of those individuals in the country illegally. The first such agreement under the statute was signed in 2002, and as of February 2009, 67 state and local agencies were participating in this program. The testimony today is based on our January 30, 2009, report regarding the program including selected updates made in February 2009. 
Like the report, this statement addresses (1) the extent to which Immigration and Customs Enforcement has designed controls to govern 287(g) program implementation and (2) how program resources are being used and the activities, benefits, and concerns reported by participating agencies. To do this work, we interviewed officials from both ICE and participating agencies regarding program implementation, resources, and results. We also reviewed memorandums of agreement (MOA) between ICE and the 29 law enforcement agencies participating in the program as of September 1, 2007, that are intended to outline the activities, resources, authorities, and reports expected of each agency. We also compared the controls ICE designed to govern implementation of the 287(g) program with criteria in GAO's Standards for Internal Control in the Federal Government, the Government Performance and Results Act (GPRA), and the Project Management Institute's Standard for Program Management. More detailed information on our scope and methodology appears in the January 30, 2009 report. In February 2009, we also obtained updated information from ICE regarding the number of law enforcement agencies participating in the 287(g) program as well as the number of additional law enforcement agencies being considered for participation in the program. We conducted our work in accordance with generally accepted government auditing standards. In summary, ICE has designed some management controls, such as MOAs with participating agencies and background checks of officers applying to participate in the program, to govern 287(g) program implementation. However, the program lacks other key internal controls. 
Specifically, program objectives have not been documented in any program-related materials, guidance on how and when to use program authority is inconsistent, guidance on how ICE officials are to supervise officers from participating agencies has not been developed, data that participating agencies are to track and report to ICE has not been defined, and performance measures to track and evaluate progress toward meeting program objectives have not been developed. Taken together, the lack of internal controls makes it difficult for ICE to ensure that the program is operating as intended. ICE and participating agencies used program resources mainly for personnel, training, and equipment, and participating agencies reported activities and benefits, such as a reduction in crime and the removal of repeat offenders. However, officials from more than half of the 29 state and local law enforcement agencies we reviewed reported concerns members of their communities expressed about the use of 287(g) authority for minor violations and/or about racial profiling. We made several recommendations to strengthen internal controls for the 287(g) program to help ensure that the program operates as intended. DHS concurred with our recommendations and reported plans and steps taken to address them.
For nearly 40 years, the Rocky Flats site, located about 16 miles northwest of downtown Denver, served as a nuclear weapons production facility, and it now bears the scars of that role. Soil, groundwater, and surface water at the site, as well as many of the buildings, are contaminated with radioactive materials, such as plutonium and uranium; toxic metals, such as beryllium; and hazardous chemicals, such as cleaning solvents and degreasers. Accordingly, the site is now one of the Department’s priorities for environmental cleanup. While most of the approximately 6,300 acres that make up the Rocky Flats site served through the years as an undeveloped buffer zone, about one-half of a square mile (385 acres) in the center of the site constituted the industrial area, where, for decades, plutonium was recycled and shaped into pits for use in nuclear weapons. About three-fourths of the site’s more than 800 original structures (buildings, guard towers, storage tanks) were not radiologically or chemically contaminated by site operations over the years, but the remainder were—some severely so. This was the case, for example, for seven building complexes that housed the plutonium-processing operations. The cleanup and closure of Rocky Flats is a complex, tedious, and labor-intensive undertaking. Because plutonium-contaminated materials must be specially contained and carefully handled, the work is hard and slow. Plutonium is dangerous to human health, even in minute quantities, especially if inhaled. Workers dealing with plutonium-contaminated materials and equipment must wear cumbersome protective suits with enclosed respiratory systems and sometimes must wield heavy and ungainly tools. The equipment being worked on must also be enclosed within plastic or glass to prevent airborne contaminants from reaching unprotected workers or surfaces. Figure 1 shows workers in protective clothing dealing with contaminated materials. 
Within DOE, the Office of Environmental Management is responsible for cleaning up the Department’s nuclear weapons complex and closing down facilities, including Rocky Flats, that are no longer needed for producing nuclear weapons. At the Rocky Flats Field Office, approximately 190 DOE employees oversee the contractor’s activities. In July 1995, Kaiser-Hill was awarded a 5-year contract to begin cleaning up Rocky Flats. When we reported in April 1999 on the status of the cleanup project, Kaiser-Hill’s target date to close Rocky Flats was 2010. In response to a 1996 DOE initiative to close as many sites as possible by 2006, DOE entered into negotiations with Kaiser-Hill that resulted in the current closure contract, which took effect February 1, 2000. Kaiser-Hill manages the cleanup work, which is done predominantly by subcontractors. As required by the contract, Kaiser-Hill has developed a closure project baseline, which serves as its detailed management plan for the project. The closure contract specifies both Kaiser-Hill’s and DOE’s responsibilities. Kaiser-Hill is responsible for processing, packaging, and shipping off-site all of Rocky Flats’ nuclear materials and radioactive and hazardous wastes; cleaning up and demolishing more than 700 structures that remained on-site in February 2000; and cleaning up the site’s contaminated soil and groundwater. DOE is required to deliver a variety of services and items to support the project. Essentially, the contract requires DOE to arrange receiver sites for all the materials and wastes that must be shipped off-site and to obtain the necessary certifications for the containers in which the materials and wastes must be packed and shipped. Many DOE sites will play a significant role in Rocky Flats’ cleanup and closure, especially those sites that are scheduled to receive materials or wastes from Rocky Flats, such as the Savannah River Site in South Carolina and the Waste Isolation Pilot Plant (WIPP) in New Mexico. 
The closure contract is structured so that DOE pays all of the cleanup costs plus an incentive fee for Kaiser-Hill’s services. Kaiser-Hill will not receive the majority of this fee until it has finished most of the cleanup and closure tasks, as specified in the contract. Kaiser-Hill will earn a higher incentive fee if it saves on costs and finishes its work before the target completion date. In effect, DOE will share the savings from these lower costs by paying Kaiser-Hill a higher fee. Conversely, if Kaiser-Hill exceeds the contract’s target date and cost, resulting in higher costs to the government, the contractor will earn a lower fee. The contract also requires Kaiser-Hill to comply with the terms of the Rocky Flats Cleanup Agreement, which serves as the regulatory framework for the site’s cleanup and closure. The agreement specifies the roles and responsibilities of DOE and the two primary regulators for Rocky Flats: the Environmental Protection Agency (EPA) and the state of Colorado’s Department of Public Health and Environment. EPA derives its regulatory authority primarily from the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, as amended, commonly known as Superfund. Colorado exercises regulatory authority over hazardous wastes under the Resource Conservation and Recovery Act of 1976, as amended, as well as the Colorado Hazardous Waste Act. Pursuant to the cleanup agreement, EPA has the lead regulatory authority over the cleanup of the site’s buffer zone, while Colorado has the lead authority over the cleanup of the industrial area. The cleanup agreement incorporates the requirements of both Superfund and the Resource Conservation and Recovery Act, and requires that the site’s other stakeholders be consulted during the development of cleanup plans. These other stakeholders include the Defense Nuclear Facilities Safety Board; local governments; community, business, and citizen groups; and individuals. 
Under the terms of the contract, and as used in this report, “closure” is defined as the point in time at which Kaiser-Hill has completed all of its cleanup tasks, as specified in the contract. When Kaiser-Hill notifies DOE that it has completed its work, DOE has 90 days either to accept the project as complete or to provide a list of items that Kaiser-Hill must address. The contractor will then have 9 months to complete its work on these items. Separate from how “closure” is defined under the contract, however, is the process of removing Rocky Flats from the list of Superfund sites. When DOE and the regulators are satisfied that the cleanup meets all regulatory requirements, and sufficient monitoring information has been gathered on the condition of the air, water, and soil, EPA will have the information it needs to consider removing the site from the Superfund list. After closure has been achieved, however, monitoring and maintenance activities at the site will continue for many decades. Soil and water conditions will continue to be monitored to ensure that contamination remains within acceptable levels. Also, all treatment facilities, such as groundwater treatment systems, will continue to be maintained as long as necessary. DOE’s long-term cost estimates include the costs of monitoring and maintenance activities through 2070, but some of the activities will probably need to go on longer. By the end of fiscal year 2000, Kaiser-Hill had made significant strides in cleaning up the Rocky Flats site, but the vast majority of the work, and some of the most technically challenging, remained. The bulk of the work entails (1) processing, packaging, and shipping various forms of plutonium and uranium; (2) processing, packaging, and shipping radioactive wastes; (3) cleaning up and demolishing buildings and other structures; and (4) remediating contaminated water and soil. 
The total cleanup cost is estimated to be about $7.5 billion—in constant 2000 dollars—if the site’s closure occurs by December 15, 2006. The project’s total cost will grow, however, if additional work is required or if delays occur. After closure, costs will continue through at least 2070 for activities such as site monitoring and maintenance, and for contractor employee retirement benefits. These long-term costs will be about another $1.4 billion in constant 2000 dollars. Since it began cleanup operations in fiscal year 1996, the contractor has made considerable progress toward closure in several work categories. Progress has been greatest in two areas: shipping nuclear materials and remediating groundwater. In most major areas of work, however, the lion’s share remains to be done. Table 1 shows the status of the four major cleanup activities, at the end of fiscal year 2000. Kaiser-Hill has made significant progress in shipping the site’s special nuclear materials (plutonium and enriched uranium). When Kaiser-Hill began cleanup work under its previous contract (in fiscal year 1996), the site had over 16 metric tons of special nuclear materials, including various forms of plutonium (e.g., pits, other metal parts, and oxides) and enriched uranium. The contractor was responsible for stabilizing and packaging all of the special nuclear materials and shipping them off-site, primarily to other DOE sites, such as Oak Ridge (in Tennessee), Pantex (in Texas), and Savannah River (in South Carolina). By the end of fiscal year 2000, the contractor had inventoried the special nuclear materials and had prepared and shipped all of the plutonium pits and most of the highly enriched uranium. The contractor had also shipped some of the 6.6 metric tons of plutonium metals. 
Kaiser-Hill still has to stabilize, package, and ship off-site the remainder of the plutonium-contaminated highly enriched uranium, the remainder of the plutonium metals, and all 3.2 metric tons of plutonium oxides. According to a DOE official, shipments of plutonium metals began in the spring of 2000; shipments of oxides are expected to begin in June 2001. Kaiser-Hill plans to complete all shipments of metals and oxides by the end of fiscal year 2002. In the sequence of activities necessary to close the site, removal of the special nuclear materials logically precedes many of the other cleanup activities. Kaiser-Hill has made limited progress in processing, packaging, and shipping the various radioactive wastes, including mixed waste that also contains hazardous waste. The total amount of radioactive waste to be shipped includes waste that was already stored at the site and waste that is generated during cleanup activities. As of September 30, 2000, the contractor had exceeded its shipping goal by shipping off-site more than 34,500 cubic meters of low-level radioactive and low-level radioactive mixed wastes—about 13 percent of the total. In addition, by the end of fiscal year 2000, the contractor had shipped off-site about 320 cubic meters of transuranic and transuranic mixed wastes, or about 2 percent of the total. The contractor was well below its goal for shipping transuranic waste, having shipped only 25 percent of the amount it had projected to ship in fiscal year 2000. Because waste will continue to be generated by cleanup activities, waste shipments will continue through the life of the cleanup project. From fiscal year 2001 through fiscal year 2006, for example, the contractor expects to ship nearly 224,000 cubic meters of low-level radioactive waste and about 14,400 cubic meters of transuranic waste. In part, the contractor’s limited progress in shipping the transuranic waste is due to the late opening of WIPP—the Department’s repository for such waste. 
The WIPP facility did not open until late March 1999, and shipments of transuranic waste from Rocky Flats did not begin until June 1999. Kaiser-Hill has made headway on the vast amount of work involved in preparing buildings and other structures for demolition, and has demolished some structures. But the majority of both the preparatory work and the demolition lies ahead. From fiscal year 1996 through fiscal year 2000, the contractor dispositioned (i.e., disposed of, sold, or donated) hundreds of thousands of pieces of uncontaminated personal property; removed thousands of kilograms of plutonium and other nuclear materials from furnaces, pipes, and other locations within buildings; and drained and removed plutonium- or uranium-laden liquids or residues from process pipes and tanks. The contractor also dismantled plutonium-processing furnaces, stripped out contaminated process pipelines, and cut up and removed hundreds of contaminated glove boxes. How much of the total preparatory work has been accomplished is difficult to say, as the contractor does not measure that work separately. Nevertheless, a senior Kaiser-Hill official estimated that only about 10 percent of the total predemolition work had been completed as of September 30, 2000. Although many of the remaining tasks are similar to those already completed, others are structure-specific. For example, one building contains processing equipment that is two stories tall; another houses a huge plutonium storage vault that is the length of a football field and has 14-foot-thick concrete walls. In the six remaining plutonium-processing complexes alone, hundreds of miles of piping must be stripped out, and hundreds of contaminated glove boxes, furnaces, and other items must be cut into pieces small enough to fit into shipping containers for disposal. Kaiser-Hill is using innovative technology to clean up the plutonium buildings. 
For example, it is using a fine aerosol sugar fog to clean some of the most contaminated rooms at the site. The sugar fog—called Capture Coating™—is created by a machine using sound waves to make the droplets very small. The fog is then pumped into the room through a flexible duct. Airborne radioactive particles adhere to the fog, which settles onto the walls and floor and is allowed to dry. The contaminated surfaces can then be more safely removed. Another innovative approach, developed through experience Kaiser-Hill gained cleaning up the first plutonium building, is using a plasma arc torch instead of conventional tools to cut up large pieces of contaminated equipment. The plasma arc torch—a device that electrically heats gas to form a plasma for high-temperature operations, such as melting metal—is much faster, and it distances workers from sharp edges on tools and contaminated metal parts. To further enhance worker safety, Kaiser-Hill is pursuing the use of robotic arms to operate the torch. To improve the overall efficiency of cleanup and demolition activities, Kaiser-Hill is reducing the size of the site’s protected area—a restricted zone within which special nuclear materials are kept under access and security controls. Maintaining a protected area is expensive, requiring the presence of extensive security equipment and armed guards. In addition, a large protected area limits the time that workers can devote to cleanup activities because only those with the necessary security clearances can enter the protected area unescorted, and the entry and exit processes are time-consuming. The protected area now includes all of the plutonium buildings. By consolidating special nuclear materials and processes into one building, Kaiser-Hill plans to reduce the protected area to about one-fifth its current size in early 2001, saving an estimated $10 million per year in security costs from then through closure, as well as achieving productivity improvements. 
Kaiser-Hill plans to apply any such cost savings to other cleanup work at the site, in accordance with direction provided in the conference report on DOE’s fiscal year 2001 appropriations. Nearly all of the demolition work lies ahead. By the end of fiscal year 2000, Kaiser-Hill had demolished 81 structures encompassing about 196,000 square feet. That equates to about 10 percent of the total number of structures (802) that existed at the site when cleanup began but only about 5 percent of the total square footage. Although many of the 81 structures demolished so far are relatively minor (i.e., small or uncontaminated), others represent major accomplishments for the contractor. For example, Kaiser-Hill demolished one of the building complexes that, early in the production era, housed plutonium-processing activities. This building complex encompassed 13 structures and more than 75,000 square feet of enclosed space. Its demolition, in fiscal year 2000, was the first in the nation of a plutonium facility of that size and complexity. Remaining to be demolished (after completion of the necessary preparatory activities) at the end of fiscal year 2000 were 721 structures, encompassing about 3.4 million square feet. Most of this demolition is scheduled to occur during the last 2 years of the project. Figure 2 shows, by severity of contamination, the structures that remained to be demolished as of September 30, 2000, and the ones already demolished. And finally, the bulk of the environmental remediation remains to be done, much of it also in the last 2 years of the project. Environmental remediation activities at the site are designed to clean up contaminated groundwater, surface water, and soil. Some contaminated groundwater seeps to the surface, particularly during periods of rain or snow, and then trickles into ditches and streams. Similarly, contaminated soil washes into ditches and streams when it rains or snows. 
Accordingly, the remediation of both the groundwater and the soil is designed to protect not only those elements but also the surface water. When surface water leaves the site—via ditches and streams—it must be safe for all purposes, including drinking water. Currently, the site’s runoff water is collected in holding ponds and tested prior to its release to ensure that radioactive materials do not leave the site in surface water. By the end of fiscal year 2000, Kaiser-Hill had installed three of the four planned groundwater treatment systems. Each system intercepts a contaminated plume of groundwater before it can surface and funnels the plume through treatment cells that remove or reduce the contaminants. At least one more treatment system is planned—for the plume underlying the industrial area, pending an investigation of the source, type, and severity of the plume’s contaminants. But Kaiser-Hill plans no remediation of the site’s other seven contaminated plumes because it believes they are stationary under the site. According to the Rocky Flats Cleanup Agreement, stationary groundwater plumes that do not present a risk to surface water require no remediation, regardless of contamination levels. Long-term monitoring of the seven plumes will be necessary to ensure that they remain stationary. As for the remediation of contaminated soil, most of it remains to be done. Through the end of fiscal year 2000, Kaiser-Hill had excavated or treated several areas of soil contamination that were ranked as high priorities for remediation because of their potential risk to human health or the environment. For example, the contractor excavated or treated soil contaminated by past spills or leaks of radioactive or hazardous materials. When Kaiser-Hill began its cleanup efforts at the site in fiscal year 1996, it was responsible for 308 areas of potential soil contamination. 
Kaiser-Hill is responsible for determining the levels of contamination present and, thus, which of these areas require remedial action pursuant to the requirements of the Rocky Flats Cleanup Agreement. At the time of our review, much of the characterization remained to be done, particularly under the buildings in the industrial area. As a result, the depth and extent of soil contamination—particularly in the industrial area—was unknown. Relying on preliminary investigations and site records, Kaiser-Hill thought it would need to remediate 124 areas (of the 308) and to take no action (or no further action) on the other 184 areas. Once it has finished characterizing the soil in the industrial area, however, Kaiser-Hill’s remediation plans may change. DOE and the regulators (EPA and Colorado) must approve not only each remedial action that Kaiser-Hill takes but also each proposal to take no remedial action on an area. At the time of our review, Kaiser-Hill had completed remedial actions on 25 of the 124 areas thought to require remediation. Of the 25 remedial actions, 3 had been approved by the regulators; the other 22 were awaiting approval. Remediation of the other 99 areas remained to be done. Most of the remaining soil remediation is scheduled to occur toward the end of the project, to coincide with or follow demolition activities. As for the 184 areas thought to require no remediation, Kaiser-Hill had submitted 111 no-action proposals. Of these, 45 had been approved by the regulators; the other 66 were awaiting approval. Proposals had not been submitted on the other 73 areas. The immense task of cleaning up and closing Rocky Flats will cost about $7.5 billion from fiscal year 1996 through the target closure date, plus about $1.4 billion in post-closure costs through 2070. These costs, however, could increase substantially, for various reasons. 
The $7.5 billion estimate of costs through the target closure date is made up of four components:

The current contract, effective February 2000. This contract represents more than half the total cost. If closure occurs by the target date, the contract cost will be about $4 billion. DOE will pay all costs that it determines are allowable under the terms of the contract. Kaiser-Hill’s incentive fee, paid in addition to the allowable costs, is estimated to be about $340 million but will vary—from $130 million to $460 million—depending on the contractor’s performance. The fee is tied partly to schedule and partly to cost. Kaiser-Hill will earn the “target fee” of $340 million if it completes its work within a specified schedule and cost range: between December 16, 2006, and March 31, 2007, at a cost between $4 billion and $4.2 billion. Kaiser-Hill can earn an additional “schedule incentive” fee of up to $20 million and an additional “cost incentive” fee of 30 cents of every dollar saved from the target cost of $4 billion. Conversely, for late or more costly completion, Kaiser-Hill loses a portion of its fee. For each day that closure is delayed beyond March 31, 2007, Kaiser-Hill loses about $55,000. And for each dollar of costs in excess of $4.2 billion, Kaiser-Hill loses 30 cents of its fee. In no case, however (aside from fee reductions stemming from safety violations), will the contractor earn a fee less than $130 million or more than $460 million.

The previous Kaiser-Hill contract. This contract, which cost about $2.9 billion, including the fee, took effect in July 1995 and ran through January 2000.

The cost of DOE’s Rocky Flats Field Office. This cost is about $553 million, from fiscal year 1996 through the target closure date. This cost is for staff salaries, site utilities, litigation support, regulatory oversight, and other expenses.

The cost incurred by other DOE sites and organizations in support of Rocky Flats’ closure. This cost—about $130 million, from fiscal year 1996 through target closure—is for such activities as certifying shipping containers, providing transportation for nuclear materials and wastes, and receiving and storing Rocky Flats’ materials and wastes. Although DOE has not quantified all of these sites’ costs to support Rocky Flats’ closure, DOE officials provided us with the major ones. For example, DOE is spending about $35 million to modify a storage facility at Savannah River to accommodate nuclear material shipped from Rocky Flats. Also, through 2006, DOE will spend between $17 million and $22 million to ship transuranic waste from Rocky Flats to WIPP. In addition, for the same period of time, the estimated cost of the DOE headquarters office that supports Rocky Flats’ closure is about $12 million.

The $1.4 billion estimate of long-term (post-closure) costs is made up of two components:

Site monitoring and maintenance activities. Through 2070, these are estimated to cost $400 million. After site closure, DOE or some other entity will need to monitor environmental conditions at the site and maintain the systems and structures that remain there (such as the groundwater treatment systems and monitoring wells).

Post-retirement benefits for Rocky Flats’ contractor employees. These benefits—about $1 billion through 2070—include pensions and medical and life insurance. According to a Rocky Flats budget official, DOE is liable for such costs under the provisions not only of the current Kaiser-Hill contract but also of previous site management contracts with Kaiser-Hill and its predecessors (i.e., Rockwell International Corporation; EG&G, Inc.; and Dow Chemical Company). This official also said that DOE has recently assembled a task force to evaluate post-closure liability issues at DOE’s closure sites. Table 2 summarizes the estimated Rocky Flats closure and post-closure costs. 
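The incentive-fee terms described above reduce to straightforward arithmetic. The sketch below is illustrative only: it models the target fee, the 30-cent cost-sharing bands, the per-day schedule penalty, and the contractual floor and cap using the dollar figures in this report, but omits terms the report mentions without quantifying in full (such as how the $20 million schedule incentive accrues and fee reductions for safety violations).

```python
def kaiser_hill_fee(actual_cost, days_late=0):
    """Simplified sketch of the closure contract's fee arithmetic.

    Figures (in dollars) come from the report; the real contract
    contains additional terms not modeled here.
    """
    TARGET_FEE = 340_000_000        # "target fee"
    TARGET_COST = 4_000_000_000     # low end of the target cost range
    COST_CEILING = 4_200_000_000    # high end of the target cost range
    FEE_FLOOR = 130_000_000         # minimum fee (absent safety violations)
    FEE_CAP = 460_000_000           # maximum fee

    fee = TARGET_FEE
    if actual_cost < TARGET_COST:
        # Cost incentive: 30 cents of every dollar saved below $4 billion
        fee += 0.30 * (TARGET_COST - actual_cost)
    elif actual_cost > COST_CEILING:
        # Cost penalty: 30 cents of every dollar above $4.2 billion
        fee -= 0.30 * (actual_cost - COST_CEILING)
    # Schedule penalty: about $55,000 per day of closure past March 31, 2007
    fee -= 55_000 * days_late
    # Clamp to the contractual floor and cap
    return max(FEE_FLOOR, min(FEE_CAP, fee))
```

For example, finishing at $3.9 billion on schedule would yield a fee of about $370 million (the target fee plus 30 percent of the $100 million saved), while a large overrun is cut off at the $130 million floor.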
The projected costs, both through closure and after closure, could be substantially greater than those shown in table 2, as explained below. Changes to the scope of the project or the contract requirements could result in changes to the target cost and the duration of the project. Changes could result, for example, if DOE imposed new requirements for characterizing waste or failed to supply a service or item specified in the contract. If changes are outside the scope of the existing contract or if DOE fails to deliver as required and thereby jeopardizes the contractor’s schedule and, thus, its potential fee, Kaiser-Hill could seek relief using a standard federal contracting provision called a “request for equitable adjustment.” The relief could take the form of adjustments to the project’s schedule, contract cost, or both. In early November 2000, Kaiser-Hill submitted to DOE its first request for equitable adjustment, seeking a $2 million cost increase and a $170,000 fee increase for harm caused by DOE’s directed change to the design of a shipping container for plutonium. In late November 2000, Kaiser-Hill submitted another request, which sought a $1 million cost increase for delays and cost increases—in fiscal year 2000 alone—caused by the change in waste acceptance criteria imposed by New Mexico for the disposal of transuranic waste at WIPP. Kaiser-Hill has also advised DOE that it plans to submit another request for equitable adjustment related to the change in the WIPP waste acceptance criteria—this request will be for schedule delays and cost increases for fiscal years 2001 and beyond. The contractor was considering about eight additional requests for equitable adjustment, any of which could increase the final cost—or extend the closure date—of the project. At the time of our review, DOE officials were reviewing the requests in preparation for negotiations with Kaiser-Hill. 
Thus, the two parties had not yet reached agreement on what adjustments, if any, would be made to the contract’s schedule and cost as a result of the first two requests. If closure is delayed because Kaiser-Hill is late in completing its cleanup activities, the financial effect could be significant. For example, if closure were delayed by 2 years, the project’s cost would increase by about $530 million; these costs would be paid by DOE. As we discuss later in this report, we have substantial reason to expect that delays will occur. The total cost at Rocky Flats would also rise if any claims for monetary damages are brought against DOE to compensate for injuries to natural resources, such as wildlife, fish, and lakes, on or near the site. Since some injuries to natural resources may be addressed in a cleanup, the amount of damages for which DOE may be liable depends, in part, on the nature and extent of the remedial action. According to DOE officials, no claims for damages have been filed at Rocky Flats and DOE has not yet estimated the extent of its potential liability for natural resource damages at that site. Costs resulting from such claims for monetary damages are not included in the estimated costs of the site’s cleanup and closure presented in this report. The estimated cost of site monitoring and maintenance activities assumes that no further environmental problems will surface at the site because of DOE’s past activities. However, it is unclear whether this assumption will prove to be correct. Furthermore, while the DOE estimate includes costs through 2070, some costs will continue beyond that date. Some monitoring activities, for example, are likely to continue in perpetuity. 
To close Rocky Flats on time and within budget, Kaiser-Hill and DOE must overcome major challenges: (1) getting the automated plutonium-packaging system to reliably perform at the rate needed for timely completion; (2) overcoming limitations on the available number of transportation casks and on loading capability for transuranic waste; (3) completing the planning necessary to accomplish the cleanup, demolition, and remediation of the site’s structures, most of which are scheduled for the final 2 years of the contract; (4) clarifying uncertainties about the extent of contamination and cleanup requirements at the site; and (5) preventing safety problems, which can result in work shutdowns that can delay cleanup work. Kaiser-Hill and DOE are working to address these challenges, but their number and complexity make closure by 2006 unlikely. Kaiser-Hill’s own risk assessment concluded that it had only about a 15-percent probability of meeting the target closure date. Furthermore, after 8 months of performance under the new contract, the project was already slightly over cost and behind schedule. The development and implementation of the site’s plutonium stabilization and packaging system—a prototype for the Department—has faced numerous delays. The system was designed to package plutonium metals and oxides in long-term storage containers. Plutonium reacts with water to form hydrogen gas and, in some forms, can spontaneously ignite when exposed to oxygen. Accordingly, the first stage of the system is designed to stabilize the plutonium by heating it in furnaces to very high temperatures (at least 950 degrees Celsius) to remove moisture and impurities, and thereby stabilize the oxides. The second stage of the system—the automated packaging portion—will place the plutonium metals and oxides into specially designed, long-term storage containers, consisting of three nested cans. 
All the packaging steps, including laser-welding the lids to the containers, will be controlled remotely. A number of problems delayed the system’s startup and increased its costs. For example, the laser-welds on the container lids proved to be porous when tested inside the negative pressure of a glove box. The porosity had not been apparent in earlier tests at normal atmospheric pressure. Design and construction flaws caused delays as well. For example, the design of the furnaces in the stabilization portion of the system did not allow adequate access for maintenance, and the furnaces were unreliable. Consequently, the stabilization portion of the system, originally designed to be automated, was replaced by a manually operated process. In addition, the ceramic shelves in the manual furnaces took too long to heat up and had to be replaced with metal shelves. These and other problems are now resolved. However, the delays increased the system’s cost from an original estimate of less than $30 million to over $85 million, as of September 2000. In January 2001, Kaiser-Hill estimated that the system would start operating in March 2001, but the system had not yet completed operational readiness testing, and ongoing problems may further delay startup. One ongoing problem is that in August 2000, DOE directed Kaiser-Hill to ensure that it could meet the plutonium stabilization and packaging requirements issued at that time by the Savannah River Site, where Rocky Flats’ packaged plutonium will be sent for storage, pending its ultimate disposition. These requirements, for plutonium to be stored at Savannah River, include developing and implementing a plan for testing the container welds and meeting criteria for monitoring and blending the plutonium. Kaiser-Hill concluded that the additional tasks required would increase the cost and delay the start of the plutonium stabilization and packaging system. 
However, there is some debate between DOE and the contractor about whether these requirements are in addition to those that were included under the closure contract. If it is determined that Kaiser-Hill was directed to meet requirements in addition to those in the contract, Kaiser-Hill could request an equitable adjustment to the contract. Once the system begins operations, it is not clear if it can sustain the necessary production rate to allow the site’s closure by the target date. According to DOE officials, to complete the plutonium packaging on time before delays compressed the schedule, the system needed to operate only about 10 percent of the time. Under its compressed schedule, though, the system must operate over 70 percent of the time. In effect, under the compressed schedule, the packaging portion of the system will have to produce eight containers a day—one container for every 2 hours of operation. Although Kaiser-Hill officials believe that this production rate is within the system’s capability, no empirical evidence supports this view. If the system cannot meet its expected production rate, many other cleanup activities will be delayed because they cannot begin until the completion of the system’s activities. Because of continuing concerns about the viability of the system, Kaiser-Hill is studying alternatives for packaging the site’s plutonium; the study had not been completed at the time of our review. Thus, it is unclear whether a viable alternative exists and could be installed in time to complete plutonium-packaging operations as scheduled by May 2002. Removing the transuranic wastes is one of the most difficult obstacles to the site’s closure because of the large quantity of wastes and the complex challenges they present. Kaiser-Hill must ship a total volume of transuranic waste comparable to over 80,000 drums (55 gallons each), or more than 2,000 truckloads.
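The throughput figures above can be cross-checked with a back-of-the-envelope calculation. The sketch below is ours, not the report's: producing eight containers a day at one container per 2 hours of operation implies 16 hours of operation per day, about two-thirds of a 24-hour day, which is broadly consistent with the report's "over 70 percent" figure once startup, maintenance, and changeover time are included.

```python
# Back-of-the-envelope check of the plutonium-packaging throughput
# figures cited in the report (illustrative only).

HOURS_PER_CONTAINER = 2   # one container per 2 hours of operation
CONTAINERS_PER_DAY = 8    # required daily output under the compressed schedule

operating_hours = CONTAINERS_PER_DAY * HOURS_PER_CONTAINER  # 16 hours/day
utilization = operating_hours / 24                          # fraction of a 24-hour day

print(f"Required operating time: {operating_hours} h/day "
      f"({utilization:.0%} of a 24-hour day)")
```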
Kaiser-Hill’s ability to ship this waste off-site to WIPP by the site’s target closure date is questionable for the following two reasons: Limited Availability of Transportation Casks Could Affect the Shipping Rate. Transuranic waste must be shipped to WIPP in special transportation casks approved by the Nuclear Regulatory Commission. DOE made a commitment to deliver 1,440 casks per year to Rocky Flats during the peak shipping years (fiscal years 2002-2004). This number is sufficient for fiscal year 2004 but not for fiscal years 2002 and 2003. For example, to meet Kaiser-Hill’s projected shipping schedule for fiscal year 2003 (696 shipments), DOE would need to provide 2,088 casks, or 648 more than DOE has agreed to provide. It is unclear whether DOE will provide enough additional casks. According to a DOE transuranic waste program manager, DOE will supply the 1,440 casks it has agreed to, but it will provide additional casks on a “best efforts basis, considering the schedules and requests from other sites.” Other DOE sites, also under pressure to ship their waste by specific dates, will be competing for use of the casks. Figure 3 shows workers loading drums into a transportation cask. Loading Capability May Not Meet Shipping Needs. Kaiser-Hill may not have adequate loading capability to support its shipping needs, especially as the compressed schedule increases the projected need for loading capability in the site’s peak shipping years. The waste is loaded by crane into the transportation casks on flatbed trailers. The site currently has only one loading facility; two more comparable loading facilities are under construction and expected to be completed by November 2001. However, even with all three loading facilities operating, the amount of waste to be shipped is expected to exceed loading capacity for the next several years.
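The cask-supply shortfall described above follows directly from the shipment figures. A minimal sketch of that arithmetic (illustrative only; the three-casks-per-shipment figure is inferred from the report's numbers, not stated explicitly):

```python
# Transportation-cask arithmetic for fiscal year 2003 (illustrative).
# The casks-per-shipment figure is inferred from the report's numbers
# (2,088 casks needed for 696 shipments).

SHIPMENTS_FY2003 = 696
CASKS_PER_SHIPMENT = 2_088 // 696   # -> 3 (inferred, not stated in the report)
CASKS_COMMITTED = 1_440             # DOE's annual commitment to Rocky Flats

casks_needed = SHIPMENTS_FY2003 * CASKS_PER_SHIPMENT
shortfall = casks_needed - CASKS_COMMITTED

print(f"{casks_needed} casks needed, {shortfall} beyond DOE's commitment")
```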
To meet the shipping schedule for the site’s peak shipping years, the contractor will have to consistently operate all three loading facilities at capacity. For example, to make the number of shipments scheduled for fiscal year 2003, Kaiser-Hill will need to make over 13 shipments each week. Kaiser-Hill officials believe that the three shipping facilities together can meet the shipping schedule. This capability has not been demonstrated and is in doubt. To make 13 shipments per week, each of the three loading facilities will have to consistently load four or more truckloads of waste each week, with little or no margin for problems or delays. However, largely owing to outside factors influencing its performance, such as building shutdowns for safety problems and changes in the requirements for characterizing transuranic waste for disposal, the existing loading facility was able to perform at this level only 1 week during fiscal year 2000. Two main factors have contributed to the compression of the site’s transuranic waste shipping schedule. First, numerous delays in opening DOE’s only transuranic waste disposal facility (WIPP) delayed the shipping schedule. Although fully constructed in 1988, WIPP was not certified to receive transuranic waste until 10 years later. Kaiser-Hill sent its first shipment to WIPP in June 1999—several years later than planned. As a result, the shipments that had been scheduled for earlier years had to be added to later years’ shipping schedules. Second, the shipping schedule has been compressed by changes to the requirements for characterizing transuranic waste prior to shipment. The WIPP waste acceptance criteria, which prescribe how the waste must be characterized, were revised, and New Mexico subsequently imposed additional requirements in conjunction with allowing WIPP to accept mixed waste. These revisions had not been incorporated into the contract, but DOE directed Kaiser-Hill to comply with them. 
As a result of the changes, the contractor stopped shipping wastes to WIPP for 4 months while it determined what changes needed to be made, implemented them, and obtained the necessary approvals for shipping the wastes to WIPP. Shipments were also delayed because 2,000 drums of waste that had been characterized under the previous requirements had to be recharacterized before they could be shipped. In addition, the new requirements increased by thousands the number of waste drums that had to go through some or all of the steps in the characterization process (depending on the type of waste): x-raying the drums to determine their contents, opening the drums to verify their contents visually, and sampling the drums to analyze their wastes and gases, among other actions. Complete characterization of a single drum takes approximately 2 to 4 weeks, costs an average of $10,000, and generates about 800 pages of required documentation. Because of the changed characterization requirements, Kaiser-Hill shipped only about 25 percent of the transuranic waste that it had projected it would ship in fiscal year 2000, thereby adding the remaining amount to other years’ shipping schedules. DOE and Kaiser-Hill are working to overcome these challenges, but it is unclear whether all of the transuranic waste can be shipped off-site by December 2006. In August 2000, the state of New Mexico approved several DOE requests for modifications to its WIPP permit. These modifications streamlined some of the new requirements. For example, for one type of waste, the new requirements called for gas sampling and analysis in 100 percent of the drums (previously, no sampling was required for this waste type). The approved modification reduces this sampling requirement to 10 percent of the drums of this particular waste type. DOE is submitting additional requests for permit modifications to further ease the requirements.
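The per-drum figures cited above imply a substantial cost just for the 2,000 drums that had to be recharacterized. The sketch below is our extrapolation, not a figure from the report; it assumes each of those drums required complete characterization, which may overstate the actual effort.

```python
# Rough scale of the recharacterization effort (illustrative extrapolation;
# assumes all 2,000 affected drums required complete characterization).

DRUMS = 2_000
COST_PER_DRUM = 10_000   # dollars; report's average for complete characterization
PAGES_PER_DRUM = 800     # pages of required documentation per drum

total_cost = DRUMS * COST_PER_DRUM      # $20,000,000
total_pages = DRUMS * PAGES_PER_DRUM    # 1,600,000 pages

print(f"~${total_cost:,} and ~{total_pages:,} pages of documentation")
```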
Kaiser-Hill has limited flexibility to adjust its schedule for shipping transuranic waste. Although the contractor does not have the characterization and loading capability to move its projected shipments from peak shipping years to those earlier years with less shipping demand, it plans to increase its shipping rates by operating multiple shifts on existing equipment and acquiring additional equipment as needed, such as the two additional loading facilities. In addition, the contractor is looking for ways to speed up various steps in the characterization process, such as acquiring automated analysis units to reduce the gas sample analysis time to hours instead of weeks, thereby reducing a key bottleneck in the characterization process. These are important improvements because Kaiser-Hill did not build extra time into the schedule to deal with delays related to characterizing, loading, and shipping the site’s transuranic wastes, and it scheduled shipments right up to the target closure date. If Kaiser-Hill falls behind on its aggressive characterization and shipping schedules, subsequent delays will occur in the cleanup and demolition of the facilities housing the characterization and loading operations, as well as the storage facilities. Kaiser-Hill has to overcome numerous challenges to clean up and remove over 720 structures remaining at the site. These structures range in size and complexity from multistory, very large plutonium-processing buildings to small shacks and outbuildings. Kaiser-Hill’s strategy is to demolish over 475 structures in the last 2 years of the project. According to DOE and Kaiser-Hill officials, as part of the contractor’s strategy to reduce risk, most of these buildings will have been decontaminated and otherwise cleaned out so they can be safely left standing while awaiting demolition. Kaiser-Hill expects that this approach will allow for a more efficient and continuous demolition phase. 
The cleanup and removal of the plutonium buildings will be especially difficult because of their size and because they contain severe radioactive and hazardous contamination and large quantities of processing equipment. The six remaining plutonium buildings contain a total area of about 925,000 square feet. In several instances, parts of these buildings were severely contaminated by fires or accidents. Some rooms, referred to as “infinity rooms,” were sealed off because of their extremely high radioactive contamination. In addition, when production activities were suddenly and unexpectedly halted in 1989, plutonium and other dangerous materials were simply left in equipment and processing pipes. Buildings other than the plutonium buildings are contaminated as well—with beryllium, uranium, or other radioactive substances. And even buildings without such contamination can present challenges; because many were built in the 1950s and 1960s, they may contain asbestos or other hazardous materials. Kaiser-Hill has not fully planned how it will clean up and demolish the site’s structures within the time available. Without detailed plans, Kaiser-Hill cannot ensure that the work will proceed in a timely and successful manner. The contractor’s baseline includes some time for this work, but in several instances, insufficient detail exists to determine if the schedule is realistic. For example, Kaiser-Hill officials have not yet planned how they will clean up some radiologically contaminated facilities, such as a heavily contaminated two-story storage vault and the equipment used to stack and retrieve plutonium within it. In addition, Kaiser-Hill has allowed itself limited time in the schedule to address unforeseen problems. For example, Kaiser-Hill had allowed only 40 days for such problems in the 7-year schedule for the cleanup and demolition of one of the plutonium buildings.
In the first 8 months of cleanup, all of these days had been used up, and the cleanup of this building was behind schedule. Kaiser-Hill identified improved planning for the cleanup of some of its plutonium buildings and buildings with other contaminants as one of its top risk-mitigation actions. Kaiser-Hill officials said that they intend to develop more detailed plans over the next year. In addition, Kaiser-Hill plans to hire an outside expert to develop a detailed cleanup and demolition plan for the hundreds of remaining structures. How much environmental remediation must be done, and how much it will cost, is not yet certain. For one thing, the extent of soil contamination on the site is not fully understood because the industrial area, where nuclear weapons production took place, has not been fully characterized. The soil under many of the former production buildings is contaminated, but the depth and degree are not yet known. Ongoing activities in the industrial area and the presence of the buildings themselves have prevented thorough characterization of the contamination. Until the soil in the industrial area is fully characterized, the full extent and cost of the required cleanup will not be known. Also uncertain is “how clean is clean”; that is, how much plutonium-contaminated soil must be removed. DOE, the regulators, and the site’s other stakeholders have not reached agreement on an appropriate level of soil cleanup, although several different levels are being considered. Pending a final decision, an addendum to the Rocky Flats Cleanup Agreement set an interim soil level of 651 picocuries of plutonium per gram of soil. This level assumes that a future resident on the site could not receive a dose higher than 85 millirems per year from the plutonium remaining in the soil. Another interim level being considered—115 picocuries per gram—results in a reduced maximum dosage of 15 millirems per year for that future resident.
Other levels more stringent than these two are also under consideration. Because stakeholders were concerned about the sufficiency of these interim levels if they were to be used as the final cleanup levels, and as part of the periodic review process required under the Rocky Flats Cleanup Agreement, DOE funded an independent study by a private contractor, Risk Assessment Corporation. On the basis of an assumption of land use by a resident rancher family, the resulting February 2000 report recommended a level of 35 picocuries of plutonium per gram of soil. However, the ultimate soil cleanup could also be affected by the need to meet surface water quality standards because soil contamination can enter surface water through erosion. A level of 10 picocuries per gram or lower, the use of engineered controls (such as ditches and holding ponds), or both may be required to ensure compliance with surface water standards. The soil cleanup level established for the site could have a dramatic effect on the scope and cost of cleanup. Although DOE officials believe that the level has not yet been determined, Kaiser-Hill assumes that, under the contract, the interim level of 651 picocuries will be used. If the final decision on the level varies from this level, cost and schedule could change significantly. For example, the work scope and cost of cleaning up the 903 Pad, one of the site’s biggest environmental remediation projects, could differ dramatically, depending on the cleanup level. Table 3 shows estimates of these differences. The three parties to the Rocky Flats Cleanup Agreement—DOE, EPA, and the state of Colorado—are currently determining an appropriate soil level for the site. A decision is expected by the end of fiscal year 2001. If the level selected differs from the level determined to be prescribed by the closure contract, Kaiser-Hill could request an equitable adjustment to the contract. 
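The paired soil levels and doses cited above (651 picocuries per gram at 85 millirems per year, and 115 at 15) both work out to roughly 7.7 picocuries per gram for each millirem per year, suggesting an approximately linear relation. The sketch below checks that consistency and, under the same linear assumption (ours; the underlying dose modeling is more complex), estimates the dose implied by the 35-picocurie level recommended by Risk Assessment Corporation.

```python
# Consistency check of the soil-cleanup levels and resident doses cited
# in the report, assuming dose scales linearly with soil concentration
# (an assumption of this sketch, not a statement from the report).

levels = {651: 85, 115: 15}   # pCi of plutonium per gram of soil -> mrem/year

ratios = [pci / mrem for pci, mrem in levels.items()]
assert max(ratios) / min(ratios) < 1.01   # both imply ~7.7 (pCi/g)/(mrem/yr)

slope = sum(ratios) / len(ratios)
dose_at_35 = 35 / slope   # dose implied by the 35 pCi/g recommendation

print(f"~{dose_at_35:.1f} mrem/year at 35 pCi/g under a linear model")
```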
Many of the environmental remediation activities are scheduled for the final years of the closure project, when the limited amount of time remaining before the target date makes changes more difficult to accommodate. Kaiser-Hill’s ability to respond to problems may also be limited. Owing largely to the logical sequencing of activities, about 65 percent of the site’s remediation activities are scheduled for the last 2 years of the closure project. However, Kaiser-Hill has no time built into the remediation schedule to address unexpected problems or delays in preceding activities. As of September 30, 2000, Kaiser-Hill’s schedule had already projected that some of the last remediation activities in the industrial area would occur after the target closure date of December 15, 2006, because of cleanup delays experienced in the plutonium buildings. Numerous safety violations have occurred at the Rocky Flats site. In fiscal year 2000, 49 safety violations were reported, up from 27 the previous year. These safety violations—mainly procedural violations—ranged in severity from relatively minor, such as inadequate or improper maintenance of equipment and paperwork problems, to major, such as improperly handling equipment, which could have caused significant injury. Safety violations can result in significant work stoppages and schedule delays because, during a safety-related building shutdown, no cleanup activities or processing operations can occur. According to a DOE safety official, shutdowns owing to safety problems occur periodically in the site’s nuclear facilities—including the major plutonium buildings—and usually last hours or days, but sometimes weeks or even months. In fiscal year 2000, for example, work practices not in compliance with approved safety procedures resulted in a 3-month shutdown of a building that was used to store transuranic waste that had already been characterized.
By the time the shutdown ended, the characterization requirements had changed, so the waste could not be shipped until it was recharacterized. Safety violations can also result in financial penalties. Since fiscal year 1996, Kaiser-Hill and its subcontractors have received eight Price-Anderson Act enforcement actions for significant violations of nuclear safety requirements and were assessed $353,750 in penalties. These violations included noncompliance with radiological control procedures, resulting in worker contamination; lack of controls over procurement procedures, resulting in the use of substandard waste containers; and failure to implement corrective actions sufficient to address previously identified nuclear safety problems. In addition, under the safety provisions specified in the closure contract, in July and November 2000, DOE assessed fines against Kaiser-Hill totaling $410,000 (in fee reductions). Under the contract, DOE can fine Kaiser-Hill for events or incidents that are considered to be symptomatic of a breakdown in the safety management system. These fines resulted from a series of violations, including unsuitable handling of low-level wastes, improper operation of a ventilation system in a plutonium building, and work control events involving hazardous electrical work and potential radioactive contamination. Kaiser-Hill reports that safety is one of the company’s highest priorities and that it has set targets for reducing the number and frequency of safety violations. The contractor reports that, since it took over the management of the site in 1995, it has improved the overall safety performance at the site, as measured by radiological violations, criticality infractions, and recordable employee injury rates. Despite these data, Kaiser-Hill is concerned about the recent negative trend in nuclear safety performance at the site. To address these safety concerns, Kaiser-Hill reports that it is taking several steps.
For example, it is (1) encouraging workers to identify potential safety issues before they become a matter of regulatory concern and penalty, (2) providing additional worker training to address various safety issues, and (3) assessing and revising work control processes for the site’s nuclear facilities. Even with these efforts, it is unclear if Kaiser-Hill can sufficiently improve safety to avoid delaying the site’s closure. The trend in the number of safety violations is not encouraging. From July 1999 through September 2000, the contractor met its monthly target for reduced safety violations (of operational and technical safety requirements) only once. Furthermore, in the spring and summer of 2000, the Rocky Flats on-site representatives of the Defense Nuclear Facilities Safety Board reported on recurring problems over the previous year caused by workers who did not follow safety procedures. For example, they reported on (1) informal changes being made to procedures without evaluating their impact on safety, (2) conduct of activities that were not authorized, and (3) failure to comply with safety procedures for planning and executing several cleanup activities. DOE is concerned about the number and severity of safety violations that have occurred since the inception of the current contract. In a January 5, 2001, letter to the president of Kaiser-Hill, the Rocky Flats Field Office manager criticized Kaiser-Hill’s failure to improve its safety record. Among the concerns she cited were that Kaiser-Hill (1) lacked an adequate process for identifying key information on safety incidents, including their root causes, and ensuring that lessons learned from safety incidents are incorporated into future work activities; (2) lacked effective work controls; and (3) had not developed an effective safety and health organization. 
She also stated that Kaiser-Hill workers and supervisors, especially those engaged in critical activities involving the handling of material, did not understand their roles and responsibilities. She concluded that Kaiser-Hill’s management was inadequate, “at every level and in each project,” to ensure safe operations at the site. Within days of this letter, the Rocky Flats Field Office manager and the president of Kaiser-Hill sent a joint letter to every Rocky Flats worker discussing the unacceptable trend in safety incidents at the site and emphasizing the importance of safety in all aspects of the project. This letter also stated that Kaiser-Hill would be developing an improvement plan and response to DOE’s concerns, and that DOE would assess the effectiveness of the corrective actions. At the time of our review, Kaiser-Hill was developing a comprehensive plan to improve its safety and compliance performance, and expected to submit this plan to DOE in February 2001. Considering the challenges and uncertainties that must be overcome to achieve the site’s closure, Kaiser-Hill’s own risk assessment paints a bleak picture of the likelihood of closing the site by the December 2006 target date. Each quarter, Kaiser-Hill performs a risk assessment to identify and assign priority to risks and uncertainties that represent the greatest threat to successfully completing the closure project, so that they will receive the necessary management attention. In its December 2000 risk assessment, Kaiser-Hill estimated that it had only about a 15-percent chance of achieving the site’s closure by December 15, 2006; a 35-percent chance of achieving the site’s closure by March 31, 2007; and a 97-percent chance of achieving closure by December 2008—2 full years past the target date. 
This assessment is considerably more favorable than the one reflected in our April 1999 report, when Kaiser-Hill’s risk analysis concluded that the contractor had only a 1-percent chance of closing the site by the end of fiscal year 2010. The recent improved risk assessment is due in part to Kaiser-Hill’s and DOE’s overcoming several obstacles to closure that were identified in our April 1999 report, such as the opening of WIPP and a decision on the disposition of the uncontaminated rubble from the demolition of the site’s buildings. Despite this progress, another indication that closure may be delayed is Kaiser-Hill’s performance to date under the closure contract. After the first 8 months of the new contract, Kaiser-Hill’s performance data showed that the project was already slightly behind schedule and over cost. However, Kaiser-Hill officials remain hopeful that they can recover from the schedule slippages and complete the closure project on time, even though they know that doing so will require overcoming significant obstacles. Although both Kaiser-Hill and DOE have made considerable progress on their respective plans for managing the Rocky Flats closure project, further improvements are needed to help ensure that they meet the target date for the site’s closure. At the time of our review, Kaiser-Hill and DOE were working to complete their plans, which are intended to clearly delineate each party’s responsibilities for the closure project, the time frames associated with each responsibility, and the effect of delays. Kaiser-Hill was making changes to its own baseline in response to DOE’s review comments. As for DOE’s plan, it was still under development, but many of its elements appear to be sound, including the process of documenting the tasks required and the time frames for completion. 
However, two additional components would help DOE to implement the plan—a clearly established authority for reconciling the competing demands for resources among DOE’s organizations and a process for limiting the amount of time that a problem can languish unresolved. These features are not part of the plan now because DOE has been focused on the more basic components of the plan and DOE’s senior managers have had only limited involvement in the project. Indeed, the absence of these implementation components in the plan has affected DOE’s progress in obtaining transportation resources and certified shipping containers for Rocky Flats. Addressing these implementation issues is important for several reasons: implementation of certain aspects of the plan is already under way, and any delays in completing key project activities can affect subsequent activities and, ultimately, the project’s completion date and cost. Kaiser-Hill is making changes and improvements to its baseline in response to concerns DOE raised during its review of the contractor’s baseline. Kaiser-Hill submitted its baseline for DOE’s review on June 30, 2000. At the time of our review, the Department had not yet agreed to the baseline, pending the resolution of its concerns. The following are among the many improvements that Kaiser-Hill is making to the baseline: Developing a more detailed strategy for cleaning up the major plutonium buildings and reassessing the cleanup work planned for other structures. In its review of the June 30, 2000, baseline, DOE noted that Kaiser-Hill had not provided enough detail to clearly convey the work it planned to do to clean up some of the major plutonium buildings and to conduct environmental remediation studies and risk assessments. Accordingly, Kaiser-Hill agreed to provide additional detail in these areas. Ensuring compliance with regulatory and oversight requirements.
DOE had commented that the baseline was not fully consistent with commitments to regulatory and oversight bodies and with requirements contained in the contract. For example, DOE had agreed to meet the Defense Nuclear Facilities Safety Board’s recommendation that the site’s plutonium would be packaged into long-term storage containers by May 30, 2002. However, Kaiser-Hill’s baseline did not show this work being completed until August 2002. This inconsistency has since been resolved: DOE directed Kaiser-Hill to meet all commitments to the site’s regulatory and oversight bodies, and Kaiser-Hill adjusted the baseline to accommodate this direction. Addressing schedule insufficiencies. DOE questioned whether Kaiser-Hill had included sufficient time in its schedule to respond to unanticipated problems, deal with uncertainties, and still meet the target closure date. Kaiser-Hill officials had a different view of whether its baseline schedule was realistic. They stated that because many of the scheduled activities have never been performed before, it is not known whether the time they have allotted to accomplish these activities is insufficient. Nevertheless, Kaiser-Hill officials acknowledged that slippage on any one of several key activities would delay subsequent activities and could ultimately delay the site’s closure. Accordingly, they have been working to build in additional time without extending the schedule. For example, they are seeking more efficient ways to accomplish tasks and are considering alternatives to potentially troublesome systems and processes. In August 2000, DOE’s Office of Site Closure began developing a detailed plan for carrying out the Department’s responsibilities for Rocky Flats’ closure. When completed, the plan is intended to formalize DOE’s strategy to deliver services and items to Kaiser-Hill, such as transportation for nuclear materials and off-site locations for storage and disposal of those materials.
DOE expects that this plan will increase the likelihood of DOE’s meeting its responsibilities in a timely way and thus avoid adversely affecting the project’s completion date and cost. Because Kaiser-Hill depends on DOE to deliver services and items critical to completing various aspects of the project, the contractor may not be able to complete the closure project as scheduled, should DOE fail to deliver on time. DOE intends for its plan to identify each service or item for which DOE is responsible, the DOE organizations involved and their responsibilities, and a schedule for accomplishing the necessary activities. For example, concerning the problem of finding off-site storage and disposal locations for all of the site’s so-called “orphan” wastes and materials, the Office of Site Closure is compiling a complete list of these orphans; examining possible storage, treatment, and disposal locations; determining the regulatory and other requirements that must be met; and establishing time frames for the necessary activities. Once DOE has a strategy for addressing these and other issues, it intends to obtain agreement from the responsible DOE organizations and sites that they will provide the necessary services and items within the specified time frames. In addition, DOE intends for its plan to improve the monitoring of the project to surface problems or challenges that need to be addressed. As designed, DOE’s plan has many of the elements needed to serve as a useful tool to manage DOE’s responsibilities; however, we are concerned that two issues may hamper the plan’s implementation. First, DOE has not designated an individual or organization with the requisite authority to make decisions and resolve conflicts that arise among the DOE organizations and sites over competing priorities or limited resources.
The Office of Site Closure, which has spearheaded the plan’s development, does not have the authority to resolve problems or conflicts as they arise between DOE organizations, such as Environmental Management and Defense Programs. Because of this lack of a recognized authority to make such decisions, some issues with the potential to adversely affect Rocky Flats’ closure have not been resolved. For example, Rocky Flats has had difficulty obtaining assurance that sufficient transportation resources (trucks, trailers, and personnel) will be available when needed to ship its plutonium and uranium. These resources are managed by an organization within Defense Programs, which routinely gives priority to its own activities over the activities of Environmental Management—such as Rocky Flats’ cleanup and closure. Most of Defense Programs’ transportation resources are committed to shipments of nuclear materials from other sites, so the resources may not be available to ship Rocky Flats’ materials when needed to meet its target closure date. Officials from Environmental Management have been trying to arrange for the transportation resources needed by Rocky Flats through informal discussions with officials from Defense Programs, but they have not been completely successful. According to a DOE official evaluating DOE’s transportation needs and resources and another from the Office of Site Closure, this situation has remained unresolved for months because no individual or organization currently involved in the process has the recognized authority or is at a high enough management level to determine what trade-offs should occur across the DOE organizations or how the Department’s limited transportation resources should be put to their most effective use. If the transportation resources are not available when needed, Kaiser-Hill will have to continue to store the nuclear materials, potentially delaying the cleanup and removal of the storage buildings. 
The second implementation problem is that DOE does not have a mechanism in place to limit the amount of time that an issue can languish unresolved before it is referred to the appropriate authority for resolution. Some issues that affect DOE’s and Kaiser-Hill’s ability to close Rocky Flats by 2006 have remained unresolved for long periods of time. For example, DOE has not been able to certify a transportation container needed for Rocky Flats to ship its plutonium off-site, although this container has been in various stages of the certification process since 1988. The certification process requires coordination among many DOE organizations, sites, and laboratories. In a November 2000 report on nuclear material container issues, DOE’s Inspector General concluded that because DOE did not adequately coordinate among the various entities responsible for container activities, it failed to certify, in a timely manner, containers needed to ship plutonium materials from Rocky Flats to Savannah River. As of January 2001, this problem had not been resolved, and DOE expected additional delays in the certification of the transportation container for Rocky Flats’ plutonium metals and oxides. Both Kaiser-Hill and DOE officials see the container certification delays as one of the major obstacles to getting the site’s plutonium shipped to Savannah River. If DOE does not certify this container by the time the plutonium packaging system is operational, currently scheduled for March 2001, the subsequent cleanup and closure activities could be delayed. These two features are not part of DOE’s plan now because the Office of Site Closure has been focused on developing the plan and has focused little attention on the plan’s implementation. In addition, to date, DOE’s senior managers have not been significantly involved in the plan’s development or its implementation. However, DOE cannot wait until the plan is complete to start implementing it. 
Instead, officials from the Office of Site Closure are implementing components of the plan as they are developed. For example, they are already working to obtain agreement from various DOE entities to provide the services and items necessary to ship the site’s special nuclear materials off-site. Because of the tight time frames for the cleanup and closure of Rocky Flats, key activities relating to the site’s special nuclear materials must be completed on time or they will affect subsequent cleanup activities, ultimately delaying the site’s closure and increasing its cost. The need for high-level managers’ awareness and oversight of DOE’s activities in support of Rocky Flats’ closure was also raised by DOE’s Acting Deputy Director for Management and Administration in a January 2001 memorandum. After reviewing the closure project’s administration, he recommended that DOE establish a special management control mechanism to ensure appropriate visibility and resolve problems that arise. However, as of February 2001, DOE was still considering these recommendations. An Office of Site Closure official stated that implementing DOE’s plan will be challenging, especially without the requisite authority and a process in place to raise and resolve issues in a timely manner. Closing the Rocky Flats Environmental Technology Site by December 2006 is a laudable goal and a formidable challenge, especially given the magnitude and complexity of the cleanup project. Kaiser-Hill has made significant progress in the cleanup of the site on several fronts. However, because of the scope and complexity of the remaining work, and the compressed schedule for completing it, there is little margin for resolving the many obstacles that could delay the completion date. 
Because we found no specific governmental action that would resolve the challenges Kaiser-Hill faces, the contractor needs to continue its efforts to address these challenges quickly and effectively, with diligent attention to safety. However, DOE can take actions to establish the decision-making authority and process for implementing its plan and thereby improve the likelihood of achieving the target closure date and cost. Doing so is important because it will be costly to DOE to keep the Rocky Flats site operating beyond 2006. Even with these actions, because of the many challenges that Kaiser-Hill must overcome, site closure by 2006 is unlikely. However, completing the cleanup and closure of Rocky Flats close to the target date represents the reduction of significant financial and environmental liabilities for DOE and the public. To improve the chance of achieving the target closure date and cost, and to minimize schedule extensions and cost increases associated with any closure delays, we recommend that the Secretary of Energy develop an implementing strategy for DOE’s plan at Rocky Flats that (1) clarifies the authority and responsibility for reconciling competing demands for DOE’s resources needed to support Rocky Flats’ closure and (2) specifies a process by which these differences between DOE organizations are identified and resolved within specified time frames. We provided the Department of Energy and Kaiser-Hill Company, L.L.C., with a draft of our report for their review and comment. DOE said that the report was a thorough and credible assessment of the challenges facing the Rocky Flats Closure Project and the Department’s prospects of meeting very aggressive cost and schedule objectives for this complex project. DOE also agreed with our observations and recommendation concerning the need for a means to resolve conflicts that arise as part of the complexwide coordination of activities needed to support Rocky Flats’ closure. 
However, DOE raised two main issues about the content of the report. First, DOE noted that our 1999 report on this project included information that there was less than a 1-percent chance of meeting the target closure date, which was 2010 at that time. DOE said that the contractor’s more recent assessment of a 15-percent chance of meeting the 2006 target closure date was a significant improvement that should be recognized in our draft report. We modified our final report to include this information. Second, DOE said that several of the challenges we discussed in our 1999 report, such as the recycling of uncontaminated building rubble and the delays in opening WIPP, had been resolved but that we did not explicitly mention this progress in our draft report. We modified our final report to include this information. DOE also provided several technical corrections, which we incorporated as appropriate. DOE’s comments are presented in appendix I. Kaiser-Hill said that our draft report was accurate and indicated a strong understanding of the challenges and obstacles facing the Rocky Flats Closure Project. However, Kaiser-Hill raised several issues concerning the report. First, Kaiser-Hill mentioned the two concerns that DOE had raised above. As noted, we modified our final report to address those concerns. Second, Kaiser-Hill said that our draft report should acknowledge that the company had emphasized safety in its operations at the site since the first contract was signed in 1995 and had seen consistent improvement in some safety indicators until the recent development of a negative safety trend. We clarified this information in our final report. Finally, Kaiser-Hill said that even if closure occurs 1 or 2 years after the 2006 target date, the public would still receive significant safety and financial benefits but that our draft report did not explicitly recognize this point. 
Although our draft report acknowledged the benefits of closing the site decades earlier than originally planned, we added information to our final report to emphasize these benefits. Kaiser-Hill’s comments are presented in appendix II. To obtain the necessary information on the closure project’s status and cost, and the likelihood of meeting the target closure date, we visited Rocky Flats’ facilities and observed cleanup activities, reviewed documents, and interviewed DOE and contractor officials. We also contacted officials and reviewed documents provided by DOE’s headquarters and other DOE field locations. We analyzed Kaiser-Hill’s baseline and various planning, budget, and cost documents and other records. We also reviewed DOE’s draft plans for meeting its contractual cleanup commitments and other DOE records pertaining to DOE’s responsibilities under the contract and its oversight of Kaiser-Hill’s activities. In addition, we reviewed records and interviewed officials of the regulatory and oversight agencies with cognizance for the site’s cleanup—EPA’s Region VIII Office in Denver, the Colorado Department of Public Health and Environment in Denver, and site representatives of the Defense Nuclear Facilities Safety Board located at Rocky Flats. We also reviewed documents and attended meetings of various Rocky Flats stakeholder groups, including the Rocky Flats Citizens Advisory Board, Rocky Flats Coalition of Local Governments, and Rocky Flats Cleanup Agreement Stakeholder Focus Group. To determine the management actions needed, if any, to improve the likelihood of the project’s success, we compared the major challenges affecting the closure of the site with Kaiser-Hill’s and DOE’s plans for addressing them. We assessed whether the planned actions appeared to address the important aspects of these challenges. 
We also discussed the challenges and planned actions with DOE and Kaiser-Hill officials, regulatory and oversight agency officials, and stakeholders involved in the cleanup and closure of the Rocky Flats site. We conducted our review from May 2000 through February 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Honorable Spencer Abraham, Secretary of Energy; the Honorable Mitchell Daniels, Director, Office of Management and Budget; and Mr. Robert Card, President and Chief Executive Officer, Kaiser-Hill Company, L.L.C. We will make copies available to others upon request. Appendix II: Comments From Kaiser-Hill Company, L.L.C. In addition to those named above, Lee H. Carroll, Amy Cram Helwich, Pamela K. Tumler, and Amy E. Webbink made key contributions to this report.
GAO reviewed several aspects of the Department of Energy's (DOE) Rocky Flats Environmental Technology site cleanup and closure plan. Specifically, GAO reviewed (1) the status and cost of the Rocky Flats closure project, (2) the likelihood that the site will be closed by 2006, and (3) the management actions needed, if any, to improve the likelihood of the project's success. GAO found that in the more than five years that it has been the major contractor at the Rocky Flats site, Kaiser-Hill has made significant progress toward cleaning up the site, but the majority of the work--and the most complicated--remains to be done. Because of the project's difficulty, DOE entered into a cost-plus-incentive-fee contract with Kaiser-Hill. If completed on time, the project will cost about $7.5 billion from the signing of the first cleanup contract with Kaiser-Hill in July 1995 through the 2006 closure date, and about $1.4 billion more thereafter, for such activities as site monitoring and maintenance and for contractor employees' retirement benefits. These overall costs will increase if additional work is required or the 2006 target date is not achieved. Kaiser-Hill and DOE are unlikely to meet the December 2006 target closure date. A number of significant and complex challenges must be overcome first. Kaiser-Hill and DOE are developing their respective plans for managing the closure project, but DOE needs to take additional steps to effectively implement its plan.
In February 2012, we reported that the increased seigniorage resulting from replacing $1 notes with $1 coins could potentially offer $4.4 billion in net benefits to the government over 30 years. We determined that seigniorage was the sole source of the net benefits; the benefits did not stem from lower production costs, even though the coin lasts much longer than a note. Seigniorage is the financial gain the federal government realizes when it issues notes or coins because both forms of currency usually cost less to produce than their face value. This gain equals the difference between the face value of currency and its costs of production, which reflects a financial transfer to the federal government because it reduces the government’s need to raise revenues through borrowing. With less borrowing, the government pays less interest over time, resulting in a financial benefit. The replacement scenario of our 2012 estimate assumed the production of $1 notes would stop immediately, followed by a 4-year transition period during which worn and unfit $1 notes would gradually be removed from circulation. Based on information provided by the Mint, we also assumed that the Mint would convert existing equipment to increase its production capability for $1 coins during the first year and that it would take 4 years for the Mint to produce enough coins to replace the currently outstanding $1 notes. Our assumptions covered a range of factors, but key among these was a replacement ratio of 1.5 coins to 1 note to take into consideration the fact that coins circulate with less frequency than notes and therefore a larger number are required in circulation. Other key assumptions included the expected rate of growth in the demand for currency over 30 years, the costs of producing and processing both coins and notes, and the differential life spans of coins and notes. 
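The basic seigniorage-and-interest-savings arithmetic described above can be sketched in a few lines. The 1.5-to-1 replacement ratio comes from the report, but the unit production cost, interest rate, and volumes below are hypothetical placeholders, not the actual inputs of GAO's model.

```python
# Minimal sketch of the seigniorage-and-interest-savings arithmetic.
# All dollar inputs are hypothetical illustrations, not GAO's figures.

REPLACEMENT_RATIO = 1.5  # coins needed per note withdrawn (from the report)

def seigniorage(face_value, production_cost):
    """Gain to the government from issuing currency: face value minus cost."""
    return face_value - production_cost

def interest_avoided(gain, annual_rate, years):
    """Simple (undiscounted) interest saved by borrowing that much less."""
    return gain * annual_rate * years

# Hypothetical example: retiring 1 million $1 notes requires ~1.5 million coins.
notes_retired = 1_000_000
coins_issued = int(notes_retired * REPLACEMENT_RATIO)
cost_per_coin = 0.18  # assumed unit production cost (illustrative)

gain = seigniorage(coins_issued * 1.00, coins_issued * cost_per_coin)
savings = interest_avoided(gain, 0.03, 10)  # assumed 3% rate over 10 years
print(f"seigniorage: ${gain:,.0f}; interest avoided: ${savings:,.0f}")
```

The sketch reflects the mechanics in the text: the gain equals face value minus production cost, and that gain displaces borrowing, so the government avoids paying interest on it; GAO's actual model additionally discounts these flows over 30 years.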
We projected our analyses over 30 years to be consistent with previous GAO analyses and because that period roughly coincides with the life expectancy of the $1 coin. As shown in figure 1, we found that the net benefit accruing each year varied considerably over the 30 years. More specifically, across the first 10 years of our 30-year analysis, replacing the $1 note with a $1 coin would result in a $531 million net loss, or approximately $53 million per year in net loss to the government. The early net loss would be due in part to the up-front costs to the Mint of increasing its coin production during the transition, together with the limited interest expense the government would avoid in the first few years after replacement began. This estimate differs from our 2011 estimate, which found that replacement would result in a net benefit of about $5.5 billion over 30 years (an average of about $184 million per year), because the 2012 estimate takes into account two key actions that occurred since our 2011 report, specifically: In April 2011, the Federal Reserve began using new equipment to process notes, which has increased the expected life of the $1 note to an average of 56 months (or 4.7 years), according to the Federal Reserve, compared with the 40 months we used in our 2011 analysis. The longer note life reduces the costs of circulating a note over 30 years and thus reduces the expected net benefits of replacing the $1 note with a $1 coin. In December 2011, the Treasury Department announced that it would take steps to eliminate the overproduction of dollar coins by relying on the approximately 1.4 billion $1 coins stored with the Federal Reserve as of September 30, 2011, to meet the relatively small transactional demand for dollar coins. 
This new policy would reduce the cost associated with producing $1 coins that we estimated in the status quo scenario and, therefore, would reduce the net benefit, which is the difference in the estimated costs between the status quo scenario and the replacement scenario. However, like all estimates, there are uncertainties involved in developing these analyses. In particular, while the up-front costs to the Mint of increasing its coin production during the transition are reasonably certain—in large part because they occur closer in time—the longer-term benefits, particularly those occurring in the later years, involve greater uncertainty because of unforeseen circumstances that could occur farther into the future. Nonetheless, looking at a longer time period allows for trends to be seen. Moreover, changes to the inputs and assumptions used in our analysis could significantly change the estimated net benefit. For example, in 2011, we compared our status quo scenario to an alternative scenario in which the growing use of electronic payments—such as making payments with a cell phone—results in a lower demand for cash and a lower net benefit. If Americans come to rely more heavily on electronic payments, the demand for cash could grow more slowly than we assumed or even decrease. By reducing the public’s demand for $1 currency by 20 percent in this alternative scenario, we found that the net benefit to the government would decrease to about $3.4 billion over 30 years. In another scenario, we reported in 2012 that if interest savings because of seigniorage were not considered, a net loss of approximately $1.8 billion would accrue during the first 10 years, for an average cost of $179 million per year—or a $2.8 billion net loss over 30 years. 
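A toy version of that scenario comparison can make the mechanics concrete: the net benefit is the estimated status quo cost minus the estimated replacement cost, and shrinking the demand-driven portion of each scenario's cost shrinks the benefit. Every dollar figure and cost split below is a hypothetical placeholder, not a GAO model input.

```python
# Toy scenario comparison: net benefit = estimated status quo cost minus
# estimated replacement cost. All figures and demand-driven/fixed splits
# are hypothetical placeholders for illustration only, not GAO's inputs.

def net_benefit(status_quo_cost, replacement_cost):
    """Difference in estimated 30-year costs between the two scenarios."""
    return status_quo_cost - replacement_cost

def with_lower_demand(demand_driven_cost, fixed_cost, demand_factor):
    """Scale only the demand-driven portion of a scenario's cost."""
    return demand_driven_cost * demand_factor + fixed_cost

# Hypothetical baseline: replacement costs $4.4B less over the horizon.
baseline = net_benefit(10.0e9, 5.6e9)

# Sensitivity: 20 percent lower demand for $1 currency (factor of 0.8),
# applied to invented splits of each scenario's cost.
sq_low = with_lower_demand(9.0e9, 1.0e9, 0.8)
rep_low = with_lower_demand(4.0e9, 1.6e9, 0.8)
lower = net_benefit(sq_low, rep_low)
print(f"baseline: ${baseline/1e9:.1f}B, lower demand: ${lower/1e9:.1f}B")
```

The point of the sketch is only the structure of the comparison: because the benefit is a difference between two scenario costs, any assumption that moves those costs, such as the demand for cash, moves the estimated benefit.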
While this scenario suggests that there would be no net benefits from switching to a $1 coin, we believe that the interest savings related to seigniorage, which is a result of issuing currency, cannot be set aside because the interest savings reflects a monetary benefit to the government. Our estimates of the discounted net benefit to the government of replacing the $1 note with a $1 coin differ from the method that the Congressional Budget Office (CBO) would use to calculate the impact on the budget of the same replacement. In the mid-1990s, CBO made such an estimate and noted that its findings for government savings were lower than our estimates at that time because of key differences in the two analyses. Most important, budget scorekeeping conventions do not factor in gains in seigniorage in calculating budget deficits. Thus, the interest expense avoided in future years by reducing borrowing needs, which accounts for our estimate of net benefit to the government, would not be part of a CBO budget-scoring analysis. Additionally, CBO’s time horizon for analyzing the budget impact is up to 10 years—a much shorter time horizon than we use in our recent analyses. Two factors merit consideration moving forward. The first factor is the effect of a currency change on the private sector. Our 2011 and 2012 reports considered only the fiscal effect on the government. Because we found no quantitative estimates that could be evaluated or modeled, our estimate did not consider factors such as the broader societal impact of replacing the $1 note with a $1 coin or attempt to quantify the costs to the private sector. Based on our interviews with stakeholders representing a variety of cash-intensive industries, we believe that the costs and benefits to the private sector should be carefully weighed since some costs could be substantial. In 2011 we reported that stakeholders identified potential shorter- and longer-term costs that would likely result from the replacement. 
Specifically, shorter-term costs would be those costs involved in adapting to the transition, such as modifying vending machines, cash-register drawers, and night-depository equipment to accept $1 coins. Such costs would also include the need to purchase or adapt the processing equipment that businesses may need, such as coin-counting and coin-wrapping machines. Longer-term costs would be those costs that would permanently increase the cost of doing business, such as the increased transportation and storage costs for the heavier and more voluminous coins as compared to notes, and processing costs. These costs would likely be passed on to the customer and the public at large through, for example, higher prices or fees. Most stakeholders we interviewed said, however, that they could not easily quantify the magnitude of these costs, and the majority indicated that they would need 1 to 2 years to make the transition from $1 notes to $1 coins. In contrast to the stakeholders who said that a replacement would mean higher costs for their businesses, stakeholders from the vending machine industry and public transit said that the changeover might have only a minimal impact on them. For example, according to officials from the National Automatic Merchandising Association, an organization representing the food and refreshment vending industry, many of its members have already modified their vending machines to accept all forms of payment, including $1 coins. In addition, according to transit industry officials, the impact on the transit industry would be minimal since transit agencies that receive federal funds were required under the Presidential $1 Coin Act of 2005 to accept and distribute $1 coins. The second factor that merits consideration is public acceptance. Our 2012 estimate assumes that the $1 coin would be widely accepted and used by the public. 
In 2002, we conducted a nationwide public opinion survey, and we found that the public was not using the $1 coin because people were familiar with the $1 note, the $1 coin was not widely available, and people did not want to carry more coins. However, when respondents were told that such a replacement would save the government about half a billion dollars a year (our 2000 estimate), the proportion who said they opposed elimination of the note dropped from 64 percent to 37 percent. Yet, two more recent national-survey results suggest that opposition to eliminating the $1 note persists. For example, according to a Gallup poll conducted in 2006, 79 percent of respondents were opposed to replacing $1 notes with $1 coins, and their opposition decreased only slightly, to 64 percent, when they were asked to assume that a replacement would result in half a billion dollars in government savings each year. We have noted in past reports that efforts to increase the circulation and public acceptance of the $1 coins—such as changes to the color of the $1 coin and new coin designs—have not succeeded, in part, because the $1 note has remained in circulation. Over the last 48 years, Australia, Canada, France, Japan, the Netherlands, New Zealand, Norway, Russia, Spain, and the United Kingdom, among others, have replaced lower-denomination notes with coins. The rationales for replacing notes with coins cited by foreign government officials and experts include the cost savings to governments derived from lower production costs and the decline over time of the purchasing power of currency because of inflation. For example, Canada replaced its $1 and $2 notes with coins in 1987 and 1996, respectively. Canadian officials determined that the conversion to the $1 coin saved the Canadian government $450 million (Canadian) between 1987 and 1991 because it no longer had to regularly replace worn out $1 notes. 
However, Canadian $1 notes did not last as long as $1 notes in the United States currently do. Stopping production of the note and actions to overcome public resistance have been important in Canada and the United Kingdom as the governments transitioned from a note to a coin. While observing that the public was resistant at first, Canadian and United Kingdom officials said that with the combination of stakeholder outreach, public relations efforts, and ending production and issuance of the notes, public dissatisfaction dissipated within a few years. Canada undertook several efforts to prepare the public and businesses for the transition to the coin. For example, the Royal Canadian Mint reached out to stakeholders in the retail business community to ensure that they were aware of the scope of the change and surveyed public opinion about using coins instead of notes and the perceived impact on consumer transactions. The Canadian Mint also proactively worked with industries that use large volumes of coins, such as vending and parking enterprises, to facilitate conversion of their equipment, and conducted a public relations campaign to advise the public of the cost savings that would result from the switch. According to Canadian officials, the $1 and $2 coins were the most popular coins in circulation and were heavily used by businesses and the public. In our analysis of replacing the $1 note with a $1 coin, we assumed that the U.S. government would conduct a public awareness campaign to inform the public during the first year of the transition and assigned a value of approximately $7.8 million for that effort. In addition, some countries have used a transition period to gradually introduce new coins or currency. For example, the United Kingdom issued the £1 coin in April 1983 and continued to simultaneously issue the £1 note until December 1984. Similarly, Canada issued the $1 coin in 1987 and ceased issuing the $1 note in 1989. 
In our prior reports, we recommended that Congress proceed with replacing the $1 note with the $1 coin. We continue to believe that the government would receive a financial benefit from making the replacement. However, this finding comes with several caveats. First, the costs are immediate and certain while the benefits are further in the future and more uncertain. The uncertainty comes, in part, from the uncertainty surrounding key assumptions like the future demand for cash. Second, the benefits derive from seigniorage, a transfer from the public, and not a cost-saving change in production. Third, these are benefits to the government and not necessarily to the public at large. In fact, public opinion has consistently been opposed to the $1 coin. Keeping those caveats in mind, many other countries have successfully replaced low denomination notes with coins, even when initially faced with public opposition. Chairman Paul, Ranking Member Clay, and members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions at this time. For further information on this testimony, please contact Lorelei St. James, at (202) 512-2834 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Teresa Spisak (Assistant Director), Lindsay Bach, Amy Abramowitz, Patrick Dudley, and David Hooper. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since coins are more durable than notes and do not need replacement as often, many countries have replaced lower-denomination notes with coins to obtain a financial benefit, among other reasons. Six times over the past 22 years, GAO has reported that replacing the $1 note with a $1 coin would provide a net benefit to the federal government of hundreds of millions of dollars annually. This testimony provides information on what GAO’s most recent work in 2011 and 2012 found regarding (1) the net benefit to the government of replacing the $1 note with a $1 coin, (2) stakeholder views on considerations for the private sector and the public in making such a replacement, and (3) the experiences of other countries in replacing small-denomination notes with coins. This testimony is based on previous GAO reports. To perform that work, GAO constructed an economic model to assess the net benefit to the government. GAO also interviewed officials from the Federal Reserve and Treasury Department, currency experts, officials from Canada and the United Kingdom, and representatives of U.S. industries that could be affected by currency changes. GAO reported in February 2012 that replacing $1 notes with $1 coins could potentially provide $4.4 billion in net benefits to the federal government over 30 years. The overall net benefit was due solely to increased seigniorage and not to reduced production costs. Seigniorage is the difference between the cost of producing coins or notes and their face value; it reduces government borrowing and interest costs, resulting in a financial benefit to the government. GAO’s estimate takes into account processing and production changes that occurred in 2011, including the Federal Reserve’s use of new equipment to determine the quality and authenticity of notes, which has increased the expected life of the note thereby reducing the costs of circulating a note over 30 years. (The $1 note is expected to last 4.7 years and the $1 coin 30 years.) 
Like all estimates, there are uncertainties surrounding GAO’s estimate, especially since the costs of the replacement occur in the first several years and can be estimated with more certainty than the benefits, which are less certain because they occur further in the future. Moreover, changes to the inputs and assumptions GAO used in the estimate could significantly increase or decrease the results. For example, if the public relies more heavily on electronic payments in the future, the demand for cash could be lower than GAO estimated and, as a result, the net benefit would be lower. In March 2011, GAO identified potential shorter- and longer-term costs to the private sector that could result from the replacement of the $1 note with a $1 coin. Industry stakeholders indicated that they would initially incur costs to modify equipment and add storage and that later their costs to process and transport coins would increase. However, others, such as some transit agencies, have already made the transition to accept $1 coins and would not incur such costs. In addition, for such a replacement to be successful, the $1 coin would have to be widely accepted and used by the public. Nationwide opinion polls over the last decade have indicated lack of public acceptance of the $1 coin. Efforts to increase the circulation and public acceptance of the $1 coins have not succeeded, in part, because the $1 note has remained in circulation. Over the last 48 years, many countries, including Canada and the United Kingdom, have replaced low denomination notes with coins because of expected cost savings, among other reasons. The Canadian government, for example, saved $450 million (Canadian) over 5 years by converting to the $1 coin. 
Canada and the United Kingdom found that stopping production of the note combined with stakeholder outreach and public education were important to overcome public resistance, which dissipated within a few years after transitioning to the low denomination coins. GAO has recommended in prior work that Congress replace the $1 note with a $1 coin. GAO continues to believe that replacing the $1 note with a coin is likely to provide a financial benefit to the federal government if the note is eliminated and negative public reaction is effectively managed through stakeholder outreach and public education.
In 2001, DOD shifted from a threat-based planning process focused on preparing the department for a set of threat scenarios to a capabilities-based process focused on identifying what capabilities DOD would need to counter expected adversaries. The expectation was that a capabilities-based process would prevent DOD from over-optimizing for a limited set of scenarios. The 2006 Quadrennial Defense Review continued this shift in order to emphasize the needs of the combatant commanders by implementing portfolio management principles for cross-sections of DOD’s capabilities. Portfolio management principles are commonly used by large commercial companies to prioritize needs and allocate resources. In September 2006, DOD initiated a test case of the portfolio management concept, which included DOD’s management of its ISR capabilities. The USD(I) is the lead office for this ISR portfolio, and the ISR Integration Council, a group of senior DOD intelligence officers created as a forum for the services to discuss ISR integration efforts, acts as the governance body for the ISR portfolio management effort. In February 2008, DOD announced its plans to formalize the test cases, including the ISR portfolio, as standing capability portfolio management efforts. DOD established JCIDS as part of its capabilities-based planning process and as a replacement for DOD’s previous requirements identification process, which, according to DOD, frequently resulted in systems that were service-focused rather than joint, programs that duplicated each other, and systems that were not interoperable. Under this previous process, requirements were often developed by the services as stand-alone solutions to counter specific threats and scenarios. In contrast, the JCIDS process is designed to identify the broad set of capabilities that may be required to address the security environment of the twenty-first century. 
In addition, requirements under the JCIDS process are intended to be developed from the “top-down,” that is, starting with the national military strategy, whereas the former process was “bottom-up,” with requirements growing out of the individual services’ unique strategic visions and lacking clear linkages to the national military strategy. The BA FCB has responsibilities that include both JCIDS and non-JCIDS activities. The BA FCB provides input on the ISR capability portfolio management test case to the USD(I), who leads the test case and who, in turn, often provides inputs to the BA FCB deliberations on ISR capability needs. The BA FCB also generally provides analytic support for Joint Staff discussions and decisions on joint concepts and programmatic issues. In addition, the BA FCB has responsibilities for helping to oversee materiel and non-materiel capabilities development within JCIDS. To do this, the BA FCB reviews proposals for new ISR capabilities, as well as proposals for non-materiel ISR capabilities and for ISR capabilities already in development, and submits recommendations to the Joint Requirements Oversight Council on whether or not to approve them. To support their proposals for new ISR capabilities, the sponsors are expected to conduct a robust, three-part capabilities-based assessment that identifies (1) warfighter skills and attributes for a desired capability (Functional Area Analysis), (2) the gaps to achieving this capability based on an assessment of all existing systems (Functional Needs Analysis), and (3) possible solutions for filling these gaps (Functional Solution Analysis). According to Joint Staff guidance, the latter assessment should consider the development of new systems, non-materiel solutions that do not require the development of new systems, modifications to existing systems, or a combination of these, as possible solutions to filling identified capability gaps. 
Figure 1 provides an overview of the JCIDS analysis process as it relates to proposals for new capabilities, showing that these proposals are supposed to flow from top-level defense guidance, including DOD strategic guidance, Joint Operations Concepts, and Concepts of Operations. This guidance is to provide the conceptual basis for the sponsor’s capabilities-based assessment, which ultimately results in the sponsor’s proposal for a new capability. DOD provides ISR capabilities in support of a wide range of defense and non-defense agencies across the intelligence community, creating a complex environment for DOD as it tries to integrate defense and national ISR capabilities. As DOD works to define its ISR capability requirements and improve integration of enterprisewide ISR capabilities, the department is faced with different and sometimes competing organizational cultures, funding arrangements, and requirements processes, reflecting the diverse missions of the many intelligence community agencies that DOD supports. This wide range of DOD ISR enterprise commitments across the U.S. intelligence community presents challenges for DOD as it works to increase ISR effectiveness and avoid unnecessary investments in ISR capabilities. DOD’s ISR enterprise comprises many organizations and offices from both the defense intelligence community and the national intelligence community. DOD relies on both its own ISR assets and national ISR assets to provide comprehensive intelligence in support of its joint warfighting force. For example, the National Reconnaissance Office, a DOD agency, provides overhead reconnaissance satellites, which may be used by national intelligence community members such as the Central Intelligence Agency. Figure 2 demonstrates that DOD’s ISR enterprise supports a wide range of intelligence community organizations. 
DOD organizations are involved in providing intelligence information to both the defense and national intelligence communities, using their respective or joint ISR assets. In addition to the intelligence branches of the military services, there are four major intelligence agencies within DOD: the Defense Intelligence Agency; the National Security Agency; the National Geospatial-Intelligence Agency; and the National Reconnaissance Office. The Defense Intelligence Agency is charged with providing all-source intelligence data to policy makers and U.S. armed forces around the world. The Director of the Defense Intelligence Agency, a three-star military officer, serves as the principal intelligence advisor to the Secretary of Defense and the Chairman of the Joint Chiefs of Staff. The National Security Agency is responsible for signals intelligence and has collection sites throughout the world. The National Geospatial-Intelligence Agency prepares the geospatial data, including maps and computerized databases, necessary for targeting in an era dependent upon precision-guided weapons. The National Reconnaissance Office develops and operates reconnaissance satellites. Although these are DOD intelligence agencies, all of these organizations provide intelligence information to meet the needs of the national intelligence community as well as DOD. The National Reconnaissance Office, in particular, is a joint organization where ultimate management and operational responsibility resides with the Secretary of Defense in concert with the Director of National Intelligence. In addition, the national intelligence community includes agencies such as the Central Intelligence Agency, whose responsibilities include providing foreign intelligence on national security issues to senior policymakers, as well as the intelligence-related components of other federal agencies, all of which have different missions and priorities. 
For example, the intelligence component of the Department of State is concerned with using intelligence information, among other things, to support U.S. diplomatic efforts, while the intelligence component of the Department of Energy may use intelligence to gauge the threat of nuclear terrorism and counter the spread of nuclear technologies and material. The complex context of different organizational cultures, funding arrangements, requirements processes, and diverse missions of other members of the intelligence community that DOD supports presents a challenge for DOD in integrating its ISR enterprise, as highlighted by previous efforts to achieve greater ISR integration within DOD. Observers have noted in the past that cultural differences between the defense and national intelligence agencies and their different organizational constructs often impede close coordination. For example, Congress found in the past that DOD and the national intelligence community may not be well-positioned to coordinate their intelligence activities and programs, including ISR investments, in order to ensure unity of effort and avoid duplication of effort. In addition, a congressionally chartered commission that reviewed the management and organization of national security space activities—known as the Space Commission—noted that understanding the different organizational cultures of the defense and national space communities is important for achieving long-term integration. Subsequently, in 2003 and 2004, a joint task force of the Defense Science Board observed that there was no procedural mechanism for resolving differences between DOD and the national intelligence community over requirements and funding for national security space programs. In 2005, a private sector organization indicated that DOD and the intelligence community should improve their efforts to enhance information sharing and collaboration among the national security agencies of the U.S. government. 
In addition, according to the ODNI, the traditional distinction between the intelligence missions of DOD and the national intelligence community has become increasingly blurred since the events of September 11, 2001, with DOD engaging in more strategic missions and the national intelligence community engaging in more tactical missions. Because of this trend, government decision makers have recognized the increased importance of ensuring effective coordination and integration between DOD and the national intelligence community in order to successfully address today’s security threats. Two areas within DOD’s ISR enterprise where coordination between DOD and the national intelligence community is important are (1) managing funding and budget decisions for ISR capabilities, and (2) developing requirements for new ISR capabilities. DOD has two decision-support processes in place to conduct these functions: its Planning, Programming, Budgeting, and Execution process, and its Joint Capabilities Integration and Development System. However, DOD also coordinates with the Office of the Director of National Intelligence, which uses separate budgeting and requirements identification processes to manage the national intelligence budget. Past DOD efforts to integrate its own ISR activities with those of the national intelligence community have shown the difficulty of implementing organizational changes that may appear counter to institutional culture and prerogatives. For example, in its January 2001 report, the Space Commission made recommendations to DOD to improve coordination, execution, and oversight of the department’s space activities. Among other things, the Space Commission stated that the heads of the defense and national space communities should work closely and effectively together to set and maintain the course for national security space programs—a subset of ISR capabilities—and to resolve differences that arise between their respective bureaucracies. 
To accomplish this, the Space Commission called for the designation of a senior-level advocate for the defense and national space communities, with the aim of coordinating defense and intelligence space requirements. In response to this recommendation, in 2003 the department assigned the role of Director of the National Reconnaissance Office to the DOD Executive Agent for Space, and the National Security Space Office was established to serve as the action agency of the DOD Executive Agent for Space. The National Security Space Office received both DOD and National Reconnaissance Office funding and was staffed by both DOD and National Reconnaissance Office personnel. However, in July 2005, the Secretary of Defense split the positions of the National Reconnaissance Office Director and the Executive Agent for Space by appointing an official to once again serve exclusively as the Director of the National Reconnaissance Office, citing the need for dedicated leadership at that agency. The National Reconnaissance Office Director subsequently removed National Reconnaissance Office personnel and funding from the National Security Space Office, and restricted the National Security Space Office’s access to a classified information-sharing network, thereby inhibiting efforts to further integrate defense and national space activities—including ISR activities—as recommended by the Space Commission. In another case, DOD officials stated that, when developing the ISR Integration Roadmap, they had difficulty gaining information to include in the Roadmap on national-level ISR capabilities that were funded by the national intelligence budget. Spending on most ISR programs is divided between the national intelligence budget, known as the National Intelligence Program (NIP), and the defense intelligence budget, known as the Military Intelligence Program (MIP). 
The NIP consists of intelligence programs that support national decision makers, especially the President, the National Security Council, and the heads of cabinet departments, to include the Department of Defense. The Director of National Intelligence is responsible for developing and determining the annual NIP budget, which, according to the Office of the Director of National Intelligence, amounted to $43.5 billion in appropriations for fiscal year 2007. To assist in this task, officials from the Office of the Director of National Intelligence stated that they currently use a framework known as the Intelligence Community Architecture, the focus of which is to facilitate the Office of the Director of National Intelligence’s intelligence budget deliberations by providing a set of repeatable processes and tools for decision makers to make informed investment decisions about what intelligence systems, including ISR systems, to buy. According to officials from the Office of the Director of National Intelligence, they were working with DOD to finalize guidance related to the Intelligence Community Architecture as of January 2008. The MIP encompasses DOD-wide intelligence programs and most intelligence programs supporting the operating units of the military services. The USD(I) is responsible for compiling and developing the MIP budget. To assist in informing its investment decisions for MIP-funded activities, the USD(I) is currently employing an investment approach that is intended to develop and manage ISR capabilities across the entire department, rather than by military service or individual program, in order to enable interoperability of future ISR capabilities and reduce redundancies and gaps. The total amount of the annual MIP budget is classified. Given that DOD provides ISR capabilities to the national intelligence community, some defense organizations within DOD’s ISR enterprise are funded through the NIP as well as the MIP. 
For example, three DOD intelligence agencies—the National Security Agency, the National Reconnaissance Office, and the National Geospatial-Intelligence Agency—are included in the NIP. While the Director of National Intelligence is responsible for preparing a NIP budget that incorporates input from NIP-funded defense agencies, such as the National Security Agency, National Reconnaissance Office, and National Geospatial-Intelligence Agency, USD(I) has responsibility for overseeing defense ISR capabilities within the NIP as well as within the MIP. The statutorily required guidelines to ensure the effective implementation of the Director of National Intelligence’s authorities, including budgetary authority over defense intelligence agencies, had not been established as of January 2008. In recognition of the importance of coordinated intelligence efforts, the Secretary of Defense and the Director of National Intelligence signed a memorandum of agreement in May 2007 that assigned the USD(I) the role of Director of Defense Intelligence within the Office of the Director of National Intelligence, reinforcing the USD(I)’s responsibility for ensuring that the investments of both the defense and national intelligence communities are mutually supportive of each other’s roles and missions. The specific responsibilities of this position were defined by a January 2008 agreement signed by the Director of National Intelligence, after consultation with the Secretary of Defense, but it is too early to know whether this new position will increase coordination between the defense and national intelligence communities with regard to planning for current and future spending on ISR capabilities. 
Although DOD and the Office of the Director of National Intelligence have begun working together to coordinate funding mechanisms for joint programs, DOD efforts to ensure funding for major ISR programs that also support national intelligence missions can be complicated when funding for those systems is shared between the separate MIP and NIP budgets. For example, as the program executive for the DOD intelligence budget, the USD(I) is charged with coordinating DOD’s ISR investments with those of the non-DOD intelligence community. A DOD official stated that, as part of the fiscal year 2008 ISR budget deliberations, the USD(I) and the Air Force argued that funding for the Space Based Infrared System and Space Radar satellite systems, which are managed jointly by the Air Force and National Reconnaissance Office, should be shared between the DOD ISR budget and the national intelligence community ISR budget to better reflect that these programs support both DOD and national intelligence priorities. As a result, according to a DOD official, USD(I) negotiated a cost-sharing arrangement with the Director of National Intelligence, and, although the Air Force believed that its funding contribution under the cost-sharing agreement was too high, the Deputy Secretary of Defense ultimately decided that the Air Force would assume the higher funding level. A DOD official stated that the delay in funding for the Space Radar system caused its initial operational capability date to be pushed back by approximately one year. In addition to having separate intelligence budgets, DOD and the Office of the Director of National Intelligence also conduct separate processes to identify future requirements. In DOD, proposals for new ISR capabilities are often developed by the individual services, which identify their respective military needs in accordance with their Title 10 responsibilities to train and equip their forces. 
Proposals for new ISR capabilities may also be developed by defense agencies or combatant commands. Proposals for new ISR capabilities that support defense intelligence requirements may be submitted through DOD’s JCIDS process, at which time the department is to review the proposals to ensure that they meet the full range of challenges that the services may face when operating together as a joint force. The Office of the Director of National Intelligence has its own separate process, carried out by the Mission Requirements Board, which is intended to serve as the approval mechanism for future national intelligence requirements as well as to provide input on future intelligence capabilities being acquired by DOD that may also support national intelligence community missions. According to officials from both the Office of the Director of National Intelligence and DOD, the process carried out by the Office of the Director of National Intelligence is evolving and is less formalized than DOD’s JCIDS process. These separate ISR requirements identification processes for DOD and the Office of the Director of National Intelligence may present challenges for DOD since there are not yet any standard procedures for ensuring that ISR capability proposals affecting both the defense and national intelligence communities are reviewed in a timely manner by both processes. Although there is coordination between the two processes, DOD officials related that the nature of the relationship between JCIDS and the Mission Requirements Board process is still unclear. Officials from the Office of the Director of National Intelligence confirmed that the structure of their office is still evolving, and therefore no standard process currently exists for determining what DOD capability proposals the Mission Requirements Board will review, or what criteria will be used to conduct such reviews. 
Officials from the Office of the Director of National Intelligence stated that Mission Requirements Board members exercise their professional judgment on which DOD systems need to be reviewed and whether enough of the capability is already being delivered by existing systems. Although there is a 2001 Director of Central Intelligence directive that establishes the Mission Requirements Board and calls upon it to oversee, in consultation with DOD’s Joint Requirements Oversight Council, the development of requirements documents that are common to both national and joint military operational users, this directive contains no specific criteria for doing so. Officials from the Office of the Director of National Intelligence stated that they are planning to update this 2001 directive on the Mission Requirements Board. Moreover, coordinating the separate requirements processes to ensure that an ISR capability proposal receives timely input on requirements from both DOD and the national intelligence community can be challenging. DOD and the Office of the Director of National Intelligence have not established systematic procedures or clear guidance for handling situations in which they have different opinions on ISR capability proposals. For example, the Mission Requirements Board withheld approval of a proposal for a new ISR capability in order to ensure that the proposal incorporated certain changes, even though DOD had already given its approval to the proposal through the JCIDS process. The unclear nature of the relationship between DOD’s and the Office of the Director of National Intelligence’s ISR requirements identification processes may complicate DOD efforts to develop future ISR systems that provide capabilities across the defense and national intelligence communities. To improve the integration of its ISR investments, DOD has developed two initiatives—the ISR Integration Roadmap and a test case for managing ISR investments as part of a departmentwide portfolio of capabilities. 
These initiatives are positive steps toward managing ISR investments from an enterprise-level perspective rather than from a service or agency perspective. However, our review has shown that these initiatives do not provide ISR decision makers with two key management tools: (1) a clearly defined vision of a future ISR enterprise that lays out what investments are needed to achieve strategic goals, and (2) a unified investment management approach with a framework that ISR decision makers can use to weigh the relative costs, benefits, and risks of proposed investments using established criteria and methods. Without these key tools, ISR decision makers lack a robust ISR analytical framework they can use to assess different ISR investments in order to identify the best return on investment in light of strategic goals. As a result, senior DOD leaders are not well-positioned to exert discipline over ISR spending to ensure ISR investments reflect enterprisewide priorities and strategic goals. Based on our review and analysis, DOD’s ISR Integration Roadmap does not yet provide (1) a clear vision of a future integrated ISR enterprise that identifies what ISR capabilities are needed to achieve DOD’s strategic goals, or (2) a framework for evaluating tradeoffs between competing ISR capability needs and assessing how ISR capability investments contribute toward achieving those goals. DOD issued the ISR Integration Roadmap in May 2005 in response to a statutory requirement that directed USD(I) to develop a comprehensive plan to guide the development and integration of DOD ISR capabilities. DOD updated the Roadmap in January 2007. As we testified in April 2007, the Roadmap comprises a catalogue of detailed information on all the ISR assets being used and developed across DOD, including ISR capabilities related to collection, communication, exploitation, and analysis. 
Given the vast scope of ISR capabilities, which operate in a variety of media and encompass a range of intelligence disciplines, the ISR Integration Roadmap represents a significant effort on the part of DOD to bring together information needed to assess the strengths and weaknesses of current ISR capabilities. DOD officials have acknowledged that the Roadmap has limitations and stated that those limitations will be addressed in future revisions. As DOD develops future revisions of the ISR Integration Roadmap, enterprise architecture is a valuable management tool that the department could use to develop a clear vision of a future ISR enterprise and a framework for evaluating tradeoffs between competing ISR needs and assessing how future ISR investments contribute to achieving strategic goals. Our previous work has shown that effective use of enterprise architecture is a hallmark of successful public and private organizations. An enterprise architecture provides a clear and comprehensive picture of an organization, consisting of snapshots of its current (As-Is) state and its target (To-Be) state, and a transition plan for moving between the two states, and it incorporates considerations such as technology opportunities, fiscal and budgetary constraints, legacy and new system dependencies and life expectancies, and the projected value of competing investments. DOD and federal guidance on enterprise architecture state that a framework for achieving an integrated enterprise should be based on a clearly defined target architecture, or vision, for a future enterprise derived from an analysis of the organization’s future requirements and strategic goals. 
A target architecture for the DOD ISR enterprise would (1) describe the structure of the future ISR enterprise and its desired capabilities in a way that is closely aligned with DOD ISR enterprise strategic goals, and (2) include metrics that facilitate evaluating tradeoffs between different investments and periodic assessment of progress toward achieving strategic goals. Since it is likely that the architecture will evolve over time and be revised, it may also include an exploration of alternative investment options, and an acknowledgment of unknown factors. A clearly defined target architecture that depicts what ISR capabilities are required to achieve strategic goals would provide DOD with a framework for assessing its ISR capability gaps and overlaps by comparing its existing ISR capabilities to those laid out in the target architecture. Identified capability gaps and overlaps would be the basis for guiding future ISR capability investments in order to transition the ISR enterprise from its current state toward the desired target architecture. Furthermore, as our previous work has emphasized, resources for investments such as those in ISR capabilities are likely to be constrained by fiscal challenges in the federal budget. By clearly defining what ISR capabilities are required to achieve strategic goals over time, with metrics for assessing progress, an ISR target architecture would provide DOD with a framework for prioritizing its ISR investments when programs are affected by fiscal or technological constraints and an understanding of how changes to investment decisions in response to those constraints affect progress toward achieving strategic goals. 
The ISR Integration Roadmap does not provide a clearly defined target architecture—or vision—of a future ISR enterprise or a framework for assessing progress toward achieving this vision because, in developing the Roadmap, USD(I) chose to take an incremental approach that limited it to articulating how capabilities already in DOD’s existing ISR budget support strategic goals, rather than developing a longer term, more comprehensive target architecture based on an analysis of ISR capability needs beyond those defined in the existing DOD budget. In doing so, DOD did not fully address the time frame and subject areas listed in the statute. Congress tasked USD(I) to develop a plan to guide the development and integration of DOD ISR capabilities from 2004 through 2018, and to provide a report with information about six different management aspects of the ISR enterprise. However, USD(I) limited the Roadmap to the 5-year period covered by the existing ISR budget, and did not address three of the six areas the statute listed. The three areas listed in the statute that USD(I) did not cover were (1) how DOD intelligence information could enhance DOD’s role in homeland security, (2) how counterintelligence activities of the armed forces and DOD intelligence agencies could be better integrated, and (3) how funding authorizations and appropriations could be optimally structured to best support development of a fully integrated ISR architecture. USD(I) officials stated that due to the difficulty of projecting future operational requirements given ever-changing threats and missions, developing a detailed future ISR architecture beyond the scope of the capabilities already included in the 5-year ISR budget is very challenging. As such, the initial versions of the ISR Integration Roadmap were limited to the existing ISR budget. 
Due to the limited scope of the ISR Integration Roadmap, it does not present a clear vision of what ISR capabilities are required to achieve strategic goals. In relying on DOD’s existing ISR budget rather than developing a target architecture that details what ISR capabilities are required to achieve strategic goals, the Roadmap does not provide ISR decision makers with a point of reference against which to compare existing DOD ISR assets with those needed to achieve strategic goals. A clearly defined point of reference is needed to comprehensively identify capability gaps or overlaps. This limits the utility of the Roadmap as the basis for an ISR investment strategy linked to achieving strategic goals. For example, the most recent revision of the ISR Integration Roadmap lists global persistent surveillance as an ISR strategic goal but does not define the requirements for global persistent surveillance or how DOD will use current and future ISR assets to attain that goal. The Roadmap states that the department will conduct a study to define DOD’s complete requirements for achieving global persistent surveillance. The study was launched in 2006 but was limited to the planning and direction of ISR assets, which constitutes only one of the six intelligence activities, collectively known as the intelligence process, that would interact to achieve the global persistent surveillance goal. Because the study is limited to only the planning and direction intelligence activity, it will not examine whether there are capability gaps or overlaps in other areas, such as collection systems that include unmanned aircraft systems and satellites, or intelligence information-sharing systems, and therefore is unlikely to define complete requirements for achieving this strategic goal. 
While DOD has other analytical efforts that could be used in assessing global persistent surveillance capability needs, these efforts are generally limited in scope to addressing the immediate needs of their respective sponsors. For example, U.S. Strategic Command’s Joint Functional Component Command for ISR conducts assessments of ISR asset utilization and needs. However, these assessments are primarily intended to inform that organization’s ISR asset allocation process, rather than to identify enterprisewide capability gaps with respect to strategic goals. Further, lacking a target architecture, the Roadmap does not provide ISR decision makers a framework for evaluating tradeoffs between competing needs and assessing progress in achieving goals. As figure 3 illustrates, a clearly defined ISR target architecture would serve as a point of reference for ISR decision makers to develop a transition plan, or investment strategy for future ISR capability investments, based on an analysis that identifies capability gaps and overlaps against the ISR capabilities needed to achieve the target architecture, which would be based on DOD ISR strategic goals. Such an analysis would provide ISR decision makers with an underlying analytical framework to (1) quantify the extent of shortfalls, (2) evaluate tradeoffs between competing needs, and (3) derive a set of metrics to assess how future ISR investments contribute to addressing capability shortfalls. With this analytical framework, ISR decision makers at all levels of DOD would have a common set of analytical tools to understand how changing investment levels in different ISR capabilities would affect progress toward achieving goals. This same set of tools could be used by different ISR stakeholders evaluating how proposed ISR capabilities contribute to addressing different gaps or to possibly saturating a given capability area. 
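The three elements of the analytical framework described above—quantifying shortfalls against a target architecture, evaluating tradeoffs between competing investments, and deriving a progress metric—can be sketched as a toy model. Everything in the sketch below is a hypothetical illustration for exposition only: the capability areas, scores, costs, and the simple cost-effectiveness ranking are invented and do not represent DOD data or any actual DOD methodology.

```python
# Toy illustration (hypothetical data) of the three-part framework:
# (1) quantify shortfalls against a target architecture,
# (2) evaluate tradeoffs between competing investments, and
# (3) track a simple progress metric toward strategic goals.

from dataclasses import dataclass


@dataclass
class CapabilityArea:
    name: str
    current: float  # assessed capability level, 0-100 (invented scale)
    target: float   # level required by the target architecture

    @property
    def gap(self) -> float:
        # (1) Quantified shortfall; zero when the area is saturated.
        return max(self.target - self.current, 0.0)


@dataclass
class Investment:
    name: str
    area: str
    cost: float        # $M, hypothetical
    gap_closed: float  # projected capability-level improvement


def rank_investments(areas: list[CapabilityArea],
                     candidates: list[Investment]) -> list[Investment]:
    """(2) Rank candidates by gap closed per dollar, dropping any
    investment aimed at an already-saturated capability area."""
    gaps = {a.name: a.gap for a in areas}
    useful = [c for c in candidates if gaps.get(c.area, 0.0) > 0]
    return sorted(useful, key=lambda c: c.gap_closed / c.cost, reverse=True)


def progress_metric(areas: list[CapabilityArea]) -> float:
    """(3) Share of total target capability currently achieved."""
    total_target = sum(a.target for a in areas)
    achieved = sum(min(a.current, a.target) for a in areas)
    return achieved / total_target


if __name__ == "__main__":
    areas = [
        CapabilityArea("collection", current=70, target=90),
        CapabilityArea("information sharing", current=40, target=85),
        CapabilityArea("analysis", current=88, target=85),  # saturated area
    ]
    candidates = [
        Investment("new sensor", "collection", cost=200, gap_closed=10),
        Investment("network upgrade", "information sharing", cost=150, gap_closed=30),
        Investment("analyst tools", "analysis", cost=50, gap_closed=5),
    ]
    for inv in rank_investments(areas, candidates):
        print(inv.name)  # network upgrade ranks first; analyst tools dropped
    print(f"progress: {progress_metric(areas):.0%}")
```

The point of the sketch is not the particular ranking rule, which a real framework would replace with strategic-priority weights and interdependency analysis, but that a common quantified baseline lets different ISR stakeholders compare options against the same set of gaps rather than against service-specific criteria.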
For example, such a framework would allow ISR decision makers to identify areas where ISR collection capabilities are sufficiently robust or even saturated—areas where further investment may not be warranted given priority needs in other less robust collection areas. Moreover, lacking a target architecture that depicts what capabilities are required to achieve DOD’s strategic goals for the ISR enterprise, the Roadmap does not serve as a guide for the development of future ISR capabilities. A comprehensive source of information on how different ISR capabilities support strategic goals, and relate to other ISR capabilities, would be useful not only to ISR decision makers evaluating tradeoffs between competing needs, but also to program managers developing proposals for new ISR capabilities. Officials responsible for reviewing proposals for new ISR capabilities stated that a long-term vision of a future end state for the ISR enterprise would help sponsors to see what future ISR capabilities DOD needs and how their needs align with DOD’s strategic goals. For example, officials from DOD’s National Signatures Program said that, although they had a clear program goal in mind when developing their proposal for this new ISR capability, they experienced difficulty in developing an architecture because they lacked a comprehensive source of information to assess the full range of DOD and non-DOD databases and ISR assets that their proposed program would need to support. Instead, these officials had to conduct an ad hoc survey of the ISR community, primarily in the form of meetings with other groups that maintained signatures databases, to ensure their program would be sufficiently interoperable with other information-sharing networks and ISR sensors. 
Without a clearly defined target architecture for the ISR enterprise, DOD lacks an analytical framework for conducting a comprehensive assessment of what investments are required to achieve ISR strategic goals, or for prioritizing investments in different areas when faced with competing needs. Instead of providing an underlying analytical framework, the ISR Integration Roadmap simply lists capability gaps that exist with respect to DOD ISR strategic objectives, and depicts ISR capability investments already in the DOD ISR budget as fully meeting those capability shortfalls. For example, the Roadmap lists as an ISR strategic goal the achievement of “horizontal integration of intelligence information,” which is broadly defined as making intelligence information within the defense intelligence enterprise more accessible, understandable, and retrievable. The Roadmap then lists a variety of ISR investments in DOD’s 5-year ISR budget as the means of achieving this strategic goal. For example, one of these investments is the Distributed Common Ground System, a major DOD intelligence information-sharing network that spans the entire DOD intelligence community. However, the Roadmap does not present an analysis to facilitate evaluation of tradeoffs in that it does not quantify how the Distributed Common Ground System and other DOD information-sharing networks fall short of meeting the “horizontal integration of intelligence information” strategic goal, nor does it examine the extent to which some aspects of that capability area may in fact be saturated. Furthermore, the Roadmap does not prioritize investments in the Distributed Common Ground System with other major investments intended to achieve this strategic goal, or define their interrelationships. Finally, the Roadmap does not provide metrics to allow decision makers to assess how these investments contribute to achieving the “horizontal integration of intelligence information” strategic goal. 
For example, if the Distributed Common Ground System faced fiscal or technological constraints and did not achieve the capability milestones envisioned in the Roadmap, ISR decision makers would not have the information needed to assess the impact on ISR strategic goals. As a result, ISR decision makers cannot assess how new ISR capabilities would contribute to elimination of whatever capability gaps exist in that area, determine the most important gaps to fill, or make tough go/no-go decisions if those capabilities do not meet expectations. While DOD’s ISR portfolio management effort is intended to enable the department to better integrate its ISR capabilities, it does not provide a framework for effectively evaluating different ISR investment options or clearly empower the ISR portfolio manager to direct ISR spending. As a result, DOD is not well-positioned to implement a unified investment approach that exerts discipline over ISR investments to ensure they reflect enterprisewide priorities and achieve strategic goals. In September 2006, the Deputy Secretary of Defense decided to bring ISR systems across the DOD together into a capability portfolio as part of a test case for the joint capability portfolio management concept. Under this concept, a group of military capabilities, such as ISR capabilities, is managed as a joint portfolio, in order to enable DOD to develop and manage ISR capabilities across the entire department—rather than by military service or individual program—and by doing so, to improve the interoperability of future capabilities, minimize capability redundancies and gaps, and maximize capability effectiveness. The USD(I) was assigned as the lead office for this ISR portfolio, which is known as the battlespace awareness portfolio. 
As the portfolio manager for ISR investments, USD(I) has a role and authorities that are limited to two primarily advisory functions: (1) USD(I) is given access to, and may participate in, service and DOD agency budget deliberations on proposed ISR capability investments, and (2) USD(I) may recommend that service and DOD agency ISR spending be altered as part of the established DOD budget review process. Under this arrangement, USD(I)’s recommendations represent one of many points of view that are considered by the Deputy Secretary of Defense and other DOD offices involved in reviewing and issuing budget guidance, and therefore USD(I) lacks the ability to ensure ISR spending reflects enterprisewide priorities to achieve strategic goals. Our previous work on portfolio management best practices has shown that large organizations, such as DOD’s ISR enterprise, are most successful in managing investments through a single enterprisewide approach. Further, to be effective, portfolio management is enabled by strong governance with committed leadership, clearly aligned organizational roles and responsibilities, and portfolio managers empowered to determine the best way to invest resources. To achieve a balanced mix of programs and ensure a good return on their investments, successful large commercial companies that we have reviewed take a unified, enterprise-level approach to assessing new investments, rather than employing multiple, independent initiatives. They weigh the relative costs, benefits, and risks for proposed investments using established criteria and methods, and select those investments that can best move the company toward meeting its strategic goals and objectives. Their investment decisions are frequently revisited to ensure products are still of high value, and if a product falls short of expectations, they make tough go/no-go decisions. 
We have previously recommended that DOD establish portfolio managers who are empowered to prioritize needs, make early go/no-go decisions about alternative solutions, and allocate resources within fiscal constraints. However, since DOD is still developing the capability portfolio management effort, it has not fully defined the role of the portfolio managers or their authority over spending. DOD’s September 2006 guidance on the implementation of the portfolio management test case discusses options for increased authority over spending for the portfolio managers. Nevertheless, USD(I) and DOD officials involved in the implementation of the portfolio management effort stated that DOD views the role of the portfolio managers primarily as providing an assessment of spending in their respective portfolio areas independent of the analysis offered by the military services in support of their ISR spending proposals. If USD(I)’s portfolio management role is limited to an advisory function as DOD moves forward in implementing its portfolio management effort, situations where senior DOD officials must evaluate the merits of alternate analyses that advocate different solutions to ISR capability needs are likely to continue to arise. A robust ISR analytical framework based on a well-defined ISR target architecture would establish a common methodology and criteria, as called for by portfolio management best practices, that is agreed upon by the various ISR stakeholders and that can be used for conducting a data-driven assessment of different ISR capability solutions. For example, as part of fiscal year 2008 ISR budget deliberations, USD(I) conducted an analysis of planned increases in fiscal year 2008 funding to procure more Predator unmanned aircraft systems in order to meet U.S. Central Command’s need for increased surveillance capability. U.S. 
Central Command and the Air Force conducted an analysis that was based on validating the requirement for more aircraft, rather than on examining potential efficiencies in other aspects of employing them. As the ISR portfolio manager, USD(I) focused its analysis on identifying opportunities for increased efficiencies in how existing Predators were being employed in surveillance missions. USD(I) determined, among other things, that Predator support to deployed forces was not being maximized because each ground control station could only operate one Predator aircraft at a time, resulting in gaps in the coverage of a target as Predator aircraft rotated to and from the launch area. On the basis of this analysis, USD(I) concluded that planned increases in fiscal year 2008 Predator spending may not be the best, or only, solution to U.S. Central Command’s need for more surveillance capability; instead, the solution should include additional Predator ground control stations, or the tasking of other ISR assets in situations where a Predator would have longer transit times to and from the target area. The ISR Integration Council agreed with USD(I)’s recommendation. Ultimately, the Deputy Secretary of Defense, who makes final decisions on changes advocated by the ISR portfolio manager, included the increase in Predator aircraft spending in the fiscal year 2008 budget. However, lacking a single, agreed-upon framework within the ISR enterprise for evaluating the merits of the alternate analyses advocating different capability solutions, DOD officials did not have the benefit of a single, authoritative analysis that identified the best return on investment of these different ISR investment options in light of strategic goals and validated requirements. 
Given USD(I)’s limited authority as the ISR capability portfolio manager, and the lack of a framework for effectively evaluating alternate investment plans, DOD is constrained in its ability to implement an enterprise-level, unified investment approach that employs a single set of established criteria to ensure its ISR investments reflect enterprisewide priorities and strategic goals. DOD has not implemented key activities within the JCIDS process to ensure that proposed new ISR capabilities are filling gaps, are not duplicative, and use a joint approach to addressing warfighters’ needs. The services and DOD organizations that sponsored most of the JCIDS proposals for new ISR capabilities since 2003 have not conducted comprehensive assessments, and the BA FCB has not fully conducted key oversight activities. Specifically, our review of 19 proposals for new ISR capabilities that sponsors submitted to the BA FCB since 2003 showed that 12 sponsors did not complete the capabilities-based assessment of current and planned ISR systems called for by Joint Staff policy in order to identify possible solutions to meet warfighters’ needs. We also found that, for the 7 sponsors who did conduct these assessments, the assessments varied in completeness and rigor. Moreover, we found that the BA FCB did not systematically coordinate with the sponsors during the sponsors’ assessment process to help ensure the quality of the assessments, and did not generally review the assessments once they were completed. As a result, DOD lacks assurance that ISR capabilities approved through JCIDS provide joint solutions to DOD’s ISR capability needs and are the solutions that best minimize inefficiency and redundancy. 
Joint Staff policy and guidance implementing the JCIDS process, as well as a significant DOD study on defense capabilities, indicate the importance of analyzing capability needs from a crosscutting, department-level perspective to enable a consistent view of priorities and acceptable risks. Specifically, Joint Staff policy on the JCIDS process calls for sponsors to use a robust analytical process to ensure that the proposed ways to fill capability gaps are joint and efficient to the maximum extent possible. This analytical process is known as a capabilities-based assessment, and Joint Staff policy and guidance specify that a capabilities-based assessment should include an analysis of the full range of existing and developmental ISR capabilities to confirm whether a shortcoming in mission performance exists, and of possible ways to fix those shortcomings, such as modifications to existing systems and the use of national-level systems. Nonetheless, Joint Staff guidance also notes that the breadth and depth of a capabilities-based assessment must be tailored to suit the issue, due to the wide array of issues considered as part of the capabilities-based assessment process. Despite Joint Staff policy that calls for capabilities-based assessments, we found that 12 sponsors—almost two-thirds—did not carry out capabilities-based assessments to identify the ISR capabilities that they proposed to the Joint Staff as ways to meet warfighters’ needs. Figure 4 lists the 19 ISR capability proposals we reviewed and specifies which proposals were supported by capabilities-based assessments. Figure 4 also shows that three of the proposals that lacked capabilities-based assessments were ones that DOD expected to cost more than $365 million for research, development, test and evaluation, or more than $2.190 billion for procurement, using fiscal year 2000 constant dollars. 
The 12 sponsors that did not conduct capabilities-based assessments, as called for under the JCIDS process, cited the following reasons for not doing them: Sponsors decided to use pre-existing analysis as an alternative to the capabilities-based assessment. Many of the sponsors that did not conduct formal capabilities-based assessments nevertheless based their proposals for new ISR capabilities on other forms of analysis or pre-existing mission needs statements. For example, Air Force sponsors stated that they supported their ISR capability proposal with analysis conducted in 1998 and 1999 and a mission needs statement from 2002, before JCIDS was implemented, while National Security Agency sponsors used the results of a substantial analytical effort they had completed just prior to the implementation of JCIDS in 2003. We did not evaluate these alternative types of analysis because they were not required to take the form of capabilities-based assessments as called for by Joint Staff policy and guidance on JCIDS. Sponsors had developed the capabilities prior to the implementation of JCIDS. Two Air Force proposals, both submitted to the Joint Staff in 2004, lacked capabilities-based assessments and, according to the sponsors of each, the Air Force had previously developed ISR systems that were similar to those described in their proposals prior to the implementation of JCIDS. Once JCIDS was implemented, the sponsor sought to obtain Joint Staff approval through the new process; since their ISR systems were already in development and pre-JCIDS analysis may have been conducted, the sponsors did not conduct the capabilities-based assessments. Other sponsors that had developed ISR systems prior to JCIDS being implemented nevertheless conducted capabilities-based assessments when they submitted their proposals. 
For example, one sponsor developed its proposal and performed its assessment at least 2 years after its organization officially established the program, and another sponsor’s proposal was for a capability to be delivered through an upgrade of an aircraft developed in the late 1960s. These sponsors also sought approval for their ISR systems through the new JCIDS process, but since their systems were already in development, our review showed that these sponsors’ capabilities-based assessments indicated they had the solution already in mind when conducting the assessments. Sponsors developed the capabilities through DOD processes other than JCIDS. Joint Staff policy allows for sponsors to develop a new capability through processes other than JCIDS and then later submit it to the Joint Staff for approval through JCIDS. For example, one sponsor said that it did not perform an assessment prior to developing its proposal because the service originally developed and validated the proposed capability through a technology demonstration process separate from the JCIDS process. Sponsors lacked clear guidance on the JCIDS process, including how to conduct a capabilities-based assessment. One Air Force sponsor that submitted an ISR capability proposal in 2005 said that the Joint Staff policy implementing the JCIDS process was relatively new at the time, and did not contain clear guidance about how to conduct a capabilities-based assessment. Another sponsor did not conduct an assessment because the ISR capability it sought to develop was not a system, but rather a way of carrying out ISR-related activities, and it believed that, in such cases, a capabilities-based assessment was not expected. Sponsors had limited time and resources in which to carry out a capabilities-based assessment. Two sponsors cited lack of resources, including time, as a reason for not conducting a capabilities-based assessment. 
In one of these cases, the sponsor noted that conducting a capabilities-based assessment would not likely have resulted in a different type of capability being proposed to the Joint Staff. Our review found that 7 of the 19 sponsors conducted capabilities-based assessments, but these assessments varied in rigor and completeness. For example, 4 of these 7 sponsors did not include the cost information called for by Joint Staff guidance and 1 sponsor completed only one phase of the capabilities-based assessment. Figure 5 shows the 7 sponsors that did conduct capabilities-based assessments in support of their proposals and the extent to which these assessments contained elements called for by Joint Staff policy and guidance. We assessed these proposals as lacking an element called for by Joint Staff policy and guidance when our document review of the sponsor’s capabilities-based assessment found no evidence of the element. Additional information about our methodology for conducting this analysis is contained in appendix I. The majority of the seven capabilities-based assessments that we reviewed did not consider the full range of existing ISR capabilities, including the use of national systems, such as satellites, as potential ways to fill identified shortcomings. For example, only one assessment documented that the sponsor had considered the use of national systems. Specifically, one Air Force sponsor’s capabilities-based assessment showed consideration of the use of satellites to assist in quickly sending intelligence information gathered by unmanned aircraft systems to the warfighter in theater. The remaining six sponsors did not demonstrate in their capabilities-based assessments that they had fully assessed the use of national systems, although two of the assessments addressed capabilities that were unlikely to utilize national systems as potential solutions, such as a foreign language translation capability and an intelligence database. 
The sponsors who did not fully assess the potential for national systems to fill gaps gave a number of reasons for this. Navy sponsors of a manned platform told us that satellites were not included among the ways that they considered to fill capability gaps because the personnel conducting the assessment did not possess the appropriate security clearances needed to evaluate national systems and because of lack of time. Moreover, Marine Corps sponsors reported that neither of their two unmanned aircraft system capability proposals fully evaluated the use of satellites as potential ways to meet ISR needs because they assumed that satellites could not be quickly re-tasked to support the tactical user and lacked the imagery quality needed. In one of their assessments, they noted that satellite data, when available, are not responsive enough to the tactical user due to the long processing time, and that tactical users of satellite data also face challenges resulting from lack of connectivity between the systems that provide these data. In the other assessment, Marine Corps sponsors stated that one of their assumptions in conducting the analysis was that satellites, as well as theater-level unmanned aircraft systems, would not be available to support Marine Corps tactical operations. All seven sponsors that conducted capabilities-based assessments considered the capacity of some existing and developing systems to meet capability gaps, but none documented in their assessments whether and how these systems could be modified to fill capability gaps—a potentially less expensive and less time-consuming solution than developing a new system. In some cases, DOD achieved efficiencies by combining related acquisition programs, although these actions were not the result of sponsors proactively seeking reduced overlap and duplication. 
For example, in the capabilities-based assessment for one of its two unmanned aircraft systems, Marine Corps sponsors identified several solutions with the potential to provide an ISR capability using existing or planned assets. Identified solutions included relying on or adopting systems provided by other services. In this case, the sponsors did not propose modifications to any existing systems as potential solutions or demonstrate that they considered leveraging the capabilities resident in a similar Navy unmanned aircraft system. The Joint Staff approved this proposal and Marine Corps officials plan to develop a new system that addresses Marine Corps warfighting requirements for vertical takeoff and landing capability for use on ships. In contrast, in another case involving a proposed capability sponsored by the Marine Corps, at the direction of the Assistant Secretary of the Navy for Research, Development, and Acquisition, the Marine Corps combined its unmanned aircraft system program with a different Navy effort to form a single acquisition program, with the goal of producing an integrated and interoperable solution, reducing costs, and eliminating overlap and duplication of development efforts. In this case, the JCIDS process did not help to identify the potential for collaboration on similar ISR capabilities. The majority of sponsors’ capabilities-based assessments that we reviewed did not mention redundancies that existed or might result from the development of their proposed new ISR capabilities. Specifically, only three of the seven sponsors demonstrated that they had considered potential redundancies in ISR capabilities when conducting their assessments. For example, the Defense Intelligence Agency sponsor of a proposal to develop a database cited the need to reduce redundant data systems as a reason for its proposed capability. 
In addition, a Marine Corps sponsor noted in its capabilities-based assessment that existing ISR systems are experiencing overlaps in five capability areas related to identification, monitoring, and tracking. Despite these examples of identified redundancies in existing ISR capabilities, all of the sponsors concluded that important capability gaps still existed and submitted proposals that supported the development of a new ISR capability. The seven sponsors of the capabilities-based assessments that were not thorough and complete provided similar reasons as those provided by the sponsors that did not conduct capabilities-based assessments at all—for example, a shortage of time and resources and confusion about what was required under the JCIDS process. In addition, some sponsors had already developed a capability, or had the intended solution in mind, when conducting their capabilities-based assessments. Moreover, sponsors that conducted the assessments were hindered by a lack of comprehensive information on existing and developmental ISR capabilities that might potentially be used to fill the identified capability gap, and so could not use this information to fully inform their assessments. Several sponsors that conducted assessments told us that they faced challenges in identifying the full range of existing and developmental-stage ISR systems, in part because no centralized source of information existed. For example, Army sponsors of a language translation capability said that, despite use of personal connections and outreach to identify existing and developmental technologies, it was only after they had finished their capabilities-based assessment that they learned of a particular ISR technology that could have informed their assessment. Sponsors agreed that a source of readily available information on existing and developmental ISR capabilities would be useful. 
Although the BA FCB’s mission includes engaging in coordination during the sponsors’ assessment process and providing oversight of potential solutions to achieve optimum effectiveness and efficiency in ISR capability development, the BA FCB did not systematically coordinate with the sponsors to help ensure the quality of their capabilities-based assessments, nor did it routinely review those assessments once they were completed. The BA FCB did not implement these activities because it lacks a readily available source of information that identifies all ISR capabilities that would serve as a tool for reviewing the efficiency of sponsors’ assessments, and because the BA FCB does not have a monitoring mechanism, which could ensure that key oversight activities are fully implemented, as described in Joint Staff policy. In addition, BA FCB officials said that they lack adequate numbers of dedicated, skilled personnel to engage in early coordination with the sponsors and review the sponsors’ capabilities-based assessments. As a result, DOD cannot be assured that ISR capabilities approved through JCIDS provide joint solutions to DOD’s ISR capability needs and are the solutions that best minimize inefficiency and redundancy. As described in Joint Staff policy, each Functional Capabilities Board’s mission is to provide assessments and recommendations to enhance capabilities integration, examine joint priorities among existing and future programs, minimize duplication of effort throughout the services, and provide oversight of potential solutions to achieve optimum effectiveness and efficiency. Moreover, Joint Staff policy states that each Functional Capabilities Board’s functions include assisting in overseeing capabilities development within JCIDS through assessment of proposals for new or improved capabilities. 
The BA FCB is the Functional Capabilities Board that holds responsibility for the ISR functional area and, as such, is responsible for seeking to ensure that the joint force is best served throughout the JCIDS process. Additionally, Joint Staff policy calls on each Functional Capabilities Board and its working group to perform coordination functions within its respective capability area, to include (1) engaging in coordination throughout the sponsors’ assessment process in order to promote cross-service efficiencies, and (2) coordinating and integrating departmentwide participation to ensure that sponsors’ assessments adequately leverage the expertise of the DOD components to identify promising solutions. Through these assessment and coordination functions, as well as other feedback avenues, the BA FCB provides the analytical underpinnings in support of the Chairman of the Joint Chiefs of Staff’s Joint Requirements Oversight Council. After assessing proposals and coordinating departmentwide participation, the BA FCB then makes recommendations on ISR capability proposals to the Chairman of the Joint Chiefs of Staff in order to assist in the Chairman’s task of identifying and assessing the priority of joint capabilities, considering alternatives to acquisition programs, and ensuring that the priority of joint capabilities reflects resource levels projected by the Secretary of Defense. Despite its coordination role, the BA FCB did not routinely engage in early coordination with sponsors to communicate information necessary to ensure comprehensive and rigorous analysis and to ensure that sponsors were aware of other organizations’ and services’ existing and developmental ISR capabilities. 
Our review showed that the BA FCB did not coordinate with five of the seven sponsors while they were conducting their capabilities-based assessments, although Joint Staff policy calls upon the BA FCB to do so in order to promote efficiencies in ISR capability development and to ensure that sponsors’ assessments adequately leverage the expertise of the DOD components to identify promising solutions. The five sponsors told us that they coordinated with the BA FCB only after they had submitted their completed ISR capability proposals to the BA FCB. Of the remaining two sponsors, one had minimal interaction with the BA FCB, while the other was in contact with a member of the BA FCB working group while conducting the capabilities-based assessment. Once the BA FCB received copies of these ISR capability proposals, it did facilitate departmentwide participation by serving as a forum where DOD components formally commented on ISR capability proposals. Sponsors are nevertheless responsible for addressing and resolving these comments. For example, during the commenting process for an Army proposal for a language translation capability, the National Security Agency expressed disagreement, commenting that the Army proposal omitted practical descriptions of how the technology would be achieved and did not address policy and programming issues that it believed were the underlying cause of the capability gap. Thus, although the BA FCB oversaw the commenting process and provided the forum in which this discussion took place, the Army and the National Security Agency resolved their disagreement by revising the proposal with limited Joint Staff involvement. Furthermore, the BA FCB did not systematically review the quality of the sponsors’ capabilities-based assessments. 
Although the BA FCB is not required by Joint Staff policy and guidance to review the sponsors’ capabilities-based assessments, such a review would serve as a means of providing oversight of potential solutions to achieve optimum effectiveness and efficiency—a key BA FCB task. Moreover, the lack of early coordination to ensure the quality of the sponsors’ assessments makes the review of the completed assessments an important tool for enhancing capabilities integration and minimizing redundancies. BA FCB members noted that sponsors’ analysis can and does take a variety of forms, including studies that were done on related topics but were not initially intended to support the ISR capability proposal. Members of the BA FCB stated that they look for evidence of analysis underpinning the ISR capability proposal, and if analysis has been conducted, they generally consider it sufficient. However, BA FCB officials also told us that they generally do not review sponsors’ capabilities-based assessments when evaluating proposals for new ISR capabilities. We found that, of the seven capabilities-based assessments that the sponsors conducted, the BA FCB obtained copies of six, which the sponsors proactively provided. For the one remaining capabilities-based assessment, the sponsor reported that it did not provide copies of its assessment and the BA FCB did not request them. In addition, the BA FCB did not obtain or systematically review any alternative types of analysis that were used in place of a capabilities-based assessment by the other sponsors that did not conduct capabilities-based assessments. In all of these cases, the BA FCB neither requested copies of the analysis, nor did the sponsor proactively provide its alternative type of analysis. 
The BA FCB did not effectively oversee the process for developing future ISR capabilities by ensuring the implementation of existing guidance related to oversight activities, such as coordination with sponsors and reviews of assessments, for three key reasons. First, the BA FCB has not developed tools to enable systematic review of sponsors’ capabilities-based assessments. Specifically, the BA FCB lacks a comprehensive source of information, augmenting the ISR Integration Roadmap, that would identify the full range of existing and developmental ISR capabilities within the ISR enterprise and serve as a tool for assessing the jointness and efficiency of the sponsors’ proposed ISR solutions. Although BA FCB officials agreed that knowing the full range of existing and developmental ISR capabilities would be useful in reviewing sponsors’ ISR capability proposals, no such complete and up-to-date source of information currently exists. Without readily available information about existing and developmental ISR capabilities, the BA FCB is limited in its ability to systematically review sponsors’ capabilities-based assessments to promote cross-service efficiencies in ISR capability development and to conduct oversight of potential solutions to achieve optimum effectiveness and efficiency. Moreover, the majority of the sponsors that conducted assessments said they could not be certain that they had gathered all relevant information to inform their respective assessments, stating that their efforts to obtain information on existing and developmental ISR capabilities were not systematic and often dependent on the use of personal contacts. Some sponsors did take steps to identify existing DOD ISR capabilities when conducting their assessments, such as reviewing a JCIDS database containing other ISR capability proposals and contacting others, both within and outside of their organizations, about potentially related ISR capabilities. 
Nonetheless, the JCIDS database only contains information on proposals submitted to the Joint Staff, not on existing and developmental ISR capabilities that have been developed and fielded through DOD processes other than JCIDS. In the absence of a comprehensive source of information and early coordination to facilitate the sharing of such information from the BA FCB to the sponsors, sponsors drew from incomplete informational sources when conducting their capabilities-based assessments and became aware of shortfalls late in the review process. For example, one sponsor said its proposal passed through two levels of Joint Staff review before the sponsor was asked, at the final level of review, whether leveraging a particular technology had been considered as a potential way to fill an identified capability gap; the technology had not been considered because the sponsor was not aware of it. In another case, a request from a high-level Joint Staff official later in the review process resulted in a Navy sponsor and the BA FCB conducting an ad hoc effort, after the development of the proposal, to research and develop a list of all DOD's ISR capabilities and demonstrate that a relevant capability gap existed. Second, the BA FCB does not have the ability to effectively oversee the process for developing future ISR capabilities because there is no monitoring mechanism to ensure that key activities—such as early coordination between sponsors and the BA FCB to facilitate the sharing of information relevant to the sponsors' assessments, and BA FCB review of the assessments—are fully implemented. Standards for internal control in the federal government provide a framework for agencies to achieve effective and efficient operations and ultimately to improve accountability. One of these standards requires that monitoring, such as supervisory activities, should assess the quality of performance over time. 
Specifically, managers should (1) identify performance gaps by comparing actual performance and achievements to planned results, and (2) determine appropriate adjustments to program management, accountability, and resource allocation in order to improve overall mission accomplishment. To this end, managers should use both ongoing monitoring activities as well as separate evaluations to identify gaps, if any, in performance. Without the development of a monitoring mechanism to ensure implementation of key activities, the BA FCB may not be well-positioned to carry out its oversight of new ISR capabilities as called for by existing Joint Staff guidance. Third, BA FCB staff said that they lack adequate numbers of dedicated personnel with engineering expertise to engage in early coordination with sponsors and review the capabilities-based assessments that support the ISR capability proposals. For example, BA FCB officials related that they have 12 authorized positions to carry out the BA FCB's responsibilities, but, as of early December 2007, they had 7 assigned personnel—representing a fill rate of 58 percent—with only 4 or 5 of these devoted full-time to BA FCB duties. BA FCB officials also stated that representatives from DOD components who attend BA FCB meetings in order to provide comments on new ISR capability proposals generally do so as a collateral duty, while other components may not send a regularly attending representative. Because the representatives who attend sometimes vary from meeting to meeting and are attending only as a collateral duty, BA FCB officials expressed concern about the ability of the BA FCB to most effectively review proposals for new ISR capabilities. 
Moreover, beyond reviewing proposals for new ISR capabilities, BA FCB officials have additional responsibilities, such as reviewing other JCIDS documents for ISR capabilities that are in more advanced stages of development and obtaining feedback from combatant commanders on warfighter needs. Determining the necessary workforce skills and competencies for achieving current and future needs is a key function of workforce planning. Without an assessment of the BA FCB's capabilities to perform its oversight activities related to the review of new ISR capability proposals and coordination with the sponsors, the BA FCB may not be well-positioned to fully carry out the task of promoting efficiencies in ISR capability development. Furthermore, Joint Staff officials stated that although the BA FCB has coordination and oversight responsibilities, it lacks the ability to correct stovepiped efforts that it identifies through the JCIDS process. For example, BA FCB officials described a recent case in which two proposals for similar environmental capabilities were submitted to the BA FCB by different sponsors. However, the BA FCB does not have the ability to require these two sponsors to work together on their respective capability proposals or to combine them, according to Joint Staff officials. Despite this, a Joint Staff official said the BA FCB is currently coordinating with these sponsors to try to increase efficiencies. The Joint Requirements Oversight Council approved both proposals, while directing the sponsors of each to work with a designated board to examine ways to make the programs more efficient, such as combining them. In addition, the sponsors have preliminarily agreed to merge their respective ISR programs during the next phase of the acquisition process. 
We are currently conducting a separate review of the JCIDS process that focuses on the extent to which the process has improved outcomes in weapons system acquisition programs, including structural factors, if any, that affect DOD’s ability to prioritize and balance capability needs. We expect our report based on this review to be issued later in 2008. Since the BA FCB did not conduct key oversight activities, including early coordination with sponsors and review of their assessments, neither the BA FCB nor the sponsors can be assured that the sponsors’ assessments have considered the full range of potential joint solutions to minimize inefficiency and redundancy in ISR capability development—a key aim of the JCIDS process. Moreover, without a readily available source of information about all existing and developmental ISR capabilities that might potentially fill a gap, the BA FCB and the sponsors lack a tool to facilitate departmentwide efficiencies when reviewing proposed ISR capabilities. Accordingly, the process for developing future ISR capabilities may not ensure identification of joint solutions for requirements. The BA FCB recommendations inform which ISR capability proposals are ultimately approved by the Chairman of the Joint Chiefs of Staff as being essential to DOD’s ability to fight and win future wars. After the Chairman approves ISR capability proposals, the military services and DOD organizations may begin the process of developing and acquiring the systems that deliver the validated capability. The systems, once acquired, will likely deliver capabilities not only to the warfighter, but also to the broader national intelligence community. 
Without effective oversight of ISR capability development, efficient solutions are likely to go unidentified, while new programs continue to move through development without sufficient knowledge, potentially resulting in unnecessary investment or cost increases and schedule delays further in the acquisition process that affect the entire ISR enterprise. As sponsors of proposed ISR capabilities each currently plan unique solutions to their similar needs, oversight is key to achieving efficiencies among proposed ISR capabilities at the outset of the capability development process. Congress and DOD have consistently emphasized the importance of DOD integrating its ISR activities across the defense and national intelligence components of the ISR enterprise. Increased integration of the ISR enterprise would help minimize capability redundancies and gaps and maximize capability effectiveness by improving communication across the defense and intelligence communities to leverage common investments for common missions. Although DOD has taken steps to improve the integration of ISR investments—such as by issuing the ISR Integration Roadmap and managing a departmentwide portfolio of ISR capabilities— these initiatives do not provide ISR decision makers with a clear vision of a future ISR enterprise and a unified investment approach to achieve that vision. Without a clear vision and a unified investment approach, ISR decision makers lack the key management tools they need to comprehensively identify what ISR investments DOD needs to make to achieve its strategic goals, evaluate tradeoffs between competing needs, and assess progress in achieving strategic goals. Thus, USD(I) and other senior DOD officials are not well-positioned to meet future ISR needs in a more integrated manner by exerting discipline over ISR spending to ensure progress toward strategic goals. 
Moreover, a long-term vision of a future ISR enterprise, consisting of a well-defined target architecture that depicts what ISR capabilities are needed to support strategic goals, would be useful not only to ISR decision makers evaluating tradeoffs between competing needs but also to sponsors developing proposals for new ISR capabilities. Without readily available information on existing and developmental ISR capabilities to assist the sponsors in developing the assessments and the BA FCB in reviewing them, neither the sponsors nor the BA FCB can be assured that these assessments have considered the full range of potential joint solutions to minimize inefficiency and redundancy in ISR capability development. Further, without a monitoring mechanism to ensure implementation of Joint Staff policy calling for early coordination between the BA FCB and the sponsors and for completion of capabilities-based assessments, the Joint Requirements Oversight Council may not receive complete assessments to support its decisions about the most efficient and effective proposed ISR capabilities to meet defense and national intelligence needs. Additionally, without consistent early coordination and thorough reviews of assessments, sponsors participating in DOD’s requirements identification process may not have an incentive to conduct thorough assessments and may focus their proposals on their individual needs without fully ensuring identification of joint solutions for requirements. Finally, without a needs assessment that reviews the BA FCB’s staffing levels, expertise, and workload to engage in early coordination with sponsors and review capabilities-based assessments and a plan, if needed, that addresses any identified shortfalls, the BA FCB may not be well-positioned to conduct oversight of potential ISR solutions to achieve optimum effectiveness and efficiency. 
Thus, DOD cannot be assured that it is developing the optimal mix of ISR capabilities to achieve its goals of better integrating the ISR enterprise.

We recommend the Secretary of Defense take the following four actions:

Direct the Under Secretary of Defense for Intelligence to develop a vision of a future ISR architecture that addresses a longer period of time than the 5-year ISR budget and is based on an independent analysis of expected future requirements and strategic goals. This architecture should be sufficiently detailed to inform a comprehensive assessment and prioritization of capability gaps and overlaps, to allow decision makers to evaluate tradeoffs between competing needs, and to assess progress in addressing capability gaps and overlaps in order to achieve ISR strategic goals.

Direct the Chairman of the Joint Chiefs of Staff and the Under Secretary of Defense for Intelligence to collaborate, with one of these organizations assigned as the lead, in developing a comprehensive source of information, which augments the ISR Integration Roadmap, on all existing and developmental ISR capabilities throughout the ISR enterprise for sponsors to use in conducting capabilities-based assessments and for the Battlespace Awareness Functional Capabilities Board to use in evaluating them.

Direct the Chairman of the Joint Chiefs of Staff to develop a supervisory review or other monitoring mechanism to ensure that (1) the Battlespace Awareness Functional Capabilities Board and the sponsors engage in early coordination to facilitate sponsors' consideration of existing and developmental ISR capabilities in developing their capabilities-based assessments, (2) capabilities-based assessments are completed, and (3) the Battlespace Awareness Functional Capabilities Board uses systematic procedures for reviewing the assessments. 
Direct the Chairman of the Joint Chiefs of Staff to (1) review the Battlespace Awareness Functional Capabilities Board's staffing levels, expertise, and workload to engage in early coordination with sponsors and review capabilities-based assessments, and (2) if shortfalls are identified, develop a plan that addresses any identified shortfalls of personnel, resources, or training, assigns responsibility for actions, and establishes time frames for implementing the plan.

We provided a draft of this report to DOD and the Office of the Director of National Intelligence. DOD provided written comments, in which it agreed or partially agreed with three recommendations and disagreed with one recommendation. DOD's comments are reprinted in their entirety in appendix II. In addition, both DOD and the Office of the Director of National Intelligence provided technical comments, which we have incorporated into the report as appropriate. DOD agreed with our recommendation to develop a vision of a future ISR architecture that addresses a longer period of time than the 5-year ISR budget and is based on an independent analysis of expected future requirements and strategic goals. The department stated that work is underway to develop a future ISR architecture, including a plan of action and milestones. DOD partially agreed with our recommendation to develop a comprehensive source of information on existing and developmental ISR capabilities. In its written comments, DOD agreed that such a source of information is needed to augment the ISR Integration Roadmap. However, DOD stated that the task of developing this comprehensive source of information to facilitate the identification of all capabilities throughout the ISR enterprise should be assigned to the Under Secretary of Defense for Intelligence, as the Battlespace Awareness Capability Portfolio Manager, rather than the Joint Staff as we recommended. 
We originally recommended that this task be directed to the Joint Staff because the need for such a comprehensive source of information was most evident in the difficulties in developing and reviewing ISR capability proposals as called for under the JCIDS review process, which is managed by the Joint Staff. We agree with DOD that the Under Secretary of Defense for Intelligence, who is responsible for both developing the ISR Integration Roadmap and leading the Battlespace Awareness capability portfolio management effort, is a key player in efforts to improve integration of future joint ISR capabilities and could be logically assigned leadership responsibilities for this task. We have modified this recommendation in the final report to clarify that the Secretary of Defense could assign leadership to either organization, in consultation with the other, to develop the comprehensive source of information that sponsors and the BA FCB need. In the draft report, we had included in this recommendation two actions that the Joint Staff could take to improve the process for identifying future ISR capabilities. In modifying this recommendation to reflect DOD’s comment that the Under Secretary of Defense for Intelligence could have the lead role in developing the information source, we moved these two actions to our third recommendation, thereby consolidating actions that the Joint Staff needs to take into one recommendation that considers key responsibilities within the JCIDS process. DOD partially agreed with our recommendation related to the need to ensure that (1) the Battlespace Awareness Functional Capabilities Board and the sponsors engage in early coordination to facilitate sponsors’ consideration of existing and developmental ISR capabilities in developing their capabilities-based assessments, (2) capabilities-based assessments are completed, and (3) the Battlespace Awareness Functional Capabilities Board uses systematic procedures for reviewing the assessments. 
In its written comments, DOD agreed that all three elements of this recommendation are needed but stated that changes in guidance were not needed. Our recommendation did not specifically call for additional guidance but was focused on the need to execute existing guidance. For example, as the report describes, Joint Staff policy calls for the sponsors and Functional Capabilities Board to work together during the analysis process, but the sponsors of the proposals we reviewed and the BA FCB did not consistently engage in this coordination. In addition, although Joint Staff policy gives the BA FCB responsibility for providing oversight of potential solutions to achieve optimum effectiveness and efficiency in ISR capability development, we found that the BA FCB did not systematically review capabilities-based assessments as a means of providing such oversight. In response to DOD’s comments, we modified this recommendation to clarify that DOD should ensure compliance with its existing guidance by developing a monitoring mechanism that would ensure that early coordination takes place and that capabilities-based assessments are completed and reviewed. In its comments, the department also stated that our report is misleading because we evaluated some programs initiated prior to the genesis of JCIDS. As our report describes, the scope of our review included 19 ISR capability proposals that were introduced only after the implementation of JCIDS in 2003. We noted that some of these proposals used analysis conducted prior to the implementation of JCIDS as a substitute for the capabilities-based assessment that is required by the JCIDS process. However, we were unable to apply JCIDS criteria to evaluate them because these proposals did not have capabilities-based assessments. 
In addition, our recommendation to ensure that capabilities-based assessments are completed was based on our observations of all 19 ISR capability proposals, including not only the 12 proposals that lacked capabilities-based assessments but also the 7 proposals whose assessments varied in rigor and completeness. DOD disagreed with our recommendation that the department (1) review the BA FCB's staffing levels, expertise, and workload to engage in early coordination with sponsors and review capabilities-based assessments, and (2) if shortfalls of personnel, resources, or training are identified, develop a plan to address them, including assigning responsibility for actions and establishing time frames for implementing the plan. In its written comments, the department stated that Joint Staff policy clearly defines the roles and responsibilities of the sponsors and Functional Capabilities Boards. We agree that Joint Staff policy defines roles and responsibilities of these groups, and we note that this policy assigns responsibility to both the sponsors and the Functional Capabilities Board to coordinate with each other. Contrary to DOD's statement in its comments, we did not recommend that further policy direction was needed. DOD also noted that it had conducted a review of Functional Capabilities Board personnel and resources in fiscal year 2007, which did not identify deficiencies. However, workload issues and lack of technical skills among staff were mentioned to us by defense officials as reasons why early coordination and reviews were not being systematically performed as part of the BA FCB's oversight function—a key function called for in Joint Staff policy. Therefore, in light of our finding that the BA FCB did not fully implement these key oversight activities, we continue to believe that the department should reconsider whether the BA FCB has the appropriate number of staff with the appropriate skills to fully implement these oversight activities. 
As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to interested congressional committees; the Secretary of Defense; the Under Secretary of Defense for Intelligence; the Chairman of the Joint Chiefs of Staff; the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; the Office of the Director of National Intelligence; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

To describe the challenges, if any, that the Department of Defense (DOD) faces in working to achieve an integrated ISR enterprise, we reviewed documents on the operation of DOD's ISR enterprise and the national intelligence community and discussed the ISR enterprise and its complexities with a variety of defense-related intelligence organizations, as well as with the Office of the Director of National Intelligence. Specifically, we discussed coordination challenges faced by components of DOD's ISR enterprise with officials from the Office of the Under Secretary of Defense for Intelligence, Arlington, Va.; the Joint Staff, Arlington, Va.; the National Security Space Office, Fairfax, Va.; U.S. 
Strategic Command’s Joint Functional Component Command for ISR, Washington, D.C.; the Defense Intelligence Agency, Washington, D.C.; the National Geospatial-Intelligence Agency, Reston, Va.; the National Security Agency, Annapolis Junction, Md.; and the Office of the Director of National Intelligence, Washington, D.C. To assess DOD's management approach for improving integration of future ISR investments, we reviewed DOD's ISR Integration Roadmap and other ISR integration efforts within DOD. We compared DOD's ISR Integration Roadmap to key elements of an enterprise architecture to determine whether the Roadmap, in whole or in part, met these key elements. We identified these key elements by reviewing DOD and federal guidance on enterprise architecture best practices, specifically the Department of Defense Architecture Framework and the Chief Information Officer Council's Practical Guide to Federal Enterprise Architecture. In addition, we reviewed the implementation of the Battlespace Awareness capability portfolio management test case led by the Office of the Under Secretary of Defense for Intelligence. We compared these efforts to portfolio management best practices we identified by reviewing our past work on this subject. We also obtained information from and discussed DOD's ISR Integration Roadmap and DOD ISR integration efforts and challenges with senior officials from the Office of the Secretary of Defense, Arlington, Va.; the Joint Staff, Arlington, Va.; the Office of the Under Secretary of Defense for Intelligence, Arlington, Va.; the Office of the Assistant Secretary of Defense for Networks and Information Integration, Arlington, Va.; the National Security Space Office, Fairfax, Va.; U.S. Strategic Command's Joint Functional Component Command for ISR, Washington, D.C.; the Defense Intelligence Agency, Washington, D.C.; and the Office of the Director of National Intelligence, Washington, D.C. 
To evaluate the extent to which DOD has implemented key activities within the Joint Capabilities Integration and Development System (JCIDS) to ensure that proposed new ISR capabilities fill gaps, are not duplicative, and use a joint approach to filling warfighters' needs based on a thorough analysis of existing capabilities, we identified 19 ISR capability proposals, described in table 1, that were submitted to the Joint Staff since the implementation of JCIDS in 2003 and for which the Battlespace Awareness Functional Capabilities Board was designated the lead Functional Capabilities Board. In total, there were 20 ISR capability proposals that met these criteria; however, 1 of the 20 proposals, along with its underlying capabilities-based assessment, was highly classified and, since we did not have the appropriate security clearances, we did not review this proposal. For the remaining 19 ISR capability proposals, we evaluated the extent to which they were generated and validated in accordance with Joint Staff policies and procedures. Specifically, for each of the 19 capability proposals, we obtained capabilities-based assessments or other JCIDS analysis documents that were produced by sponsors of these ISR capability proposals, and we performed a document review of the 7 ISR capability proposals that included a capabilities-based assessment, using a data collection instrument based on applicable versions of the Chairman of the Joint Chiefs of Staff Instruction 3170.01, Joint Capabilities Integration and Development System. In conducting this document review, we considered whether these JCIDS analysis documents showed evidence of the following elements: (1) a full review conducted, (2) cost information included, (3) consideration of the full range of existing and developmental-stage ISR assets, (4) consideration of modifications as potential solutions, and (5) consideration of potential redundancies. 
The results of this analysis are shown in figure 5 of this report. Our specific methodology for this analysis is as follows: To determine whether a full review had been conducted, we determined whether a Functional Needs Analysis (FNA) and Functional Solution Analysis (FSA) existed and whether they flowed from a Functional Area Analysis (FAA) and FNA, respectively. As generally described in Joint Staff guidance, an FAA identifies the operational tasks, conditions, and standards needed to achieve military objectives. An FNA assesses the ability of current and planned systems to deliver the capabilities and tasks identified in the FAA in order to produce a list of capability gaps and identify redundancies. An FSA identifies joint approaches to fill the identified capability gaps. To determine whether cost information was included, we reviewed whether the FSA considered costs of the proposed solutions. As generally described in Joint Staff guidance, the FSA must evaluate the cost to develop and procure materiel approaches compared to the cost of sustaining an existing capability. To determine whether the full range of existing and developmental-stage ISR assets was considered, we reviewed whether the FSA considered interagency or foreign materiel solutions and whether the FNA or FSA considered the full range of joint solutions. We defined the full range of joint solutions as including strategic, operational, and tactical ISR assets as well as developing or recently developed ISR systems. 
As generally described in Joint Staff policy, the FNA assesses the entire range of doctrine, organization, training, materiel, logistics, personnel, and facilities and policy as an inherent part of defining capability needs, and the FSA assesses all potential materiel and non-materiel ways to fill capability gaps as identified by the FNA, including changes that leverage existing materiel capabilities, product improvements, and adoption of interagency or foreign materiel solutions. To determine whether modifications were considered as potential solutions, we reviewed whether the FSA considered using existing systems differently or modifying policies and processes. As generally described in Joint Staff guidance, the FSA is to identify combinations of materiel and non-materiel approaches and examine additional approaches by conducting market research to determine whether commercial or non-developmental items are available or could be modified to meet the desired capability. To determine whether potential redundancies were considered, we reviewed whether either the FNA or the FSA identified potentially redundant ISR capabilities. As generally described in Joint Staff guidance, an FNA should describe a capability overlap by comparing desired functions with current capabilities. However, we considered the capabilities-based assessment as having identified potential redundancies if such redundancies were included in either the FNA or FSA. We identified the above elements by analyzing current and superseded versions of the Joint Staff instruction on the JCIDS process—specifically, the Chairman of the Joint Chiefs of Staff Instruction 3170.01, Joint Capabilities Integration and Development System—to determine the changes over time and the criteria common to all versions. 
Further, we reviewed the following policies and procedures related to the validation of ISR capabilities through JCIDS: Chairman of the Joint Chiefs of Staff Instruction 5123.01, Charter of the Joint Requirements Oversight Council; Chairman of the Joint Chiefs of Staff Instruction 3137.01, The Functional Capabilities Board Process; Chairman of the Joint Chiefs of Staff Instruction 3170.01, Joint Capabilities Integration and Development System; and Chairman of the Joint Chiefs of Staff Manual 3170.01, Operation of the Joint Capabilities Integration and Development System. In order to conduct this review of JCIDS policies and procedures, we included in our scope the current and superseded versions of these guidance documents; accordingly, we reviewed all instructions and manuals relevant to DOD's JCIDS process that were in effect at some point between the publication of the initial JCIDS instruction (Chairman of the Joint Chiefs of Staff Instruction 3170.01A, dated June 24, 2003) and the conclusion of our review (March 2008). In addition, we obtained insight into the procedures and challenges associated with validating proposals for new ISR capabilities through discussions with officials from the Office of the Under Secretary of Defense for Intelligence, Arlington, Va.; the Joint Staff, Arlington, Va.; the Battlespace Awareness Functional Capabilities Board, Arlington, Va.; and the sponsors of the 19 ISR capability proposals that we reviewed. The sponsors with whom we spoke were officials from the Air Force; Army; Navy; Marine Corps; U.S. Special Operations Command; U.S. Joint Forces Command; Defense Intelligence Agency; National Geospatial-Intelligence Agency; and National Security Agency. In addition to the contact named above, Margaret G. Morgan, Assistant Director; Catherine H. Brown; Gabrielle A. Carrington; Frank Cristinzio; Grace Coleman; Jay Smale; and Karen Thornton made key contributions to this report.
The Department of Defense's (DOD) intelligence, surveillance, and reconnaissance (ISR) capabilities—such as satellites and unmanned aircraft systems—are crucial to military operations, and demand for ISR capabilities has increased. For example, DOD plans to invest $28 billion over the next 7 years in 20 airborne ISR systems alone. Congress directed DOD to fully integrate its ISR capabilities, also known as the ISR enterprise, as it works to meet current and future ISR needs. GAO was asked to (1) describe the challenges, if any, that DOD faces in integrating its ISR enterprise, (2) assess DOD's management approach for improving integration of its future ISR investments, and (3) evaluate the extent to which DOD has implemented key activities to ensure proposed new ISR capabilities fill gaps, are not duplicative, and use a joint approach to meeting warfighters' needs. GAO assessed DOD's integration initiatives and 19 proposals for new ISR capabilities. GAO supplemented this analysis with discussions with DOD officials.

DOD faces a complex and challenging environment in supporting defense requirements for ISR capabilities as well as national intelligence efforts. Past efforts to improve integration across DOD and national intelligence agencies have been hampered by the diverse missions and different institutional cultures of the many intelligence agencies that DOD supports. For example, DOD had difficulty obtaining complete information on national ISR assets that could support military operations because of security classifications of other agency documents. Further, different funding arrangements for defense and national intelligence activities complicate integration of interagency activities. While DOD develops the defense intelligence budget, some DOD activities also receive funding through the national intelligence budget to provide support for national intelligence efforts. Disagreements about equitable funding from each budget have led to program delays. 
Separate military and intelligence requirements identification processes also complicate efforts to integrate future ISR investments. DOD does not have a clearly defined vision of a future ISR enterprise to guide its ISR investments. DOD has taken a significant step toward integrating its ISR activities by developing an ISR Integration Roadmap that includes existing and currently planned ISR systems. However, the Roadmap does not provide a long-term view of what capabilities are required to achieve strategic goals or provide detailed information that would make it useful as a basis for deciding among alternative investments. Without a clear vision of the desired ISR end state and sufficient detail on existing and planned systems, DOD decision makers lack a basis for determining where additional capabilities are required, prioritizing investments, or assessing progress in achieving strategic goals, as well as identifying areas where further investment may not be warranted. DOD policy calls for the services and agencies that sponsor proposals for new ISR capabilities to conduct comprehensive assessments of current and planned ISR systems, but GAO's review of 19 proposals showed that 12 sponsors did not complete assessments, and the completeness of the remaining 7 sponsors' assessments varied. GAO found that the DOD board charged with reviewing ISR proposals did not consistently coordinate with sponsors to ensure the quality of the assessments supporting their proposals or review the completed assessments. There were three key reasons for this. First, the board did not have a comprehensive, readily available source of information about existing and developmental ISR capabilities that could help identify alternatives to new systems. Second, the board has no monitoring mechanism to ensure that key activities are fully implemented. 
Third, DOD board officials said that the board lacks adequate numbers of dedicated, skilled personnel to engage in early coordination with sponsors and to review sponsors' assessments. Without more complete information on alternatives and a monitoring mechanism to ensure these key activities are fully implemented, DOD is not in the best position to ensure that investment decisions are consistent with departmentwide priorities.
SCI refers to classified information concerning or derived from intelligence sources, methods, or analytical processes requiring exclusive handling within formal access control systems established by the Director of Central Intelligence. The Central Intelligence Agency (CIA) is responsible for adjudicating and granting all EOP requests for SCI access. According to the EOP Security Office, between January 1993 and May 1998, the CIA granted about 840 EOP employees access to SCI. Executive Order 12958, Classified National Security Information, prescribes a uniform system for classifying, safeguarding, and declassifying national security information and requires agency heads to promulgate implementing procedures ensuring that classified material is properly safeguarded and to establish and maintain a security self-inspection program of their classified activities. The order also gives the Director, Information Security Oversight Office (an organization under the National Archives and Records Administration), the authority to conduct on-site security inspections of EOP’s and other executive branch agencies’ classified programs. Office of Management and Budget Circular Number A-123, Management Accountability and Control, emphasizes the importance of having clearly documented and readily available procedures as a means to ensure that programs achieve their intended results. Director of Central Intelligence Directive 1/14, Personnel Security Standards and Procedures Governing Eligibility for Access to Sensitive Compartmented Information, lays out the governmentwide eligibility standards and procedures for access to SCI by all U.S. citizens, including government civilian and military personnel, contractors, and employees of contractors. 
The directive requires (1) the employing agency to determine that the individual has a need to know; (2) the cognizant Senior Official of the Intelligence Community to review the individual’s background investigation and reach a favorable suitability determination; and (3) the individual, once approved by the Senior Official of the Intelligence Community for SCI access, to sign an SCI nondisclosure agreement. Additional guidance concerning SCI eligibility is contained in Executive Order 12968, the U.S. Security Policy Board investigative standards and adjudicative guidelines implementing Executive Order 12968, and Director of Central Intelligence Directive 1/19. Governmentwide standards and procedures for safeguarding SCI material are contained in Director of Central Intelligence Directive 1/19, Security Policy for Sensitive Compartmented Information and Security Policy Manual. The EOP Security Office is part of the Office of Administration. The Director of the Office of Administration reports to the Assistant to the President for Management and Administration. The EOP Security Officer is responsible for formulating and directing the execution of security policy, reviewing and evaluating EOP security programs, and conducting security indoctrinations and debriefings for agencies of the EOP. Additionally, each of the nine EOP offices we reviewed has a security officer who is responsible for that specific office’s security program. As discussed with your office, we reviewed EOP procedures but did not verify whether the procedures were followed in granting SCI access to EOP employees, review EOP physical security practices for safeguarding classified material, conduct classified document control and accountability inspections, or perform other control tests of classified material over which the EOP has custody. (See pp. 8 and 9 for a description of our scope and methodology.) 
The EOP Security Officer told us that, for the period January 1993 until June 1996, (1) he could not find any EOP-wide procedures for acquiring access to SCI for the White House Office, the Office of Policy Development, the Office of the Vice President, the National Security Council, and the President’s Foreign Intelligence Advisory Board for which the former White House Security Office provided security support and (2) there were no EOP-wide procedures for acquiring access to SCI for the Office of Science and Technology Policy, the Office of the United States Trade Representative, the Office of National Drug Control Policy, and the Office of Administration for which the EOP Security Office provides security support. He added that there had been no written procedures for acquiring SCI access within the EOP since he became the EOP Security Officer in 1986. In contrast, we noted that two of the nine EOP offices we reviewed issued office-specific procedures that make reference to acquiring access to SCI—the Office of Science and Technology Policy in July 1996 and the Office of the Vice President in February 1997. According to the EOP Security Officer, draft EOP-wide written procedures for acquiring access to SCI were completed in June 1996 at the time the White House and EOP Security Offices merged. These draft procedures, entitled Security Procedures for the EOP Security Office, were not finalized until March 1998. While the procedures discuss the issuance of EOP building passes, they do not describe in detail the procedures EOP offices must follow to acquire SCI access; the roles and responsibilities of the EOP Security Office, security staffs of the individual EOP offices, and the CIA and others in the process; or the forms and essential documentation required before the CIA can adjudicate a request for SCI access. 
Moreover, the procedures do not address the practices that National Security Council security personnel follow to acquire SCI access for their personnel. For example, unlike the process for acquiring SCI access in the other eight EOP offices we reviewed, National Security Council security personnel (rather than the personnel in the EOP Security Office) conduct the employee pre-employment security interview; deal directly with the CIA to request SCI access; and, once the CIA approves an employee for access, conduct the SCI security indoctrination and oversee the individual’s signing of the SCI nondisclosure agreement. Director of Central Intelligence Directives 1/14 and 1/19 require that access to SCI be controlled under the strictest application of the need-to-know principle and in accordance with applicable personnel security standards and procedures. In exceptional cases, the Senior Official of the Intelligence Community or his designee (the CIA in the case of EOP employees) may, when it is in the national interest, authorize an individual access to SCI prior to completion of the individual’s security background investigation. At least since July 1996, according to the National Security Council’s security officer, his office has granted temporary SCI access to government employees and individuals from private industry and academia—before completion of the individual’s security background investigation and without notifying the CIA. He added, however, that this practice has occurred only on rare occasions to meet urgent needs. He said that this practice was also followed prior to July 1996 but that no records exist documenting the number of instances and the parties the National Security Council may have granted temporary SCI access to prior to this date. 
CIA officials responsible for adjudicating and granting EOP requests for SCI access told us that the CIA did not know about the National Security Council’s practice of granting temporary SCI access until our review. A senior EOP official told us that from July 1996 through July 1998, the National Security Council security officer granted 35 temporary SCI clearances. This official also added that, after recent consultations with the CIA, the National Security Council decided in August 1998 to refer temporary SCI clearance determinations to the CIA. The EOP-wide security procedures issued in March 1998 do not set forth security practices EOP offices are to follow in safeguarding classified information. In contrast, the Office of Science and Technology Policy and the Office of the Vice President had issued office-specific security procedures that deal with safeguarding SCI material. The Office of Science and Technology Policy procedures, issued in July 1996, were very comprehensive. They require that new employees be thoroughly briefed on their security responsibilities, advise staff on their responsibilities for implementing the security aspects of Executive Order 12958, and provide staff specific guidance on document accountability and other safeguard practices involving classified information. The remaining seven EOP offices that did not have office-specific procedures for safeguarding SCI and other classified information stated that they rely on Director of Central Intelligence Directive 1/19 for direction on such matters. Executive Order 12958 requires the head of agencies that handle classified information to establish and maintain a security self-inspection program. 
The order contains guidelines (which agency security personnel may use in conducting such inspections) on reviewing relevant security directives and classified material access and control records and procedures, monitoring agency adherence to established safeguard standards, assessing compliance with controls for access to classified information, verifying whether agency special access programs provide for the conduct of internal oversight, and assessing whether controls to prevent unauthorized access to classified information are effective. Neither the EOP Security Office nor the security staff of the nine EOP offices we reviewed have conducted security self-inspections as described in the order. EOP officials pointed out that security personnel routinely conduct daily desk, safe, and other security checks to ensure that SCI and other classified information is properly safeguarded. These same officials also emphasized the importance and security value in having within each EOP office experienced security staff responsible for safeguarding classified information. While these EOP security practices are important, the security self-inspection program as described in Executive Order 12958 provides for a review of security procedures and an assessment of security controls beyond EOP daily security practices. Executive Order 12958 gives the Director, Information Security Oversight Office, authority to conduct on-site reviews of each agency’s classified programs. The Director of the Information Security Oversight Office said his office has never conducted an on-site security inspection of EOP classified programs. He cited a lack of sufficient personnel as the reason for not doing so and added that primary responsibility for oversight should rest internally with the EOP and other government agencies having custody of classified material. 
The Director’s concern with having adequate inspection staff and his view on the primacy of internal oversight do not diminish the need for an objective and systematic examination of EOP classified programs by an independent party. An independent assessment of EOP security practices by the Information Security Oversight Office could have brought to light the security concerns raised in this report. To improve EOP security practices, we recommend that the Assistant to the President for Management and Administration direct the EOP Security Officer to revise the March 1998 Security Procedures for the EOP Security Office to include comprehensive guidance on the procedures EOP offices must follow in (1) acquiring SCI access for their employees and (2) safeguarding SCI material, and to establish and maintain a self-inspection program of EOP classified programs, including SCI, in accordance with provisions in Executive Order 12958. We recommend further that, to properly provide for external oversight, the Director, Information Security Oversight Office, develop and implement a plan for conducting periodic on-site security inspections of EOP classified programs. We provided the EOP, the Information Security Oversight Office, and the CIA a copy of the draft report for their review and comment. The EOP and the Information Security Oversight Office provided written comments, which are reprinted in their entirety as appendixes I and II, respectively. The CIA did not provide comments. In responding for the EOP, the Assistant to the President for Management and Administration stated that our report creates a false impression that the security procedures the EOP employs are lax and inconsistent with established standards. This official added that the procedures for regulating personnel access to classified information are Executive Order 12968 and applicable Security Policy Board guidelines, and that Executive Order 12968 and Executive Order 12958 govern safeguarding such information. 
The Assistant to the President also stated that the report suggests that the EOP operated in a vacuum because the EOP written security procedures implementing Executive Order 12968 were not issued until March 1998. The official noted that EOP carefully followed the President’s executive orders, Security Policy Board guidelines and applicable Director of Central Intelligence Directives during this time period. While the EOP disagreed with the basis for our recommendation, the Assistant to the President stated that EOP plans to supplement its security procedures with additional guidance. We agree that the executive orders, Security Policy Board guidelines, and applicable Director of Central Intelligence Directives clearly lay out governmentwide standards and procedures for access to and safeguarding of SCI. However, they are not a substitute for local operating procedures that provide agency personnel guidance on how to implement the governmentwide procedures. We believe that EOP’s plan to issue supplemental guidance could strengthen existing procedures. The Assistant to the President also stated that it is not accurate to say that the EOP has not conducted security self-inspections. This official stated that our draft report acknowledges that “security personnel conduct daily desk, safe, and other security checks to ensure that SCI and other classified material is properly safeguarded.” The Assistant to the President is correct to point out the importance of daily physical security checks as an effective means to help ensure that classified material is properly safeguarded. However, such self-inspection practices are not meant to substitute for a security self-inspection program as described in Executive Order 12958. Self-inspections as discussed in the order are much broader in scope than routine daily safe checks. 
The order’s guidelines discuss reviewing relevant security directives and classified material access and control records and procedures, monitoring agency adherence to established safeguard standards, assessing compliance with controls for access to classified information, verifying whether agency special access programs (such as SCI) provide for the conduct of internal oversight, and assessing whether controls to prevent unauthorized access to classified information are effective. Our report recommends that the EOP establish a self-inspection program. In commenting on our recommendation, the Assistant to the President said that to enhance EOP security practices, the skilled assistance of the EOP Security Office staff are being made available to all EOP organizations to coordinate and assist where appropriate in agency efforts to enhance self-inspection. We believe EOP security practices would be enhanced if this action were part of a security self-inspection program as described in Executive Order 12958. The Director, Information Security Oversight Office noted that our report addresses important elements of the SCI program in place within the EOP and provides helpful insights for the security community as a whole. The Director believes that we overemphasize the need to create EOP specific procedures for handling SCI programs. He observed that the Director of Central Intelligence has issued governmentwide procedures on these matters and that for the EOP to prepare local procedures would result in unnecessary additional rules and expenditure of resources and could result in local procedures contrary to Director of Central Intelligence Directives. As we discussed above, we agree that the executive orders, Security Policy Board guidelines, and applicable Director of Central Intelligence Directives clearly lay out governmentwide standards and procedures for access to and safeguarding of SCI. 
However, they are not a substitute for local operating procedures that provide agency personnel guidance on how to implement the governmentwide procedures. The Director agreed that his office needs to conduct on-site security inspections and hopes to begin the inspections during fiscal year 1999. The Director also noted that the primary focus of the inspections would be classification management and not inspections of the SCI program. To identify EOP procedures for acquiring access to SCI and safeguarding such information, we met with EOP officials responsible for security program management and discussed their programs. We obtained and reviewed pertinent documents concerning EOP procedures for acquiring SCI access and safeguarding such information. In addition, we obtained and reviewed various executive orders, Director of Central Intelligence Directives, and other documents pertaining to acquiring access to and safeguarding SCI material. We also discussed U.S. government security policies pertinent to our review with officials of the Information Security Oversight Office and the U.S. Security Policy Board. Additionally, we met with officials of the CIA responsible for adjudicating and granting EOP employees SCI access and discussed the CIA procedures for determining whether an individual meets Director of Central Intelligence Directive eligibility standards. As discussed with your office, we did not verify whether proper procedures were followed in granting SCI access to the approximately 840 EOP employees identified by the EOP Security Officer. Also, we did not review EOP physical security practices for safeguarding SCI and other classified material, conduct classified document control and accountability inspections, or perform other control tests of SCI material over which the EOP has custody. We performed our review from January 1998 until August 1998 in accordance with generally accepted government auditing standards. 
At your request, we plan no further distribution of this report until 30 days after its issue date. At that time, we will provide copies to appropriate congressional committees; the Chief of Staff to the President; the Assistant to the President for Management and Administration; the Director, Information Security Oversight Office; the Director of Central Intelligence; Central Intelligence Agency; the U.S. Security Policy Board; the Director of the Office of Management and Budget; and other interested parties. Please contact me at (202) 512-3504 if you or your staff have any questions concerning this report. Major contributors to this report were Gary K. Weeter, Assistant Director, and Tim F. Stone, Evaluator-in-Charge. The following is GAO’s comment to the Assistant to the President for Management and Administration’s letter dated September 23, 1998. 1. A representative of the Executive Office of the President (EOP) told us that the errors referred, for example, to statements in our draft report that the EOP does not conduct self-inspections and that the EOP lacks written procedures.
Pursuant to a congressional request, GAO reviewed whether the Executive Office of the President (EOP) has established procedures for: (1) acquiring personnel access to classified intelligence information, specifically sensitive compartmented information (SCI); and (2) safeguarding such information. GAO noted that: (1) the EOP Security Officer told GAO that, for the period January 1993 until June 1996: (a) he could not find any EOP-wide procedures for acquiring access to SCI for the White House Office, the Office of Policy Development, the Office of the Vice President, the National Security Council, and the President's Foreign Intelligence Advisory Board for which the former White House Security Office provided security support; and (b) there were no EOP-wide procedures for acquiring access to SCI for the Office of Science and Technology Policy, the Office of the United States Trade Representative, the Office of National Drug Control Policy, and the Office of Administration for which the EOP security office provides security support; (2) the EOP-wide security procedures issued in March 1998 do not set forth security practices EOP offices are to follow in safeguarding classified information; (3) in contrast, the Office of Science and Technology Policy and the Office of the Vice President had issued office-specific security procedures that deal with safeguarding SCI material; (4) the remaining seven EOP offices that did not have office-specific procedures for safeguarding SCI and other classified information stated that they rely on Director of Central Intelligence Directive 1/19 for direction on such matters; (5) neither the EOP Security Office nor the security staff of the nine EOP offices GAO reviewed have conducted security self-inspections as described in Executive Order 12958; (6) EOP officials pointed out that security personnel routinely conduct daily desk, safe, and other security checks to ensure that SCI and other classified information is properly 
safeguarded; (7) these same officials also emphasized the importance and security value in having within each EOP office experienced security staff responsible for safeguarding classified information; (8) Executive Order 12958 gives the Director, Information Security Oversight Office, authority to conduct on-site reviews of each agency's classified programs; and (9) the Director of the Information Security Oversight Office said his office has never conducted an on-site security inspection of EOP classified programs.
The National Defense Authorization Act (NDAA) for Fiscal Year 2010 requires that DOD develop and maintain the FIAR Plan. The FIAR Plan must include specific actions to take and the costs associated with correcting the financial management deficiencies that impair DOD’s ability to prepare reliable and timely financial management information and ensure that its financial statements are validated as ready for audit by September 30, 2017. Further, the NDAA for Fiscal Year 2014 mandates an audit of DOD’s fiscal year 2018 department-wide financial statements and submission of those results to Congress by March 31, 2019. Since DOD management relies heavily on budget information for day-to-day management decisions, the DOD Comptroller designated the SBR as an audit priority. In response to difficulties encountered in preparing for an SBR audit, DOD reduced the scope of initial SBR audits beginning in fiscal year 2015 to focus on current-year budget activity reported on a Schedule of Budgetary Activity. This is an interim step toward achieving an audit of multiple-year budget activity required for an audit of the SBR. In the last quarter of fiscal year 2014, each military service, including the Navy, asserted audit readiness on its General Fund Schedule of Budgetary Activity. The fiscal year 2015 Schedule of Budgetary Activity reflects the balances and associated activity related only to budgetary authority on or after October 1, 2014. In December 2014, DOD contracted with an IPA for the audit of the Navy’s fiscal year 2015 Schedule of Budgetary Activity. In February 2016, the IPA issued a disclaimer of opinion on the Navy’s Schedule of Budgetary Activity and identified pervasive control deficiencies in the Navy’s decentralized financial management and information technology environment. 
Navy management concurred with the findings in the IPA’s report and stated that it will develop and execute corrective actions to address the IPA’s recommendations, including those related to FBWT. In May 2010, the DOD Comptroller issued the FIAR Guidance to provide a standard methodology for DOD components to follow in developing an audit strategy and implementing FIPs. The FIAR Guidance defines DOD’s strategy, goals, roles and responsibilities, and procedures for the components to become audit ready. Specifically, the guidance provides a standard methodology that DOD components are required to follow in developing and implementing FIPs. These plans, in turn, provide a framework for planning, performing, documenting, and monitoring efforts to achieve auditability. To manage their improvement efforts, components may develop multiple FIPs, including plans related to specific assessable units, which can be information systems supporting financial statement line items or other discrete portions of the program. The FIAR Guidance describes the five audit readiness phases and activities that DOD reporting entities (including the Navy) should include in their FIPs. The five audit readiness phases are Discovery, Corrective Action, Assertion/Evaluation, Validation, and Audit. Each phase includes multiple tasks and activities that reporting entities should complete and the corresponding required deliverables. (App. II identifies the detailed FIAR tasks and required deliverables for each of these phases.) Most of the audit readiness process occurs in the Discovery and Corrective Action phases. In the Discovery phase, entities document their processes and identify, test, and assess their controls and evaluate and confirm the existence of documentation supporting relevant financial statement assertions. In the Corrective Action phase, entities develop and execute plans to address identified deficiencies and verify implementation of corrective actions. 
In the last three phases, a reporting entity (1) asserts its audit readiness, (2) has its assertion independently validated, and (3) employs an IPA firm to perform a financial audit. The Navy asserted audit readiness but did not complete the Corrective Action phase or validate the assertion because of time constraints and moved directly into an audit of its Schedule of Budgetary Activity by an IPA firm. The Navy’s FBWT FIP provides a framework for planning, executing, and tracking essential steps with supporting documentation to achieve audit readiness for its FBWT. Figure 1 shows important milestones and events in Navy’s FBWT FIP and overall audit readiness efforts. In April 2013, the Navy asserted that its FBWT process was audit ready. The scope of the Navy’s audit readiness assertion began with the feeder systems that provide collection and disbursement transactions and extended through posting these transactions to DFAS’s Defense Departmental Reporting System - Budgetary (DDRS-B). At the time of the Navy’s assertion in April 2013, the Navy reported that 21 of 33 key controls that it had identified for FBWT were operating effectively. The Navy further reported that it had developed corrective action plans for the remaining 12 controls and the corrective actions were under way. Subsequent to the Navy’s assertion, in May 2013, the DOD OIG initiated a review of the Navy’s FBWT assertion to determine audit readiness. Because of the significance of its findings, including that the Navy did not include in its assertion all significant systems affecting the FBWT line item, the DOD OIG did not complete its review. In August 2013, the DOD OIG provided the Navy with informal feedback, including areas for improving its FBWT audit readiness. 
In February 2014, in response to the DOD OIG’s feedback and to support the FBWT-related line items on its financial statements, the Navy expanded the scope of its FBWT audit readiness efforts to include all systems affecting the financial statement line items, including the Defense Departmental Reporting System - Audited Financial Statements (DDRS-AFS) system. Although the Navy expanded the scope of FBWT audit readiness efforts through financial statement compilation and reporting, it did not reassert FBWT audit readiness. In March 2014, the DOD OIG began another engagement, which involved a review of the Navy’s FBWT reconciliations. A DOD OIG official told us that this engagement did not include a review of the Navy’s FBWT assertion, but rather was to determine whether the Navy’s FBWT reconciliation was effective, supportable, and sustainable. In September 2014, DOD officials announced that all military services and many defense agencies had asserted General Fund Schedule of Budgetary Activity audit readiness. In December 2014, a contract was signed with an IPA to audit the Navy’s fiscal year 2015 General Fund Schedule of Budgetary Activity. For fiscal year 2015, the Navy’s Schedule of Budgetary Activity and FBWT were based on funding from 19 General Fund appropriation accounts. The IPA was to determine whether the Schedule of Budgetary Activity and related notes were fairly presented, in all material respects, in accordance with U.S. generally accepted accounting principles. In January 2015, after the Navy performed additional internal control testing, it reduced the total number of key internal controls for FBWT from 33 to 31. At that time, the Navy reported that 25 of the 31 controls were deemed to be operating effectively and that corrective actions were under way for the remaining 6 key controls. In April 2015, the DOD OIG issued a report on the process the Navy uses to reconcile its FBWT accounts. 
Reconciling FBWT activity records with the Department of the Treasury (Treasury) is similar to reconciling a checkbook to a bank statement. The Treasury Financial Manual requires agencies to reconcile their FBWT accounts to Treasury balances on a monthly basis. In its report, the DOD OIG noted several findings, including that the Navy (1) did not use general ledger data as source data for FBWT reporting, (2) had difficulty identifying the universe of transactions supporting the FBWT balance, and (3) may have used unreliable computer-processed data from two FBWT-related systems with reported significant deficiencies in internal controls. Further, in February 2016, the IPA issued a disclaimer of opinion on the Navy’s Schedule of Budgetary Activity and identified material weaknesses in internal control. One of the material weaknesses included controls over FBWT reporting and reconciliations, including the Navy’s related controls over its third-party service provider. As noted above, Navy management concurred with the findings in the IPA’s report and stated that it will develop and execute corrective actions to address the IPA’s recommendations. Going forward, DOD’s goals are to assert audit readiness for existence and completeness of its mission-critical assets by June 2016 and to assert full financial statement audit readiness by September 30, 2017. FBWT audit readiness is a step in achieving full financial statement audit readiness. Although the Navy included all the required audit tasks for the Discovery phase in developing its FBWT FIP, it did not fully implement certain required activities within these tasks in accordance with the applicable FIAR Guidance. These included activities in all four key tasks of the Discovery phase, which requires the Navy to (1) perform statement-to-process (process) analysis, (2) prioritize audit readiness efforts, (3) assess and test internal controls, and (4) evaluate supporting documentation. 
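The checkbook-style reconciliation described above, in which an agency compares its own recorded FBWT activity against the balance Treasury reports, can be sketched as follows. The account, document numbers, and amounts are illustrative, not the Navy's actual data.

```python
from decimal import Decimal

def reconcile_fbwt(agency_transactions, treasury_balance):
    """Compare an agency's recorded FBWT activity to the balance
    reported by Treasury, checkbook-style. Amounts are signed:
    collections positive, disbursements negative."""
    book_balance = sum((t["amount"] for t in agency_transactions), Decimal("0"))
    difference = book_balance - treasury_balance
    return {
        "book_balance": book_balance,
        "treasury_balance": treasury_balance,
        "difference": difference,
        "reconciled": difference == 0,
    }

# Hypothetical monthly activity for one appropriation account.
transactions = [
    {"doc_no": "C-001", "amount": Decimal("1500.00")},  # collection
    {"doc_no": "D-014", "amount": Decimal("-900.00")},  # disbursement
    {"doc_no": "D-015", "amount": Decimal("-250.00")},  # disbursement
]
result = reconcile_fbwt(transactions, Decimal("350.00"))
print(result["reconciled"], result["difference"])  # prints: True 0.00
```

A nonzero difference would need to be researched and resolved, which is what makes the Navy's monthly reconciliation of 19 appropriation accounts, fed by numerous systems, so labor-intensive.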
The purpose of these tasks is to improve financial information for Navy management and provide information to support the financial statement audit. Completion of these tasks remains important because FBWT collections and disbursements are integral to the Navy’s Schedule of Budgetary Activity, which is currently undergoing a second year audit for fiscal year 2016. Further, the FBWT line item is included on the Navy’s balance sheet, which is expected to be audited when the Navy undergoes the full financial statement audit planned for fiscal year 2018. The FIAR Guidance states that reporting entities are to perform a process analysis. We found that the Navy did not implement certain tasks for documenting the FBWT process, as required by the FIAR Guidance. The process analysis includes tracing from a summary amount, such as a line item on a financial statement, to underlying support, such as accounts in the general ledger, and support for those accounts, such as subledgers and transactions. One of the purposes of the process analysis is to provide information on the flow of data through the various systems to the financial statements. To develop the process analysis, reporting entities are to identify assessable units, business processes, systems, and other characteristics associated with amounts reported in financial statement line items. A process analysis describes the process, such as military pay, and includes a system analysis depicting asset or transaction classes, underlying processes, assessable units and subunits, and associated systems. The Navy prepared two process analyses for the FBWT assertion package, one in April 2013 and the second in February 2014, when it expanded the scope of its FBWT audit readiness efforts to include DDRS-AFS and the FBWT-related financial statement line items. 
The Navy’s April 2013 process analysis identified collections and disbursements as assessable units for the FBWT process and the Navy’s general ledgers as sub-assessable units. While both of these analyses identify the key systems involved in the FBWT process, neither fully documents the flow of data through the various systems to the financial statements. Although the Navy provided narratives that describe the FBWT systems, the narratives did not include certain significant events in the flow of collection and disbursement transactions from feeder systems into the financial statements. Among the events not included was the reversal of general ledger amounts and other entries. Navy officials told us that the narratives they provided in the FBWT assertion were based on the original April 2013 FBWT scope, that is, from feeder systems to DDRS-B. In February 2014, in response to DOD OIG concerns regarding the lack of agreement of FBWT financial statement amounts to the general ledger and supporting transactions, the Navy expanded the scope of the FBWT audit readiness efforts through the financial statement line item. Navy officials said that they did not apply FIAR methodology to any new items included in the expanded scope of the FBWT audit readiness efforts. Because of the limited time remaining until the audit of the fiscal year 2015 Schedule of Budgetary Activity, the Navy did not pursue an additional FBWT validation of assertion. In the Navy’s case, the process analysis is particularly important for understanding the FBWT financial reporting process because the Navy’s transactions do not follow the typical flow of data used to produce financial statements. Generally, the flow is from subsidiary ledger to general ledger to trial balance to financial statements. Without a complete FBWT process analysis and system narratives, internal controls and risks for each of the systems in the process may not be readily identified and appropriately tested. 
As shown in figure 2, the Navy’s FBWT financial reporting process is complex, incorporates multiple information systems, and is based on systems originally created for budgetary reporting and support of other business functions. Figure 2 and the related narrative provide an overview of the Navy’s FBWT data flow for financial reporting. A more detailed description of the Navy’s FBWT data flow is included in appendix III. The original systems include subsidiary ledgers (Program Budget Information System (PBIS) and Defense Cash Accountability System (DCAS)), a budgetary reporting system (DDRS-B), and its general ledger systems. These systems were modified over time to provide financial reports and data for financial statement compilation. Our analysis found that the Navy’s FBWT process relies on subsidiary ledgers PBIS and DCAS to (1) distribute (allocate) the Navy’s funds to the general ledgers, (2) distribute collection and disbursement transaction information to the general ledgers, and (3) forward summary data to the DFAS budgetary reporting system (DDRS-B) for inclusion in the DFAS audited financial statements system (DDRS-AFS), which ultimately creates the Navy’s consolidated financial statements. Another important activity in the process analysis is the quantitative drilldown, which provides the sources to support a summarized amount, such as a financial statement line item. The FIAR Guidance requires the preparation of a level I and level II quantitative drilldown depicting dollar activity or balances for each assessable unit. A level I quantitative drilldown provides the first level of data sources, the assessable units, that make up the summarized amount on a financial statement. A level II quantitative drilldown provides the sub-assessable units that make up the amounts in the level I quantitative drilldown. 
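The level I and level II drilldown structure described above can be illustrated with a small sketch in which each level's amounts must tie to the level above it. The unit names and dollar amounts are hypothetical, not the Navy's actual assessable units or balances.

```python
# Hypothetical level I / level II quantitative drilldown for a
# financial statement line item. Illustrative names and amounts only.
drilldown = {
    "line_item": "Fund Balance with Treasury",
    "assessable_units": [                                        # level I
        {"name": "Collections", "amount": 40_000, "sub_units": [  # level II
            {"name": "General Ledger A", "amount": 25_000},
            {"name": "General Ledger B", "amount": 15_000},
        ]},
        {"name": "Disbursements", "amount": -15_000, "sub_units": [
            {"name": "General Ledger A", "amount": -9_000},
            {"name": "General Ledger B", "amount": -6_000},
        ]},
    ],
}

def check_drilldown(dd):
    """Verify each level II total ties to its level I amount, and
    return the level I total, which should tie to the line item."""
    for unit in dd["assessable_units"]:
        level2_total = sum(s["amount"] for s in unit["sub_units"])
        assert level2_total == unit["amount"], unit["name"]
    return sum(u["amount"] for u in dd["assessable_units"])

print(check_drilldown(drilldown))  # prints: 25000
```

A drilldown like this makes every source of activity behind the line item explicit, which is why its absence makes it hard to identify the full population of transactions, including journal vouchers.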
Navy officials told us that they did not prepare a level I quantitative drilldown for the Navy’s FBWT assessable units, showing how FBWT amounts are summarized for financial reporting, as they did not think this requirement was applicable for FBWT. The FIAR Directorate, in its review of the Navy’s FBWT assertion package, also determined that the quantitative drilldowns called for in the FIAR Guidance were not applicable. A FIAR Directorate official noted that the quantitative drilldown is intended to prioritize and disaggregate assessable units at the early stages of FIAR execution. A comprehensive reconciliation of the detailed transactions to the financial statements occurs later on. The FIAR Directorate official further noted that quantitative drilldowns by assessable unit were not necessary in this FIP because assessable units, such as military pay or contracts, are covered in other FIPs. Although the Navy and the FIAR Directorate said that a quantitative drilldown was not applicable, a level I quantitative drilldown for FBWT is critical for determining all the sources of transactions, including journal vouchers, comprising the population of transactions, as well as for prioritizing audit efforts. For example, system-generated entries and journal vouchers occur within DDRS-B. These journal vouchers are a source of activity affecting FBWT that, according to FIAR Guidance, should be prioritized for testing. Without identification and an understanding of the entire population of transactions, including journal vouchers and other system-specific entries that a drilldown will help identify, Navy management and the auditor will not have information important for an understanding of the source of transactions, which is necessary to assess risk and determine the level of audit work necessary. Other audit readiness issues resulting from the Navy’s FBWT process are its reconciliations and transactions posted to suspense accounts. 
The Navy’s FBWT reconciliation process is both complex and time-consuming. The Navy has 19 general funds (appropriations) and an FBWT account for each general fund, each of which the Treasury Financial Manual requires to be reconciled monthly to Treasury accounts. The diverse nature of the numerous feeder systems provides a large volume of transactions, and the Navy’s complex FBWT process complicates the reconciliation process. The Navy’s reconciliation process is further described in appendix IV. Suspense accounts have been a long-standing problem at DOD. For example, in fiscal year 2003, Congress authorized DOD to write off long-standing debit and credit transactions reported in suspense accounts. DOD subsequently reported that it wrote off transactions with an absolute value of $35 billion. In April 2014, we reported that DOD had recorded billions of dollars of disbursement and collection transactions in suspense accounts over the years because the proper appropriation accounts could not be identified and charged, generally because of coding errors. More recently, in March 2015, the DOD OIG withdrew its opinion on the USMC fiscal year 2012 Schedule of Budgetary Activity because of suspense accounts held at Treasury that contained USMC transactions that had not been posted to valid appropriations. Appendix V provides more information on the issues and extent to which DOD and the Navy use suspense accounts. The Navy did not prioritize certain FBWT audit readiness efforts required by the FIAR Guidance to provide reasonable assurance that its audit readiness efforts were adequate. Because assessable units provide the focus for financial improvement efforts, FIAR Guidance requires the prioritization of audit readiness efforts, including ranking assessable units in order of quantitative materiality and developing qualitative factors affecting audit readiness. The FIAR Guidance also requires documenting the audit readiness strategy. 
However, the Navy did not prioritize its FBWT audit readiness efforts, quantitatively or qualitatively, or fully implement its audit readiness prioritization and strategy for key information systems prior to assertion. Without prioritization, the Navy cannot reasonably assure that it will first address the highest-risk areas within the FBWT process and information technology. Navy officials told us that they did not produce a prioritization and audit strategy document because they considered FBWT systems complete, as they each contained 100 percent of the transactions. DOD’s FIAR Directorate reviewed the Navy’s FBWT assertion documentation and agreed with Navy officials, indicating that the “comprehensive nature of the FBWT FIP assertion” (1) did not lend itself to prioritization within the SBR assessable unit or audit segment and (2) did not require any follow-up prior to examination. However, certain activity is unique to each system, including system-generated entries, adjustments to reconcile to Treasury, and consolidation and elimination entries. As previously noted, a level I quantitative drilldown is critical for audit readiness and would show how each FBWT system is unique and the extent of system-specific activity, and would allow each FBWT system to be prioritized for audit purposes and assessed for risk. The FIAR Guidance states that agencies should rank each assessable unit in terms of risk and in order of quantitative materiality, with largest dollar activity being the highest priority. Further, in connection with performing a financial statement audit, government auditing standards state that the auditor gains an understanding of the operating environment and assesses key controls over information systems. While the same or similar transactions flow through each of the Navy’s FBWT systems, each system also includes unique activity. For example, DDRS-B includes system-generated entries that are not in the lower-level system of DCAS. 
In addition, qualitative factors, such as system ownership, can affect risk to varying degrees. By assigning the same priority to all of the FBWT systems, without regard to quantitative and qualitative factors and their effects on risk, the Navy cannot reasonably assure that it initiates audit readiness efforts and corrective actions for the higher-risk systems first. For an audit readiness plan for key information technology systems, the Navy provided a schedule that identified 22 relevant systems, 16 of which the Navy deemed key FBWT systems. For these 16 systems, the Navy noted that to assess audit readiness, 11 systems would receive self-assessments, 4 systems would receive independent assessments, and for 1 system the assessment type had not yet been determined. However, the Navy indicated that only 6 of the self-assessments and all 4 of the independent assessments were completed. For the remaining 6 key FBWT systems, the Navy did not provide planned start dates or expected completion dates or indicate when it would obtain audit readiness assurance for these systems. Navy officials thought the systems inventory schedule provided with the assertion package met the FIAR requirement for prioritization of systems. However, in our view, the Navy’s schedule did not meet the FIAR Guidance requirement to prepare an assessable unit strategy document listing all assessable units prioritized by quantitative rank and adjusted for significant qualitative factors and scoping out legacy systems and processes that will not be part of the audit-ready environment. The Navy’s lack of prioritization of key information technology systems used in the FBWT process limits management’s ability to focus audit readiness efforts on the most important systems. Further, such a prioritization would also provide information to auditors on the effectiveness of controls for these systems. 
Further, we noted that independent reports and reviews of key FBWT systems identified serious internal control deficiencies with reporting (DDRS-B), accounting (DCAS), and budgetary systems (PBIS). Statement on Standards for Attestation Engagements No. 16 reports on controls for a service organization incorporating DDRS-B included an adverse opinion for the period March 1 to November 30, 2014, and a qualified opinion for the period December 1, 2014, to July 31, 2015, due primarily to ineffective controls. The reviews of DCAS and PBIS identified significant deficiencies in internal controls. Without effective controls over key systems involved in the FBWT process, management may not have reasonable assurance that this financial statement line item is audit ready. The Navy did not fully implement certain FBWT internal control and assessment activities required by the FIAR Guidance. Specifically, the Navy did not document information technology general computer controls for significant systems or the hardware and software interfaces as required by the FIAR Guidance. In addition, as previously noted, the Navy did not sufficiently complete FIAR-required data flowcharts and system narratives. This includes an understanding of how data are processed and transferred in the various systems and how they interact with other data sources through the FBWT process to the financial statements. An important activity required in the key FIAR task of assessing and testing controls is the preparation of systems documentation to include or describe system narratives and flowcharts; risk assessments and internal control worksheets documenting its financial statement assertion risks; control activities and information technology general computer controls for significant systems, applications, or microapplications; system certifications or accreditations; system, end user, and systems documentation locations; and hardware, software, and interfaces. 
While the Navy prepared system narratives, flowcharts, financial reporting objectives, and control activities and included them in the FBWT assertion package, it did not prepare documentation of general computer controls for significant systems; system certifications or accreditations; system, end user, and systems documentation locations; or a description of hardware, software, and interfaces as required by the FIAR Guidance. Also, the system narratives and flowcharts the Navy provided did not sufficiently disclose the flow of data. This includes the Navy’s collection and disbursement activity through the financial statement line items, including FBWT on the balance sheet and outlays on the SBR. For example, as previously noted, the narratives did not include discussion of the reversal of general ledger transactions or other entries within DDRS-B. Navy officials told us that some of the missing systems documentation items might have been included in another audit segment or assertion package. However, the Navy did not provide evidence to support that claim, and no reference to another assertion package was made in the FBWT assertion package. According to the FIAR Guidance, documentation of performance of the required procedures for each FIP task must be completed and included in each applicable assertion package. Once the scope of the FBWT FIP was expanded to include all systems through the financial statement line item, FBWT audit readiness officials did not ensure that all required audit readiness procedures within the expanded scope were performed and the documentation supporting the procedures was available for auditors. Complete and accurate system narratives and flowcharts, and documentation of general computer controls, help to provide management and the auditor with information on the systems environment and data flow, which they use to prioritize audit efforts. 
Further, in preparing its April 2013 internal control assessment, the Navy identified key internal controls in the FBWT process, but it did not identify those controls by assessable unit as required by FIAR Guidance. The FIAR Guidance for this task requires (1) preparing an internal control assessment document for entity-level controls and for each assessable unit and (2) summarizing control activities that are appropriately designed and in place. Navy officials told us that several years ago they assembled a matrix of controls to be assessed. They organized the controls by the FBWT area that the controls supported or by control owner, and they thought that this met the FIAR Guidance requirement. Some of the internal controls the Navy identified and tested may be related to an assessable unit. However, the Navy did not identify controls for each assessable unit. Identifying controls by assessable unit is important for determining whether assessable units, sub-assessable units, and associated systems are producing reliable information and helps link systems and controls to the transaction flows. As a result, the Navy is missing an opportunity to identify and correct control deficiencies for the key systems that could affect its FBWT audit readiness. For example, DCAS is the primary subledger used to process the universe of collection and disbursement transactions for FBWT. Although the Navy did identify some controls involving DCAS, it did not identify internal controls by system or assessable unit. Therefore, the Navy does not have assurance that DCAS is operating as intended and that output from the system is reliable. The Navy’s substantive testing for key supporting documents may not provide sufficient evidence that its efforts to produce supporting documentation are sustainable for future audits. 
FIAR-required activities for this task include preparation of the transaction population, reviews of unusual or invalid transactions, and identification of key supporting documents. The FIAR Guidance also requires developing a test plan, selecting random samples from the population of transactions, and testing individual transactions and balances to confirm the existence and evaluate the quality of supporting documentation for relevant financial statement assertions. An evaluation of key supporting documentation is important for determining whether the Navy would be able to support amounts presented in the financial statements and provide an external auditor with sufficient and appropriate evidence to perform the audit. In the first round of substantive testing, the Navy identified significant deficiencies that resulted in the test failing. The Navy then performed another round of substantive testing, which, although it passed, may not provide sufficient evidence of the Navy’s ability to produce needed documentation in a sustained manner for future audits. In the second round of testing, the Navy completed 13 procedures, 3 of which involved statistical sampling while the other 10 relied on analytic or non-random sampling procedures. In both rounds of testing, documentation was limited to a 3-month period, which, even if successful, may not provide sufficient evidence of the consistent availability of supporting documentation for a 12-month period, or the ability to timely produce needed documentation over a sustained period. Because the Navy performed two rounds of substantive tests, Navy officials considered this FIAR task implemented. Further, Navy officials told us that after they asserted audit readiness, they anticipated that they would be under audit by an IPA soon thereafter, so time constraints did not permit further testing. However, lack of supporting documentation has historically been an issue on DOD audits. 
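The FIAR-required step of selecting random samples from the population of transactions for substantive testing can be sketched as follows. The document numbers, population size, and sample size are illustrative assumptions, not the Navy's actual testing parameters.

```python
import random

def select_sample(population_ids, sample_size, seed=None):
    """Select a simple random sample of transaction identifiers for
    substantive testing of supporting documentation. A fixed seed
    makes the selection reproducible for audit documentation."""
    rng = random.Random(seed)
    return rng.sample(population_ids, sample_size)

# Hypothetical universe of disbursement document numbers.
universe = [f"DOC-{i:05d}" for i in range(1, 10_001)]
sample = select_sample(universe, 45, seed=2016)
print(len(sample), len(set(sample)))  # prints: 45 45
```

Each sampled transaction would then be traced to its key supporting documents; the point of statistical (rather than judgmental) selection is that results can be projected to the full population.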
This was also the case in the audit of the Navy’s fiscal year 2015 Schedule of Budgetary Activity, in which the IPA disclaimed an opinion, in part, because the Navy could not provide sufficient, appropriate audit evidence to support transactions. Without performing adequate substantive testing, the Navy does not have reasonable assurance of the availability of key documentation to support amounts presented in the financial statements. Further, none of the procedures tested the supporting documentation for supplemental quarterly reconciliations. Supplemental quarterly reconciliations provide a secondary check on the accuracy of monthly reconciliations and on other monitoring procedures. Both monthly and quarterly reconciliations are key internal controls for FBWT, and testing for these reconciliations provides reasonable assurance that supporting documentation is maintained and available for financial statement audits. In addition, the Navy did not perform reviews to identify unusual, invalid, or missing data as required by the FIAR Guidance. Specifically, the FIAR Guidance requires such reviews on the universe of transactions to identify and address (1) unusual or invalid transactions and (2) abnormal balances or missing data fields. Navy officials stated that these FIAR Guidance tasks were not performed because they thought these tasks would be performed in another FIP. However, we were not provided with evidence that such tasks were included in another FIP, and no reference to another FIP was made in the FBWT assertion package. FIAR Guidance requires that documentation of performance of the required procedures for each FIP task be completed and included in each applicable assertion package. Without this testing, there is increased risk that errors may not be detected. The Navy has made progress in performing its key audit readiness activities, including the development of its FBWT FIP to help guide implementation of its General Fund SBR improvement efforts. 
However, the Navy did not fully complete certain tasks in accordance with the FIAR Guidance prior to asserting audit readiness for FBWT, a significant account for the Navy’s as well as DOD’s department-wide SBR auditability. FIAR Guidance Discovery phase tasks that the Navy did not fully complete include the FBWT process analysis, system narratives, quantitative and qualitative drilldowns, prioritization of audit readiness efforts, and documentation of general computer controls. In addition, although the Navy performed substantive tests for supporting documentation, such testing may not provide sufficient evidence of the Navy’s ability to produce needed documentation in a sustained manner for future audits. For the most part, the Navy did not complete these tasks because Navy officials believed that their efforts had satisfied the FIAR Guidance requirement, that certain tasks did not apply to the FBWT effort, or that time constraints prevented completion of the tasks. However, it is critical that FBWT tasks be adequately evaluated and documented. Although required audit readiness procedures for FBWT were not fully completed, the Navy decided to go forward with an audit of its Schedule of Budgetary Activity. The IPA’s fiscal year 2015 audit resulted in a disclaimer of opinion and the reporting of material weaknesses in internal control and related recommendations, including several recommendations pertaining to FBWT. Recommendations made in this report are in addition to the recommendations made in the IPA’s audit report. Successful completion of the FIAR Discovery phase tasks for FBWT may identify additional deficiencies that affect the auditability of the Navy’s financial statements. Until the Navy fully identifies and remediates the deficiencies specific to its FBWT effort, its ability to achieve audit readiness and remediate internal control weaknesses is hindered. 
Resolution of these deficiencies is crucial to the Navy’s and DOD’s efforts to meet the September 30, 2017, statutory target date for validating audit readiness of DOD’s full financial statements. To improve the Navy’s implementation of the FIAR Guidance for its General Fund FBWT FIP and facilitate efforts to achieve SBR auditability, we recommend that the Secretary of the Navy direct the Assistant Secretary of the Navy, Financial Management and Comptroller, to take the following seven actions in the Discovery phase:

1. Update FBWT data flowcharts and narratives to fully describe the flow of data from the Navy’s receipt of collection and disbursement transaction information through the financial statement line items, including the reversal of general ledger trial balance data generated by the automated system and other entries made within DDRS-B.

2. Prepare a level I quantitative drilldown in accordance with the FIAR Guidance.

3. To prioritize audit readiness efforts for the key FBWT systems, prepare an audit strategy that identifies for each system (1) the Navy’s plan for assessing the system to gain assurance that the system can be relied on; (2) the assessment types, including prioritizing the assessments based on qualitative and quantitative factors for each system; and (3) planned start and completion dates of these assessments for each system.

4. Prepare, in accordance with FIAR Guidance, the documentation of control activities and information technology general computer controls for significant systems; system certifications or accreditations; system, end user, and systems documentation locations; and hardware, software, and interfaces.

5. Prepare an internal control assessment document for each assessable unit, summarizing control activities that are appropriately designed and in place.

6. Perform sufficient testing for supporting documentation to reasonably determine whether such documentation, including that for key reconciliations, is available in a sustainable manner for future audit efforts.

7. For each fiscal year expected to be under audit, identify and address unusual and invalid transactions, abnormal balances, and missing data fields in the universe of collection and disbursement transactions.

We provided a draft of this report to the Navy for review and comment. In its written comments, reprinted in appendix VI, the Navy concurred with our seven recommendations. In response to our recommendations, the Navy stated that it has actions planned, taken, or under way to (1) develop procedures and documentation that describe the processes associated with the flow of data; (2) prepare a quantitative drilldown; (3) prioritize audit readiness efforts for key FBWT systems; (4) document control activities, information technology general computer controls for significant systems, systems documentation locations, and hardware, software, and interfaces; (5) prepare an internal control assessment document; (6) test effectiveness of FBWT controls, which includes assessing the availability of supporting documentation; and (7) obtain monthly data from DFAS on invalid FBWT transactions. The Navy also provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to the Secretary of Defense; the Deputy Chief Management Officer; the Under Secretary of Defense (Comptroller/Chief Financial Officer); the Deputy Chief Financial Officer; the Director, Financial Improvement and Audit Readiness; the Secretary of the Navy; the Assistant Secretary of the Navy; the Chief Management Officer of the Navy; the Directors of the Defense Finance and Accounting Service and Defense Finance and Accounting Service, Cleveland; the Director of the Office of Management and Budget; and interested congressional committees. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9869 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix VII. The objective of our review was to determine the extent to which the Navy developed and implemented the Discovery phase for its General Funds’ Fund Balance with Treasury (FBWT) financial improvement plan (FIP) in accordance with the Financial Improvement and Audit Readiness (FIAR) Guidance. This objective was applied to FBWT for three of the Navy’s general ledgers—the Standard Accounting and Reporting System – Field Level, the Standard Accounting and Reporting System – Headquarter Claimant Module, and the Navy Enterprise Resource Planning. We excluded from our review the Navy’s fourth general ledger, Navy Systems Management Activity, because of its classified activity. To address our objective, we analyzed the Navy’s FBWT FIP to determine whether it contained the applicable elements and tasks to be performed for the Discovery phase of audit readiness as required by the FIAR Guidance. We identified and reviewed the Navy’s FBWT FIP key deliverables required by the FIAR Guidance, such as system narratives and flowcharts, internal control assessments, and the Navy’s test results. We performed a site visit to the Defense Finance and Accounting Service (DFAS), Cleveland, and walked through the FBWT process, reconciliations, and related systems. We interviewed Navy, DFAS, and FIAR Directorate officials within Department of Defense’s (DOD) Office of the Under Secretary of Defense (Comptroller) to obtain explanations and clarifications on documentation we reviewed. 
In addition, we reviewed the results of DOD’s Office of Inspector General audits as well as independent public accountant examinations of audit readiness efforts related to the Navy’s FBWT. We conducted this performance audit from April 2014 to August 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. Table 1 presents the reporting entity methodology in the Financial Improvement and Audit Readiness Guidance, which the Navy is required to follow in implementing its Fund Balance with Treasury financial improvement plan. The Navy’s Fund Balance with Treasury (FBWT) financial reporting process incorporates multiple information systems and is based on systems originally created for budgetary reporting and support of other business functions. The original systems include the Navy’s transaction-processing budget and reporting systems and its general ledger systems. These systems were modified over time to also provide financial reports and data for financial statement compilation. As a result, as shown in figure 2, the flow of the Navy’s transactional data does not follow the typical flow of data from subsidiary ledger to general ledger to trial balance to financial statements. The FBWT process relies on subsidiary ledgers to distribute and record the Navy’s funds and to record collection and disbursement transaction information to the general ledgers. Data are forwarded from subsidiary ledgers to the budgetary reporting system for inclusion in the financial statement system, which ultimately creates the Navy’s consolidated financial statements.
This process results in some transactions flowing to the financial statements that are not posted in any of the general ledgers. This bypass of the general ledgers for financial statement preparation represents a significant audit challenge because the general ledgers do not agree with the Navy’s financial statements for FBWT. General ledgers are typically an entity’s primary system of record, where all transactions are recorded and from which financial statements are prepared. To support FBWT financial statement line item amounts in an audit, it is essential that the Navy reconcile any differences between the financial statements and the general ledgers, and between the general ledgers and underlying transactions, to ensure that all transactions in the financial statements are recorded in the general ledgers. Specifically: According to the Navy, the Defense Cash Accountability System (DCAS) contains the universe of collection and disbursement transactions, except for timing difference transactions. The Navy’s FBWT process begins when collection and disbursement transaction information from the Navy’s multiple disbursement feeder systems is posted to DCAS. When DCAS edit checks identify a transaction with missing or incorrect account coding information, the transaction is not distributed to the general ledgers and remains in DCAS as an “undistributed” transaction until it can be investigated and the necessary information obtained. DCAS distributes all other transactions to one of the Navy’s general ledgers. Each month, DCAS transmits summary collection and disbursement information, including undistributed transactions, to the Defense Departmental Reporting System - Budgetary (DDRS-B). DCAS is also used for the Navy’s fund balance reconciliation process with the Department of the Treasury’s (Treasury) Central Accounting Reporting System (CARS).
DDRS-B has historically been used as a budgetary reporting tool for numerous Navy commands and Navy headquarters, but it has also been adapted for financial reporting. It receives budgetary information from the Program Budgetary Information System and proprietary information from DCAS. DDRS-B also receives summary information from the general ledgers. To avoid duplication of transactions, system-generated journal entries are made within DDRS-B that are intended to reverse the general ledger transactions, which were also posted by DCAS. The general ledger is typically the system of record for an entity’s financial reporting. However, since DCAS is considered to contain the universe of the Navy’s collection and disbursement transactions, DCAS transactions—not the summary transaction information from the general ledgers—flow through DDRS-B to the financial statements. Also, within DDRS-B, journal entries (forced-balance entries) are prepared and posted to temporarily make the balances in the Navy’s FBWT equal to the balances in Treasury’s CARS, until such differences can be reconciled as required by the Treasury Financial Manual. Navy officials said that forced-balance journal entries are eventually reconciled to transaction detail and are reversed in the following period; they are not posted to the Navy’s general ledgers. As a result, the Navy’s general ledger balances do not directly agree with the Navy’s financial statements. The Defense Departmental Reporting System - Audited Financial Statements (DDRS-AFS) receives an adjusted trial balance from DDRS-B. Consolidation and elimination entries are posted in DDRS-AFS to produce the Navy’s financial statements. For the Navy, net expenditure amounts on the Schedule of Budgetary Activity and financial statements are supported by DDRS-AFS, then DDRS-B, then DCAS, and then numerous feeder systems, while the general ledgers are omitted from this drilldown process.
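The drilldown described above can be sketched as a minimal data-flow model. This is an illustrative sketch only: the system names come from the report, but the transactions and amounts are invented to show why the general ledger totals cannot agree with the financial statement balance when undistributed transactions bypass the ledgers.

```python
# Illustrative sketch of the Navy's FBWT reporting flow; amounts are invented.
transactions = [
    {"amount": 500.0, "coding_valid": True},   # passes DCAS edit checks
    {"amount": 300.0, "coding_valid": True},   # passes DCAS edit checks
    {"amount": 200.0, "coding_valid": False},  # stays in DCAS as "undistributed"
]

# DCAS is considered to contain the universe of collection and disbursement
# transactions, so its total flows through DDRS-B to the statements.
dcas_total = sum(t["amount"] for t in transactions)

# Only transactions with valid account coding are distributed to a general ledger.
general_ledger_total = sum(t["amount"] for t in transactions if t["coding_valid"])

financial_statement_total = dcas_total  # undistributed amounts are included

print(general_ledger_total)       # 800.0
print(financial_statement_total)  # 1000.0 -- does not agree with the ledgers
```

In this toy example, the $200 undistributed transaction appears in the statement balance but in no general ledger, which is the reconciliation gap the report describes.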
Journal entries posted in DDRS-B and undistributed transactions in DCAS are not posted in the Navy’s general ledgers, but are reflected in financial statement FBWT line item balances. The Navy’s Fund Balance with Treasury (FBWT) reconciliation process requires investigating thousands of undistributed transactions. The Navy has 19 general funds (appropriations) and an FBWT account for each general fund, which the Treasury Financial Manual requires to be reconciled monthly to Department of the Treasury (Treasury) accounts. The diverse nature of the numerous feeder systems providing the transaction information also complicates the FBWT reconciliation process. The Navy receives transaction information from both centralized and decentralized disbursement feeder systems, including the Centralized Automated Disbursing System, Intra-Governmental Payment and Collections, Mechanization of Contract Administration Services, and other component reporting through the Defense Finance and Accounting Service (DFAS) Disbursing Station Symbol Numbers (DSSN) by Navy Shore DSSNs and Navy Ship DSSNs. The complexity of the overall financial reporting process leads to difficulties in the FBWT reconciliation process. With multiple systems used in the process and the Navy’s unique process flow, additional reconciliations are required. Consequently, each month the Navy completes four reconciliations for each of its 19 general funds. Each quarter, the Navy prepares reconciliations for each of the 19 general funds as well as consolidating schedules and reconciliations between financial systems. As of March 2015, it was taking the Navy from 2 to 3 months from the end of each fiscal quarter to complete its FBWT reconciliations. In the first quarter of fiscal year 2015, DFAS processed, through the Defense Cash Accountability System (DCAS), an average of 1.6 million nonpayroll Navy transactions per month.
Of those transactions processed, an average of about 22,000 per month required intervention by DFAS and Navy FBWT reconciliation staff for them to be posted appropriately to a general ledger account. A transaction registered in the Navy’s DCAS and not distributed to one of the Navy’s general ledgers is a variance that requires human investigation and adjustment so it can be distributed and posted in one of the Navy’s general ledgers. Two types of variances are a recurring part of the Navy’s FBWT reconciliation process: (1) forced-balance entries, which are necessary to agree the Navy’s balances with Treasury’s balances until timing difference transactions can be resolved, and (2) undistributed transactions, which come from feeder systems and contain insufficient or incorrect coding. Reconciling forced-balance entries and investigating undistributed transactions contribute to the labor and time required to reconcile the Navy’s FBWT. Figure 3 shows the reconciliation of Navy Treasury accounts to the Navy’s general ledger for each of the quarters for fiscal year 2014 and the variance at the end of each quarter. As noted in figure 3, at the end of fiscal year 2014, the total of net forced-balance entries and net undistributed transactions was $777 million (1.0 percent of the Navy’s total net expenditures for the year). In fiscal year 2003, Congress authorized the Department of Defense (DOD) to write off long-standing debit and credit transactions that occurred before March 31, 2001, and could not be cleared from the department’s books because DOD lacked the supporting documentation necessary to record the transactions to the correct appropriations. DOD subsequently reported that it wrote off an absolute value of $35 billion, or a net value of $629 million, of suspense account amounts and check payment differences using this authority. Congress required GAO to review and report on DOD’s use of this write-off authority.
DOD reported that as of December 31, 2004, after the write-off of $35 billion, it still had more than $1.3 billion (absolute value) of suspense amounts that were not cleared for more than 60 days, and DOD acknowledged that its suspense reports were incomplete and inaccurate. Our June 2005 audit report concluded that without compliance with existing laws and enforcement of its own guidance for reconciling, reporting, and resolving amounts in suspense and check differences on a regular basis, the buildup of current balances would likely continue, the department’s appropriation accounts would likely remain unreliable, and another costly write-off process could eventually be required. In April 2014, we reported that DOD had recorded billions of dollars of disbursement and collection transactions in suspense accounts because the proper appropriation accounts could not be identified and charged, generally because of coding errors. (Table 2 compares the Navy’s suspense account balances with total DOD suspense account balances, based on data in our April 2014 report.) In a letter dated March 23, 2015, the DOD Office of Inspector General (DOD OIG) withdrew its opinion on the U.S. Marine Corps’ (USMC) fiscal year 2012 Schedule of Budgetary Activity because of suspense accounts held by the Department of the Treasury (Treasury) that contained USMC transactions that had not been posted to valid appropriations. Because these suspense accounts contained unrecorded transactions from all DOD components, the DOD OIG was unable to quantify the number and dollar amount of USMC transactions that resided in the accounts and whether those transactions were material to the fiscal year 2012 USMC Schedule of Budgetary Activity.
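The gap between the absolute and net figures cited above (a $35 billion absolute value versus a $629 million net value) arises because offsetting debits and credits cancel when amounts are netted. A minimal illustration, using invented amounts rather than actual DOD data:

```python
# Invented suspense-account entries: debits positive, credits negative.
suspense_entries = [1_200.0, -1_150.0, 400.0, -425.0]

net_value = sum(suspense_entries)                       # offsets cancel
absolute_value = sum(abs(e) for e in suspense_entries)  # total activity

print(net_value)       # 25.0
print(absolute_value)  # 3175.0
```

A small net balance can therefore mask a large volume of unresolved transactions, which is why net figures do not reflect the gross amount in suspense or the age of individual transactions.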
In addition to the variances identified in the Navy’s reconciliation process presented in figure 3, figure 4 identifies other transactions posted to Navy general ledger suspense accounts and other fund suspense account transactions not yet identified as belonging to the Navy. In addition to the undistributed transactions and forced-balance amounts shown in figure 3, the Navy has two types of suspense accounts: General ledger suspense accounts represent unmatched transactions for expenditures and collections. These transactions are distributed from the Defense Cash Accountability System and are recorded in a general ledger against valid fund accounts but lack sufficient information to match them with another data element, such as an obligation. Fund suspense accounts represent temporary holding accounts used to record unidentifiable general, revolving, special, or trust fund expenditures or collections that are not included in the Navy’s general ledgers. Fund suspense accounts can also include deposit accounts used to record money that the federal government owes to others, including state and local income taxes, security deposits, civilian pay allotments, foreign taxes, and estates of deceased service members. Balances in Navy suspense accounts totaled $191 million as of September 30, 2014, and represented only 0.3 percent of the Navy’s total net expenditures. The balances reported are net, meaning increases and decreases are added together, and do not reflect the gross amount in suspense accounts or the age of individual transactions. In addition to the contact named above, the following individuals made key contributions to this report: Francine DelVecchio, Doreen Eng, Maxine Hattery, Jason Kelly, Richard Kusman, Roger Stoltz (Assistant Director), and Chevalier Strong.
The National Defense Authorization Act for Fiscal Year 2014 mandates an audit of DOD's fiscal year 2018 department-wide financial statements. To help achieve this, the DOD Comptroller issued the FIAR Guidance to provide a standard methodology for DOD components to follow to improve financial management and achieve audit readiness, and designated the Statement of Budgetary Resources (SBR) as an audit priority. Full implementation of the Navy's General Fund financial improvement plan (FIP) for Fund Balance with Treasury (FBWT) is essential to achieving audit readiness for its General Fund SBR. The Navy asserted Schedule of Budgetary Activity (SBA) audit readiness as of September 30, 2014, and in February 2016 received a disclaimer of opinion on the audit of its SBA for fiscal year 2015. GAO is mandated to audit the U.S. government's consolidated financial statements, which cover activities and balances of executive branch agencies, including DOD. GAO's objective in this report was to determine the extent to which the Navy developed and implemented the Discovery phase of its General Fund FBWT FIP in accordance with the FIAR Guidance. GAO analyzed the Navy's FBWT FIP to determine whether it contained the tasks and activities required by the FIAR Guidance for the Discovery phase. GAO also reviewed the Navy's FBWT FIP key deliverables, such as process narratives and flowcharts, internal control assessments, and test results. The Navy has made progress in performing audit readiness activities, including developing an FIP for its FBWT. These activities are critical to the Navy's General Fund SBR improvement efforts. The Navy's FBWT FIP is particularly important as it addresses improvement efforts across multiple business processes, including contract and vendor payments and military and civilian payroll that provide significant input to the SBR.
However, the Navy did not fully implement certain tasks in its FBWT FIP in accordance with the Department of Defense's (DOD) Financial Improvement and Audit Readiness (FIAR) Guidance. These included activities in all four key tasks of the Discovery phase, the first of the five FIAR Guidance phases. In the Discovery phase, the reporting entity documents processes, prioritizes audit readiness efforts, assesses and tests controls, and evaluates supporting documentation. Document processes. The Navy did not fully document its FBWT process in system narratives and flowcharts. For example, the Navy's analysis did not explain the complex process that occurs within the Defense Departmental Reporting System - Budgetary, including merging data and deleting duplicative transactions. In the Navy's case, the process analysis is particularly important because the Navy's transactions do not follow the typical flow of data used to produce financial statements. Without a complete FBWT process analysis and system narratives, internal controls and risks for each of the systems in the process may not be readily identified and appropriately tested. Prioritize audit readiness efforts. The Navy did not prioritize FBWT audit readiness efforts or fully implement its audit readiness prioritization and strategy for key information systems prior to its assertion of audit readiness. The Navy's lack of prioritization of key information technology limits management's ability to focus audit readiness efforts on the systems with the highest risk. Assess and test internal controls. Within the FBWT assertion package, the Navy did not document information technology general computer controls for significant systems or the hardware and software interfaces, as required. Also, the Navy did not identify internal controls by assessable units (e.g., information systems supporting financial statement line items or other discrete portions of the program).
Identifying controls by assessable unit is important for determining whether assessable units, sub-assessable units, and associated systems are producing reliable information and helps link systems and controls to the transaction flows. Evaluate supporting documentation. Although the Navy performed substantive tests for supporting documents, such testing may not provide sufficient evidence of the Navy's ability to produce documentation in a sustainable manner for future audits. An evaluation of key supporting documentation is important for determining whether the Navy would be able to support amounts presented in the financial statements or provide an external auditor with sufficient and appropriate evidence to perform the audit. Addressing these shortfalls is critical to achieving audit readiness. GAO recommended that the Navy fully implement the FIAR Guidance for FBWT in the areas of process analysis, prioritization, internal control assessment and testing, and evaluation of supporting documentation to support audit readiness. The Navy concurred with all seven recommendations.
The Internet became widely accessible to U.S. households by the mid-1990s. For a few years, the primary means to access the Internet was a dial-up connection, in which a standard telephone line is used to make an Internet connection. A dial-up connection offers data transmission speeds of up to 56 kilobits per second (kbps). Broadband access to the Internet became available by the late 1990s. Broadband differs from a dial-up connection in certain important ways. First, broadband connections offer a higher-speed Internet connection than dial up. For example, some broadband connections offer speeds exceeding 1 million bits per second (Mbps) both upstream (data transferred from the consumer to the Internet service provider) and downstream (data transferred from the Internet service provider to the consumer). These higher speeds enable consumers to receive information much faster and thus enable certain applications to be used and content to be accessed that might not be possible with a dial-up connection. Second, broadband provides an “always on” connection to the Internet, so users do not need to establish a connection to the Internet service provider each time they want to go online. The higher transmission speeds that broadband offers cost more than dial up, and some broadband users pay a premium to obtain very-high-speed service. Consumers can receive a broadband connection to the Internet through a variety of technologies, including, but not limited to, the following: Cable modem. Cable television companies first began providing broadband service in the late 1990s over their cable networks. When provided by a cable company, broadband service is referred to as cable modem service. Cable modem service is primarily available in residential areas. Cable modem service enables cable operators to deliver broadband service by using the same coaxial cables that deliver pictures and sound to television sets.
Most cable modems are external devices that have two connections, one to the cable wall outlet and the other to a computer. Although the speed of service varies with many factors, download speeds of up to 6 Mbps are typical. Cable providers are developing even higher-speed services. DSL. Local telephone companies provide digital subscriber line (DSL) service, another form of broadband service, over their telephone networks on capacity unused by traditional voice service. To provide DSL service, telephone companies must install equipment in their facilities and install or provide DSL modems and other equipment at customers’ premises and remove devices on phone lines that may cause interference. Most residential customers receive older, asymmetric DSL (ADSL) service with download speeds of 1.5 Mbps to 3 Mbps. ADSL technology can achieve speeds of up to 8 Mbps over short distances. Newer DSL technologies can support services with much higher download speeds. Satellite. Three providers currently offer satellite broadband service in the United States. These providers use geosynchronous satellites that orbit in a fixed position above the equator and transmit and receive data directly to and from subscribers. Satellite companies provide transmission from the Internet to the user’s computer and from the user’s computer to the Internet, eliminating the need for a telephone connection. Typically a consumer can expect to receive (download) at a speed of about 1 Mbps and send (upload) at a speed of about 200 kbps. Transmission of data via satellite causes a slight lag in transmission, typically one-half to three-fourths of a second, thus rendering this service less suitable for certain Internet applications, such as videoconferencing. While satellite broadband service may be available throughout the country, it generally costs more than most other broadband modes and its use requires a clear line of sight between the customer’s antenna and the southern sky.
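The transmission lag noted above is largely propagation delay. A geosynchronous satellite orbits about 35,786 kilometers above the equator, and a request-response exchange must traverse that distance four times (up and down for the request, up and down for the reply) at roughly the speed of light. A back-of-the-envelope estimate, ignoring slant range, ground-network, and processing delays (so this is a lower bound):

```python
# Lower-bound propagation delay for geosynchronous satellite Internet service.
ALTITUDE_KM = 35_786           # geosynchronous orbit altitude above the equator
SPEED_OF_LIGHT_KM_S = 299_792  # speed of light in vacuum

one_hop_s = ALTITUDE_KM / SPEED_OF_LIGHT_KM_S  # ground station to satellite
round_trip_s = 4 * one_hop_s                   # request up/down + reply up/down

print(round(round_trip_s, 2))  # 0.48
```

The result of roughly half a second is consistent with the one-half to three-fourths of a second lag cited above, the upper end reflecting the ground-segment delays omitted here.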
Both the equipment necessary for service and recurring monthly fees are generally higher for satellite broadband service, compared with most other broadband transmission modes. Wireless. Land-based, or terrestrial, wireless broadband connects a home or business to the Internet using a radio link. Some wireless services are provided over unlicensed radio spectrum and others over spectrum that has been licensed to particular companies. In licensed bands, some companies are offering fixed wireless broadband throughout cities. Also, mobile telephone carriers—such as the large companies that provide traditional cell phone service—have begun offering broadband mobile wireless Internet service over licensed spectrum—a service that allows subscribers to access the Internet with their mobile phones or laptops in areas throughout cities where their provider supports the service. A variety of broadband-access technologies and services also are provided on unlicensed spectrum—that is, spectrum that is not specifically under license for a particular provider’s network. For example, wireless Internet service providers may offer broadband access in particular areas by establishing a network of subscriber stations, each with its own antenna that relays signals throughout a neighborhood and has a common interface to the Internet. Subscribers place necessary reception equipment outside their homes that transmits and receives signals from the nearest antenna. Also, wireless fidelity (Wi-Fi) networks—which provide broadband service in so-called “hot spots,” or areas within a radius of up to 300 feet—can be found in cafes, hotels, airports, and offices. Hot spots generally use a short-range technology that provides speeds up to 54 Mbps. Some technologies, such as Worldwide Interoperability for Microwave Access (known as WiMAX), can operate on either licensed or unlicensed bands, and can provide broadband service up to approximately 30 miles. Fiber. 
This technology, also known as fiber optic, is a newer technology for providing broadband service. Fiber optic technology converts electrical signals carrying data to light and sends the light through transparent glass fibers about the diameter of a human hair. Fiber can transmit data at speeds far exceeding current DSL or cable modem speeds, typically tens or even hundreds of megabits per second. Fiber optic technology may be provided in several ways, including fiber to a customer’s home or business or to a location somewhere between the provider’s facilities and the customer. In the latter case, the last part of the connection to the customer’s premises may be provided over cable, copper loop, or radio technology. Such hybrid arrangements may be less costly than providing fiber all the way to the customer’s premises, but they generally cannot achieve the high transmission speed of a full fiber-to-the-premises connection. Although broadband often is referred to as a singular entity, a variety of data speeds—ranging from 768 kbps to greater than 100 Mbps—are defined as broadband. FCC’s new categories for collecting data on broadband Internet access service are provided in table 1. FCC has primary responsibility for regulating broadband. Section 706 of the Telecommunications Act of 1996 directs FCC to encourage the deployment of advanced telecommunications capability, which includes broadband, to all Americans. Under this authority, FCC has established a minimal regulatory environment for broadband Internet access services, stating that less regulation will promote the availability of competitive broadband services to consumers. FCC, through a number of proceedings, classified broadband Internet access (regardless of the platform) as an information service—a classification that reduces regulatory requirements applicable to broadband.
FCC does not have explicit statutory authority to regulate the provision of information services; however, FCC has the authority to impose regulations under what is termed its ancillary jurisdiction to regulate services that are reasonably related to its existing statutory authority. FCC has concluded that it has ancillary jurisdiction to promulgate regulations on broadband through its rule-making procedures, but it has not yet exercised this authority. FCC also has the authority to adopt broadband regulations to ensure that broadband providers are capable of providing authorized surveillance to law enforcement agencies. As part of its responsibilities, FCC has periodically issued a report to Congress on the status of advanced telecommunications capability in the United States. To assist in the preparation of this report, in 2000, FCC adopted a semiannual reporting requirement for facilities-based broadband Internet service providers. In November 2004, FCC modified its rules on filing this information, and the revised rules went into effect for the companies’ second filing in 2005. Specifically, FCC removed existing reporting thresholds, and companies were required to report their total state subscribership by technology. In 2006, we reported that the approach FCC then used to collect data on broadband deployment, which counted broadband service providers with subscribers at the ZIP code level, resulted in inadequate information about broadband deployment. Subsequent to our recommendation, in March 2008, FCC acted to increase the precision and quality of its broadband data by revising its methodology and requiring that broadband providers report the number of broadband connections in service by Census Tract. Furthermore, the Broadband Data Improvement Act calls for additional actions to improve the quality of data available on broadband deployment. 
Among other things, the Act directs FCC to (1) shift its assessments of broadband deployment from a periodic basis to an annual basis; (2) periodically survey consumers to collect information on the types of technologies used by consumers to access the Internet, the applications or devices used in conjunction with broadband service, and the actual connection speeds of users; (3) collect information on reasons why consumers have not subscribed to broadband services; (4) determine certain demographic data for geographical areas not served by any provider of advanced telecommunications capability (i.e., areas where broadband has not yet been deployed); and (5) provide information on the speed and price of broadband service capability in 25 other countries. Two other federal agencies have responsibility for telecommunications policies. The Office of Science and Technology Policy (OSTP) within the Executive Office of the President has a broad mandate to advise the President and the federal government on the effects of science and technology on domestic and international affairs and has led interagency efforts to develop science and technology policies and budgets. The Department of Commerce’s National Telecommunications and Information Administration (NTIA) is the President’s principal telecommunications and information adviser and works with other executive branch agencies to develop the administration’s telecommunications policies. Agency officials we spoke with during the Bush Administration told us that the market-based U.S. policy on broadband deployment could be found in, or had been shaped by, various statutes, presidential speeches, regulations, and reports. 
For example: Congress passed the Telecommunications Act of 1996 to encourage the deployment of advanced telecommunications capability, which includes broadband, and to “preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.” In a speech delivered in March 2004, President Bush stated “that there should be universal, affordable access to broadband by 2007 and that, as soon as possible thereafter, the country should make sure that consumers have got plenty of choices for their broadband carriers.” In 2004, FCC modified regulations applicable to local telephone companies in order to expand incentives for them to invest in network upgrades. In a series of orders, FCC ruled that incumbent local telephone companies did not have to make certain elements of their fiber networks serving residential customers available to competitors at cost-based rates. A 2008 NTIA report reaffirmed President Bush’s vision of universal broadband access by noting, “[f]rom its first days, the Administration has implemented a comprehensive and integrated package of technology, regulatory, and fiscal policies designed to lower barriers and create an environment in which broadband innovation and competition can flourish.” The Broadband Data Improvement Act of 2008 was enacted to “improve the quality of Federal and State data regarding the availability and quality of broadband services and to promote the deployment of affordable broadband services to all parts of the Nation.” Officials at OSTP, FCC, and NTIA during the Bush Administration told us that the current federal broadband policy was market-based; OSTP told us that the Bush Administration had implemented fiscal, technology, and regulatory policies based on the recognition that a competitive marketplace provides the best environment for achieving the United States’ broadband goals, and competitive markets should be deregulated; an
official at FCC characterized FCC’s broadband policy in recent years as one that reduced barriers to entry, lessened regulation of broadband, and encouraged investment; and NTIA told us that federal broadband policies of the past few years flow from an early speech made by President Bush that emphasized the deployment of broadband, and that NTIA has executed initiatives to remove economic disincentives. Furthermore, according to these officials, the role of the government in carrying out this policy was to create market incentives and remove barriers to competition; the role of the private sector was to fund the deployment of broadband. Accordingly, FCC, OSTP, and NTIA officials told us they took a number of steps to open markets and encourage competition. OSTP officials told us their agency has played the leading role in crafting and coordinating the administration’s broadband policy, including federal efforts to support new wireline and wireless broadband technologies. Moreover, OSTP has recommended policies to make additional spectrum available for new wireless broadband technologies. In addition, FCC, through a number of proceedings, classified broadband Internet access (regardless of the platform) as an information service. This classification reduces regulatory requirements applicable to broadband, which FCC stated would encourage broadband deployment and promote local competition. NTIA also took action to encourage broadband deployment by increasing the amount of spectrum available for advanced services and by clearing away regulatory obstacles to promote investment. Under this market-based policy, broadband infrastructure has been extensively deployed in the United States. Representatives of broadband providers told us this market-based approach to deployment has encouraged investment in broadband infrastructure and has been instrumental in getting this technology deployed to most of the homes in the United States. 
Although a precise assessment of broadband deployment in the United States is not possible because of data limitations, federal officials and industry representatives estimate that about 90 percent of American homes now have access to broadband. However, gaps remain, primarily in rural areas, because the market does not support private broadband infrastructure investment in low-density areas. For example, officials from several states said that rural areas in their states often lack broadband service. Representatives of both a provider association and a consumer organization told us that these areas lack broadband infrastructure because they offer little profit potential. To ensure broadband access to all Americans, in the Food, Conservation, and Energy Act of 2008 (the Farm Bill), Congress required FCC to develop, in consultation with the Secretary of Agriculture, a comprehensive rural broadband strategy. FCC must submit this strategy in a report to Congress by the end of May 2009. The report is to include, among other things, recommendations on how to coordinate federal rural broadband initiatives and how federal programs can best respond to rural broadband requirements and overcome obstacles that currently impede rural broadband deployment. In March 2009, FCC and NTIA officials told us that the federal policy on broadband deployment is changing as a new administration and Congress form their telecommunications agenda and as federal agencies work to implement recent legislation. As evidence of this change in focus, FCC and NTIA officials highlighted the new funding and responsibilities the Recovery Act has given to federal agencies to increase broadband availability, including developing a national broadband plan. The Recovery Act broadband provisions will be discussed later in this report.
Eleven federal programs administered by six federal agencies help fund telecommunications infrastructure deployment, but just 2 of these programs—the Rural Broadband Access Loans and Loan Guarantees program and the Community Connect Grant program—focus specifically on broadband infrastructure deployment. Both programs are administered by the Department of Agriculture’s Rural Development Utilities Program (RDUP). In 2008, these 2 programs provided a combined total of about $300 million for broadband infrastructure deployment. The remaining 9 programs provided over $7 billion for the deployment of various types of telecommunications infrastructure, including broadband, in 2008. However, because these 9 programs fund telecommunications infrastructure deployment generally, and not broadband specifically, the responsible federal agencies do not systematically track the amount of funding provided for broadband infrastructure deployment. Most of these 11 federal programs focus on helping deploy telecommunications infrastructure, including broadband, to rural areas. For example, the largest program at FCC, the Universal Service High Cost program, and the largest program at RDUP, the Telephone Loans and Loan Guarantees program, help incumbent local exchange carriers pay for the installation of and upgrades to telecommunications infrastructure, such as poles, lines, and switches, in rural areas. Table 2 provides additional information about all 11 programs. Although several federal programs provide funding for the deployment of telecommunications infrastructure, including broadband, there are processes and procedures in place to help coordinate agency efforts. One of these is the Office of Management and Budget’s financial status report form, which must be completed by all applicants for federal funding and requires applicants to disclose sources of funding. Another is the agency application process, such as the one used by the U.S.
Department of Agriculture (USDA)/RDUP, which states that applicants must list on their application all sources of federal funding they are currently receiving. Agencies also work closely together, keeping each other informed of current programs and applicants. For example, officials at the Economic Development Administration (EDA) told us that EDA coordinates with RDUP to help connect broadband infrastructure deployed in rural areas, which RDUP can fund, with infrastructure in more urban areas, which EDA funds because RDUP is prohibited from supporting such areas. Another example of cooperation between agencies is evident in agency Web sites. One site dedicated to broadband opportunities in rural America is a joint initiative of FCC and USDA. This site, hosted by FCC, lists programs overseen by USDA as well as FCC, both of which provide funding for broadband deployment in rural areas. Another Web site, used by the Appalachian Regional Commission, provides information about the numerous federal agencies with which the Commission works in the process of administering grants. In addition to these 11 programs that fund the deployment of telecommunications infrastructure, other federal programs fund various aspects of broadband technology or use but do not specifically support the deployment of infrastructure. For example, the Department of Education as well as the Institute of Museum and Library Services have programs that provide financial assistance for telecommunications development, but program officials told us these programs are used to develop training for using broadband or to purchase content requiring broadband access, not for broadband deployment. (App. III provides information on these other federal programs.) Finally, other federal agencies fund broadband infrastructure deployment, but this infrastructure is not for public access. For example, the Department of Defense developed its own nonpublic broadband communications network.
Industry stakeholders credit federal programs with helping to increase the deployment of broadband infrastructure throughout the United States. In particular, stakeholders noted that FCC’s Universal Service High Cost program and its Universal Service Schools and Libraries (E-Rate) program, as well as all of RDUP’s loans and grants programs, have been critical in increasing broadband deployment, especially in rural areas. For example, one industry representative credited FCC’s Universal Service High Cost program with helping to finance fiber deployment in rural areas; two industry representatives credited RDUP’s programs with helping to deploy broadband, with one crediting RDUP’s programs with increasing broadband deployment by lowering broadband costs. State officials we interviewed expressed similar views on these programs. For example, Arkansas officials said that federal assistance from RDUP had been useful in deploying broadband to rural and economically challenged areas of their state. Despite the gains achieved through these programs, provider representatives and consumer advocates both told us that additional federal investment—through such mechanisms as loans, grants, or tax incentives—will likely be required to make broadband universally available. Industry representatives estimate that roughly 90 percent of Americans now have access to broadband at home, work, or through other community access points. However, getting broadband to the remaining 10 percent will be expensive, primarily because they live in rural areas. Representatives of provider companies told us that the cost of deploying broadband infrastructure in rural, low-density areas is the reason some homes do not have access. According to one representative, providing wireline service to the last 5 percent of homes will be too expensive; in low-density areas, he said it would make more sense to provide service via some type of community access program or wireless infrastructure.
Although a lack of detailed information on the current state of deployment makes it difficult to determine the costs of deploying broadband infrastructure to unserved or underserved areas, estimates range from under $10 billion to over $30 billion. Several factors can influence the cost of deployment, including the terrain, speed of the service provided, and technology employed (e.g., wireline or wireless technology). Because companies may not earn a sufficient return on their investment, some industry representatives and state Chief Information Officers (CIOs) told us the federal government would likely need to subsidize broadband deployment to certain unserved or underserved areas to achieve universal access. Additional federal investments in broadband deployment, however, do not necessarily guarantee increased adoption. Representatives from four organizations that provide broadband told us that between 80 percent and 90 percent of the residences in their service areas had access to broadband, but fewer than 60 percent subscribed; for some providers, the subscribership rate was less than 40 percent. A recent study on broadband subscribership found similar patterns. Specifically, the Pew Internet and American Life Project found that 75 percent of Americans use the Internet; 57 percent use the Internet at home through broadband, 9 percent use the Internet at home through dial-up connections, and 8 percent use the Internet from work or the library. The report also found that some Americans, particularly elderly or low-income persons, choose not to use the Internet, even when broadband technology is available. The Pew report identified several reasons why people choose not to use the Internet, including cost and lack of interest. The Recovery Act provides $7.2 billion to increase broadband availability in the United States and establishes universal access to broadband capability as a national goal.
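The distinction above between availability (homes passed by infrastructure) and adoption (homes that subscribe) can be sketched numerically. The service-area figures below are hypothetical, chosen only to mirror the 80–90 percent access and sub-60 percent subscribership rates that providers reported; they are not drawn from the report's data.

```python
# Hypothetical service-area figures (not actual provider data):
# homes passed by broadband infrastructure vs. homes subscribing.
areas = {
    "Area A": {"homes": 10_000, "with_access": 9_000, "subscribing": 5_400},
    "Area B": {"homes": 8_000, "with_access": 6_800, "subscribing": 3_000},
}

for name, a in areas.items():
    # Availability: share of all homes that infrastructure reaches.
    access_rate = a["with_access"] / a["homes"]
    # Adoption: share of reachable homes that actually subscribe,
    # so high availability can coexist with low subscribership.
    adoption_rate = a["subscribing"] / a["with_access"]
    print(f"{name}: access {access_rate:.0%}, adoption {adoption_rate:.0%}")
```

Measured this way, further deployment funding raises the access rate but leaves the adoption rate untouched, which is the report's point that investment alone does not guarantee increased subscribership.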
More specifically, the Recovery Act provides funding for (1) NTIA to develop a broadband inventory map; (2) FCC to develop a national broadband plan; (3) NTIA, in consultation with FCC, to establish a grants program—referred to as the Broadband Technology Opportunities Program—to expand broadband services to rural and underserved areas and improve access to broadband by public safety agencies; and (4) RDUP to issue loans, loan guarantees, and grants to increase rural broadband availability. The Recovery Act further requires that FCC, in developing the national broadband plan, include benchmarks, a detailed strategy for achieving affordable broadband service, and an evaluation of the progress of projects funded through the Recovery Act. Although the Recovery Act assigns lead responsibilities among the agencies for these different broadband initiatives, these responsibilities are not mutually exclusive. The agencies will need to take each other’s efforts into account while carrying out their individually assigned tasks. For example, NTIA’s broadband inventory data will enable FCC to identify the areas with the largest unserved or underserved populations, allowing FCC to tailor the plan it develops accordingly. Given their overlapping responsibilities, it will be important for FCC, RDUP, and NTIA to coordinate their efforts. We have previously reported on the importance of coordinating federal efforts, especially when these efforts target the same population, to prevent duplication and fragmentation of effort. This potential for overlap and fragmentation underscores the importance of the federal government developing the capacity to coordinate crosscutting program efforts more effectively.
Furthermore, we have noted that agencies can enhance and sustain their collaborative efforts by developing a strategy that includes necessary elements for a collaborative working relationship, such as defining and articulating a common outcome; identifying and addressing needs by leveraging resources; agreeing on roles and responsibilities; establishing compatible policies, procedures, and other means to operate across agency boundaries; and developing mechanisms to monitor, evaluate, and report on results. In commenting on a draft of this report, OSTP stated that the current administration recognizes the need for extensive coordination among the agencies. A number of the OECD nations that lead the United States in subscribership have broadband policies that are more detailed than the U.S. policy and often include timelines, action plans, and some performance metrics. For example: South Korea’s 2006 E-Korea Master Plan established a goal that every household, regardless of income, is to be equipped with access to the Internet, with a minimum transmission speed of 1 Mbps. The plan created the following objectives: (1) maximize the ability of all citizens to use information and communication technologies to actively participate in the information society, (2) strengthen global competitiveness, (3) realize a smart government structure with high transparency and productivity by increasing the use of information and communication technologies, (4) facilitate continued economic growth by promoting the information technology industry and advancing the information structure, and (5) become a leader in the global infrastructure by taking a major role in international cooperation.
South Korea’s plan also established timelines for online services to be expanded to include all civil services and customized digital civil services by 2006, policy plans to achieve a 90 percent penetration rate for the entire population by 2006, and an evaluation system that measured the information utilization and communications technology needed to meet those objectives. Embassy officials noted that as of 2008, 99.82 percent of households in Korea had broadband access. Finland’s National Broadband Strategy calls for making broadband available to 93 percent of the country’s residents by 2009 and established the following goals: (1) promote competition within and between all communications networks, (2) promote the provision of electronic services and content to stimulate demand for broadband services, and (3) continue and develop special support measures in those areas in which there is insufficient demand for the commercial supply of broadband facilities. Finland’s written policy also identified 50 individual measures with timelines and responsible agencies for use as metrics for assessing progress in achieving the defined goals. For example, the Ministry of Education was responsible for ensuring that all schools have access to reasonably priced and efficient telecommunications by 2008. Embassy officials noted that except for the most remote schools in the far north, all schools have broadband access. In contrast, the current U.S. policy, which is articulated in multiple sources, does not include performance measures and an action plan for implementation. The attributes of the other nations’ written policies align with the framework set forth by the Government Performance and Results Act of 1993 (GPRA). GPRA stresses the importance of having clearly stated objectives, strategic and performance plans, goals, performance targets, and measures in order to improve a program’s effectiveness, accountability, and service delivery.
Specifically, performance measures allow an agency to track its progress in achieving intended results. Performance measures also can help inform management decisions about such issues as the need to redirect resources or shift priorities. In addition, stakeholders, such as telecommunication providers and consumer groups, can use performance measures to hold agencies accountable for results. In commenting on a draft of this report, OSTP said it was working with several other agencies to develop such metrics. Several countries, such as South Korea, Canada, and Sweden, have provided financial support to spur broadband deployment in rural or underserved areas, provided incentives to private companies to build networks, and undertaken a number of efforts to increase broadband subscribership and digital literacy. For example, the South Korean government established several agencies to promote broadband access in both the public and the private sector by, for instance, providing training to all citizens, including the elderly and disabled, to increase their “digital literacy” (i.e., knowledge needed to use the Internet). Canada, in 2002, provided support for rural access through the Broadband for Rural and Northern Development (BRAND) program, with funding of $80 million to eligible communities for broadband infrastructure projects. BRAND recommended that the government complement market forces with well-targeted government initiatives, particularly focusing on communities in areas that the market is unlikely to serve. Similarly, Sweden provided subsidies for broadband infrastructure development through grants and tax relief, including funding for rural broadband deployment. In addition, the Swedish government increased demand for broadband through digital literacy programs for small and medium-sized businesses, libraries, and schools.
Officials from 48 states and the District of Columbia reported wide variation in their approaches to increasing the level of broadband deployment in their states. More than half of the state CIOs (or their designees) we spoke with told us they were aware of gaps in broadband deployment within their states. To address these gaps, CIOs said they were considering or had taken a variety of actions, including mapping, planning, and allocating funds. Mapping broadband deployment. Twelve state CIOs reported that their states have mapped broadband deployment, and 2 of these states, California and Massachusetts, have each mapped both the speed and the availability of broadband in their state and placed the information on their state’s Web site. CIOs from another 13 states told us they were planning to map their states in the near future. Developing broadband deployment plans. Twelve state CIOs told us their states have publicly available broadband deployment plans, some of which include strategies to increase deployment. For example, Utah’s plan provides grants to providers to increase the deployment of broadband in rural areas. Vermont has created the Vermont Telecommunications Authority, designed to build public-private partnerships with service providers, and is working on cellular and broadband models with the goal of 100 percent access by 2010. Lastly, Maryland has defined regions of the state in need of broadband and has provided some funding to the Maryland Broadband Cooperative for the installation of fiber backbone infrastructure. In addition to these existing plans, CIOs from 6 states said they are in the process of developing broadband deployment plans. Allocating funds for broadband deployment. Fourteen state CIOs told us their states had provided some type of financial support to local providers, state cooperatives, or state agencies for broadband deployment, ranging from bonds to grants to appropriations from state budgets. 
In addition, some states have provided tax incentives to local providers for the provision of broadband, particularly in unserved or underserved areas. For example, Mississippi provides investment tax credits to those companies investing in the state, ranging from 5 percent to 15 percent over 10 years, and gives the highest credits for investment in the least populous areas of the state. Stimulating demand for broadband. CIOs in several states expressed concern about the low level of broadband subscribership in their states and have taken action to stimulate demand. For example, Nebraska is providing information and training to people in rural communities using the Nebraska Business Information Technology mobile classroom for high-speed technology education. South Carolina, to encourage broadband subscribership, has a program to distribute laptops among students in grades 9 through 12 and also offers computer training in its continuing education classes. With extensive private-sector investment and minimal government intervention, some type of broadband infrastructure has been deployed to approximately 90 percent of U.S. households. Bringing this infrastructure to the remaining unserved or underserved regions will, by most estimates, cost tens of billions of dollars and will likely require federal investment because of the low profit potential in these areas. The recently enacted American Recovery and Reinvestment Act establishes universal access to broadband as a goal and provides federal funding to RDUP and NTIA for grants and loans, to NTIA for mapping broadband infrastructure, and to FCC for developing a national plan for broadband deployment. These efforts will help guide federal involvement in deploying broadband in the coming years. Additionally, the efforts complement each other. 
NTIA’s data will allow all agencies to identify and cost-effectively target federal funds to the areas with the largest unserved or underserved populations and will inform the plan developed by FCC. The Recovery Act requires that the national broadband plan include some of the elements we found in written policies of OECD nations with higher broadband subscribership, including goals and benchmarks. To achieve transparency and accountability in the use of federal funds, FCC will need to include additional elements, such as timelines, specific performance measures, and clearly defined roles and responsibilities for the responsible federal agencies. Increasing accountability for achieving intended results is especially important given the potential costs of expanding broadband deployment to currently unserved or underserved areas. To increase transparency and accountability for results, we recommend that the Chairman of FCC, in developing the national broadband plan:

- consult the Secretary of Agriculture and the Assistant Secretary of Commerce and, at a minimum, specify performance goals and measures for broadband deployment, including time frames for achieving the goals, and
- work with the Secretary of Agriculture and the Assistant Secretary of Commerce to define the roles and responsibilities for each of these agencies in carrying out the plan.

We provided a draft of this report to FCC, the Department of Commerce, OSTP, and the Department of Agriculture for their review and comment. FCC and the Department of Commerce provided written comments, which are reprinted in appendixes IV and V, respectively. Both agencies emphasized the current administration’s efforts to bring broadband technology to all Americans and discussed the role of the Recovery Act in realizing this goal.
In its written comments, FCC recognized the need for a more definitive policy and agreed with our recommendations that performance measures and greater coordination to define roles and responsibilities are important to its implementation. In its written comments, the Department of Commerce emphasized that it is working closely with the Department of Agriculture and FCC to ensure the success of the President’s broadband initiatives and noted that, to some extent, the Recovery Act defined the roles and responsibilities of each agency involved in the development and implementation of a national broadband deployment plan. We recognize in the report that the Recovery Act assigns lead responsibilities to the agencies for different broadband initiatives; however, given that these responsibilities are not mutually exclusive, we continue to believe further delineation of the roles and responsibilities is warranted. FCC, the Department of Commerce, and OSTP, through the National Economic Council, provided technical comments, which we incorporated as appropriate. The Department of Agriculture responded through RDUP that it did not have any comments on the draft report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairman of the Federal Communications Commission and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and major contributors to this report are listed in appendix VI.
Although the Organisation for Economic Co-operation and Development’s (OECD) rankings are an important source of information on the status of broadband in many countries, OECD is not the only organization that measures broadband deployment and subscribership, and the OECD metric we have discussed—subscribership per 100 inhabitants—is not the only available metric. Other ranking organizations include the Information Technology and Innovation Foundation (ITIF) and Web Site Optimization, and their metrics include the percentage of households that subscribe to broadband and the percentage of households that have access to broadband. In addition, OECD uses other metrics to assess the status of broadband in many countries, such as broadband affordability and download speeds. Figure 1 compares broadband rankings for the United States and other OECD countries. The figure includes OECD’s second quarter 2008 rankings of subscribership per 100 inhabitants and ITIF’s and Web Site Optimization’s rankings. The figure shows that while the United States ranks 15th in the number of subscribers per 100 inhabitants, it ranks 10th and 11th in the other reports in the percentage of households that subscribe to or have access to broadband. As the figure indicates, countries’ rankings vary with the metric used. For example, while Japan ranks second in ITIF’s composite score of subscribership, speed, and price, it places 17th in OECD’s June 2008 ranking of subscribership per 100 inhabitants. Similarly, South Korea, which ITIF ranks first, with 93 percent household penetration, is 7th in OECD’s June 2008 ranking of subscribers per 100 inhabitants. In an April 24, 2007, letter to OECD, U.S. 
Ambassador David Gross took issue with the methodology on which OECD’s new ranking was based, particularly because it does not include people who gain access to broadband services through multiple platforms and access points, such as college students and others who use “Wi-Fi hotspots.” To determine the current federal broadband policy, we interviewed officials at the Office of Science and Technology Policy (OSTP), the National Telecommunications and Information Administration (NTIA), and the Federal Communications Commission (FCC), and reviewed recent reports by FCC and NTIA. To learn about the broadband policies of those countries that the OECD, in June 2008, ranked ahead of the United States in broadband subscribership per 100 residents, we contacted each country’s embassy in the United States. We requested information from embassy officials on whether their country’s current broadband policy included the following: a written policy, a timeline, an action plan, goals, and performance measures. We selected these items because the Government Performance and Results Act of 1993 (GPRA) emphasizes these elements as important for the effective and efficient management of government programs. To determine the principal federal programs that support the deployment of broadband infrastructure, we reviewed a Congressional Research Service (CRS) report to Congress, Broadband Internet Access and the Digital Divide: Federal Assistance Programs, updated June 4, 2008, which lists federal domestic assistance that can be associated with telecommunications development, including broadband deployment. This list includes 11 federal agencies and 23 federal programs. After an initial review of this list and some preliminary audit work, we reduced this list to 19 programs administered by a total of 8 federal agencies. 
We interviewed federal officials at all 8 agencies listed by CRS and reviewed information about their programs and determined that 5 agencies and commissions overseeing a total of 10 programs specifically fund the deployment of telecommunications infrastructure, including broadband infrastructure. To obtain various stakeholders’ views on how federal programs have affected broadband infrastructure deployment, we interviewed officials of associations that represented wireless providers and telecommunications and cable companies, large and small, urban and rural. We also interviewed officials of organizations representing consumers, including those who are economically disadvantaged. For both provider and consumer representatives, we developed and used sets of questions about their views on current federal policy and programs, the current status of broadband deployment and subscribership, the level of competition, the reasons for the lack of access to broadband in some areas, and suggestions for improvements in the current federal programs. The organizations and associations whose representatives we interviewed are as follows:

- Alliance for Public Technology (APT)
- American Cable Association
- Connected Nation
- Consumer Federation of America
- National Association of State Utility Consumer Advocates (NASUCA)
- One Economy
- Organization for the Promotion and Advancement of Small Telephone Companies (OPASTCO)
- Pew Internet Project
- Rural Independent Competitive Alliance (RICA)
- The Wireless Association (CTIA)
- Wireless Internet Service Providers Association (WISPA)

To learn the states’ views on the federal government’s efforts to increase broadband infrastructure deployment as well as actions the states have taken to encourage broadband deployment, we developed a set of questions in consultation with GAO methodologists and used them to interview each state’s Chief Information Officer (CIO) or designee. We interviewed the CIOs in 48 of the 50 states and the District of Columbia.
Two states were unavailable because of internal issues. We conducted these interviews from August 15, 2008, until February 6, 2009, and sought information on state officials’ views on the current federal broadband policy and programs, how they could be improved, and what actions the state governments had taken to increase broadband deployment. We selected the CIOs as the most knowledgeable source of information about state broadband activities based on our understanding that broadband is not regulated by state utility commissions and our conversation with representatives of the National Association of State Chief Information Officers. We conducted this performance audit from March 2008 through May 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To determine the principal federal programs that support the deployment of broadband infrastructure, we reviewed the CRS report that identifies federal domestic assistance that can be associated with telecommunications development, including broadband deployment. This report identified a total of 23 programs administered by 11 agencies. Based on information in the CRS report and initial conversations with agency representatives, we removed 7 programs and 3 agencies: we determined that 2 programs should not be included because they did not provide funding for telecommunications infrastructure that could be accessed by any member of the public; we eliminated 3 more programs because the agencies told us they no longer fund any telecommunications infrastructure; and, lastly, we removed 2 programs by subsuming them into another program, which we added at the advice of the agency.
We then added 6 more programs, for a total of 7 additions (including the program added earlier) to the original list provided by CRS. This left a total of 23 programs administered by 8 agencies. We interviewed federal officials at all 8 agencies and examined program documentation to determine whether these programs provide financial assistance for broadband deployment. Based on our analysis, we determined that 11 programs administered by 6 agencies do provide such funding, as shown in table 2 in the main report, and 12 programs administered by 3 agencies do not (IMLS programs are listed in both tables). Table 3 identifies the agencies and programs that do not fund telecommunications infrastructure. In addition to the contact named above, Nikki Clowers and Faye Morrison, Assistant Directors; Stephen Brown; Elizabeth Curda; Sharon Dyer; Kevin Egan; Elizabeth Eisenstadt; David Hooper; Hannah Laufe; Sara Ann Moessbauer; Josh Ormond; Madhav Panwar; and Nancy Zearfoss made key contributions to this report.
The United States ranks 15th among the 30 democratic nations of the Organisation for Economic Co-operation and Development (OECD) on one measure of broadband (i.e., high-speed Internet) subscribership. The Federal Communications Commission (FCC) has regulatory authority over broadband, and several federal programs fund broadband deployment. This congressionally requested report discusses (1) the federal broadband deployment policy, principal federal programs, and stakeholders' views of those programs; (2) how the policies of OECD nations with higher subscribership rates compare with U.S. policy; and (3) actions the states have taken to encourage broadband deployment. To address these objectives, GAO analyzed the broadband policies of the United States and other OECD nations, reviewed federal program documentation and budgetary information, and interviewed federal and state officials and industry stakeholders. According to federal officials, the federal approach to broadband deployment is focused on advancing universal access. Federal officials said that historically the role of the government in carrying out a market-driven policy has been to create market incentives and remove barriers to competition, and the role of the private sector has been to fund broadband deployment. Under this policy, broadband infrastructure has been deployed extensively in the United States. However, gaps remain, primarily in rural areas, because of limited profit potential. Eleven federal programs help fund telecommunications infrastructure deployment, particularly in rural areas, and two of these programs, administered by the Department of Agriculture's Rural Development Utilities Program (RDUP), focus specifically on broadband infrastructure deployment. 
Industry stakeholders credit federal programs with helping to increase broadband deployment, particularly in rural areas, but told GAO that because of the high cost and low profit potential of providing broadband services in rural areas, the federal government will likely need to provide additional funding to achieve universal access. The American Recovery and Reinvestment Act of 2009 provides more than $7 billion to the Department of Commerce's National Telecommunications and Information Administration (NTIA), FCC, and RDUP, to map broadband infrastructure in the United States, develop a plan for broadband deployment, and issue loans and grants to fund broadband access and availability in rural areas. This funding will greatly increase the potential for achieving universal access, but overlap in responsibilities for these new broadband initiatives makes coordination among the agencies important to avoid fragmentation and duplication. Current administration officials said they are still formulating their telecommunications agenda. In comparison to the policies of several other OECD countries with higher broadband subscribership rates per 100 inhabitants, the U.S. policy lacks elements identified by the Government Performance and Results Act of 1993 as essential to achieving effective and efficient policy outcomes. Specifically, according to officials of these countries' governments, several of the OECD nations with higher rankings have written broadband policies, action plans, goals, and performance measures. A number of these other countries also have provided financial support, created financial incentives, or taken other steps to promote broadband. In interviews with state officials, GAO learned that states vary in their actions to encourage deployment. Officials in more than half the states cited gaps in broadband deployment and said their states were considering or had taken actions to address these gaps.
Officials in 12 states said they had mapped their states and 13 more said they had plans to map; officials in 12 states said they have broadband deployment plans; and officials in 14 states said they have provided some type of financial support for broadband deployment.
In recent years it has become clear that past fire suppression policies have not worked as effectively as was once thought. In fact, they have had major unintended consequences, particularly on federally owned lands. For decades the federal wildland fire community followed a policy of suppressing all wildland fires as soon as possible. As a result, over the years, the accumulations of brush, small trees, and other hazardous vegetation (underbrush) in these areas increased substantially. Since about one-third of all land in the United States is federally owned and consists largely of forests, grasslands, or other vegetation, the widespread buildup of this underbrush has created a national problem. Today, when a fire starts on federal lands, accumulated underbrush can act as fuel that leads to larger and more intense fires than would otherwise be the case. This underbrush also causes fires to spread more rapidly. This combination of factors greatly heightens the potential for fires to become catastrophic. As several recent studies have pointed out, without changes in the way federal agencies prepare for and respond to wildland fires, communities that border fire-prone lands—commonly known as the wildland-urban interface—will increasingly be at risk for fire damage. The 2000 fire season demonstrated the impact of past fire policies. In that year, one of the most challenging on record, large numbers of intense and catastrophic fires frequently surpassed the fire-fighting capacities of federal, state, and local agencies. Many of these fires became the out-of-control disasters that routinely led national television news broadcasts as they threatened or damaged the communities in their path. While most of these fires occurred in western states, other areas of the country were also affected.
These recent experiences have led the fire-fighting community across the country and policymakers at all levels of government to call for federal action to help mitigate this growing threat. The Forest Service and Bureau of Land Management are the two major federal land management fire-fighting agencies. The Forest Service manages about 192 million acres of land in 155 national forests and grasslands, and the Bureau of Land Management manages about 264 million acres of land. Also involved are the National Park Service, the Bureau of Indian Affairs, and the Fish and Wildlife Service within the Department of the Interior. Together, these agencies are caretakers of over one-third of all the land in the United States. The five land management agencies developed the National Fire Plan. The plan consists of five key initiatives: Firefighting—ensure adequate preparedness for future fire seasons; Rehabilitation and Restoration—restore landscapes and rebuild communities damaged by wildland fires; Hazardous Fuel Reduction—invest in projects to reduce fire risk; Community Assistance—work directly with communities to ensure adequate protection; and Accountability—establish adequate oversight and monitoring for results. The plan is expected to be a long-term effort to be implemented over a 10-year period. While the agencies are to use funding provided under the National Fire Plan to implement all five aspects of the Plan, they are to use the majority of these funds to increase their capacity for fire-fighting preparedness and suppression by acquiring and maintaining additional personnel and equipment. Agencies use preparedness funding at the beginning of each fire season to place fire-fighting resources in locations where they can most effectively respond to fires that start on federal lands. Agencies use fire suppression funding to control and extinguish wildland fires.
This effort includes supporting fire-fighting personnel and equipment on the fire line and at the established fire camp. The Forest Service and Interior have not effectively determined the level of fire-fighting personnel and equipment they need to fight wildland fires. As a result, they may not be as prepared as they could be to manage fires safely and cost-effectively. In managing wildland fires, the agencies rely primarily on (1) fire management plans, which contain information on how wildland fires should be fought, and (2) computer planning models that use the planning information to identify the most efficient level of personnel and equipment needed to safely and effectively fight fires. Of the five major federal land management agencies, only the Bureau of Land Management has fully complied with the fire policy requirement that all burnable acres have fire management plans. Furthermore, even though the fire policy calls for the agencies to coordinate their efforts, the Forest Service and Interior use three different computer planning models to determine the personnel and equipment needed to achieve their fire-fighting preparedness goals. Moreover, none of the models focus on the goals of protecting communities at the wildland-urban interface or fighting fires that go across the administrative boundaries of the federal agencies. Since 1995, the national fire policy has stated that fire management plans are critical in determining fire-fighting preparedness needs, that is, the number and types of personnel and equipment needed to respond to and suppress fires when they first break out. Among other things, fire management plans identify the level of risk associated with each burnable acre, including areas bordering the wildland-urban interface, and set forth the objectives that a local forest, park, or other federal land unit is trying to achieve with fire.
The plans provide direction on the level of suppression needed and whether a fire should be allowed to burn as a natural event to either regenerate ecosystems or reduce fuel loading in areas with large amounts of underbrush. In addition, fire management plans provide information that is entered into computer planning models to identify the level of personnel and equipment needed to effectively fight fires and ultimately help to identify the funding needed to support those resources. As of September 30, 2001, 6 years after the national fire policy was developed, over 50 percent of all federal areas that were to have a fire management plan consistent with the requirements of the national fire policy did not have a compliant plan. These areas did not meet the policy’s requirements because they either had no plans or had plans that were out of date because, among other things, they did not address fighting fires at the wildland-urban interface. Table 1 shows that, as of September 30, 2001, the Bureau of Land Management was the only agency with all of its acreage covered by a fire management plan that was compliant with the policy. In contrast, the percent of units with noncompliant plans ranged from 38 percent at the Fish and Wildlife Service to 82 percent at the National Park Service. When we asked fire managers why fire management plans were out of date or nonexistent, they most often told us that higher priorities precluded them from providing the necessary resources to prepare and update the plans. Without a compliant fire management plan, some of these fire managers told us that their local unit was following a full suppression strategy in fighting wildland fires, as the current fire policy requires. That is, they extinguish all wildland fires as quickly as possible regardless of where they are, without considering other fire management options that may be more efficient and less costly.
Other fire managers told us that while their fire management plans were not in compliance with the national policy, they were still taking action to ensure their day-to-day fire-fighting strategy was following the more important principles outlined in the current policy, such as addressing the fire risks around communities in the wildland-urban interface. A January 2000 Forest Service report clearly demonstrates the importance of adequate fire management planning in determining the level of fire-fighting personnel and equipment needed. In this report, Forest Service officials analyzed the management of two large wildland fires in California that consumed 227,000 acres and cost about $178 million to contain. Fire managers at these fires did not have fire management plans that complied with the national fire policy. The report stated that a compliant fire management plan would have made a difference in the effectiveness of the suppression efforts. For example, without a fire management plan, the local fire managers were not provided with a “let burn” option. Had this option been available, it could have reduced the need for personnel and equipment for one of the fires and lowered total suppression costs. The Forest Service and Interior acknowledge the need to complete and update their fire management plans. Both agencies have initiatives underway in response to the renewed emphasis on fire management planning under the National Fire Plan. Specifically, the agencies are developing consistent procedures and standards for fire management planning that will assist local units in their efforts to have fire management plans that are in compliance with the national fire policy. The agencies are expected to have a strategy in place by the spring of 2002 for accomplishing this objective. However, developing the procedures and standards and incorporating them into fire management plans at all local units is not likely to occur until 2003, at the earliest.
Because it has been 7 years since the 1995 policy first directed agencies to complete their fire management plans, and the agencies have given the issue low priority, it is critical that the Forest Service and Interior complete this initiative as expeditiously as possible. Fire management planning decisions about the amount and types of personnel and equipment needed to reach a given level of fire-fighting preparedness are based on computer planning models that the Forest Service and the Interior agencies have developed. The national fire policy directs the agencies to conduct fire management planning on a coordinated, interagency basis using compatible planning processes that address all fire-related activities without regard to the administrative boundaries of each agency. This level of interagency coordination is not now being achieved because of historical differences in the missions of the five land management agencies. The Forest Service and Interior agencies are currently using three different computer planning models to identify the personnel and equipment needed to respond to and suppress wildland fires. As a result, each model reflects different fire-fighting objectives and approaches in calculating the level of resources needed to fight fires safely and cost-effectively in terms of its own mission and responsibilities. This disparate approach is inconsistent with the current national fire management policy, which calls upon the agencies to use a coordinated and consistent approach to fire management planning. More importantly, each of the models only considers the fire-fighting resources available on the lands for which the agency has direct fire protection responsibilities. According to agency officials, this approach has been the general practice for fire management planning. Fire protection of nonfederal lands, including lands in the wildland-urban interface that pose direct risks to communities, is not incorporated into the models.
Yet, as set out in the national fire policy, these are the areas that are currently the focus of determining appropriate fire preparedness levels. Moreover, since wildland fires do not respect agency or other administrative boundaries, the policy states that fire management planning must be conducted across federal boundaries, on a landscape scale. However, none of the models are currently designed to achieve this objective. Because the models focus only on federal lands and the personnel and equipment available at the local unit, they do not consider the fire-fighting resources that are available from state and local fire authorities. These resources could decrease the need for federal fire-fighting personnel and equipment in certain areas. As a result of these problems with the computer models, the Forest Service and Interior are not able to adequately determine the levels of fire-fighting personnel and equipment needed to meet fire-fighting policy objectives in the most cost-effective manner. The Forest Service and Interior have acknowledged our concerns and are reviewing how best to replace the three different computer planning models currently being used. A revised system for determining the resources needed would also help the agencies be responsive to congressional concerns. Past appropriations committee reports have directed the Forest Service and Interior to provide more detailed budget submissions on fire management planning and to base these submissions on common methods and procedures. These reports also directed the agencies to have a coordinated approach for calculating readiness, including consideration of the resources available from state and local fire authorities. The agencies are in the early stages of replacing the models with an interagency, landscape-scale fire planning and budget system that is expected to provide a single, uniform, and performance-based system for preparedness and fire management planning.
We are encouraged by this initiative but remain concerned over its implementation because the agencies have acknowledged that, even with aggressive scheduling, full implementation may take 4 to 6 years. Until then, fire management planning will not comply with current fire policy, will continue to be conducted based on each agency’s missions, and will remain focused within the boundaries of each local federal unit. While the agencies do not have a clear sense of the total resources they need to effectively conduct their fire-fighting activities, the Forest Service and Interior have nonetheless made progress in acquiring more fire-fighting personnel and equipment with the additional funding received under the National Fire Plan. However, as of September 30, 2001, they had not reached the full level of preparedness they had identified as necessary to carry out the objectives of the plan. Most of the Interior agencies are likely to reach their full level of preparedness in fiscal year 2002, while the Forest Service and the Fish and Wildlife Service will not reach this level until 2003 or later. Prior to the initiation of the National Fire Plan, the Forest Service and Interior estimated they were at about 74 percent and about 83 percent, respectively, of their desired preparedness levels. To increase these levels, the agencies needed to hire, develop, and support additional fire managers and fire fighters and to procure more fire-fighting equipment. The funding received in fiscal year 2001 is designed to help the agencies achieve these goals. The agencies are making good progress in hiring additional personnel. As of September 30, 2001, the Forest Service had filled about 98 percent of its needed positions and the Interior agencies, in aggregate, had filled over 83 percent of their positions.
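The staffing figures above are simple fill rates, positions filled divided by positions identified as needed. A minimal sketch of that calculation, using hypothetical position counts chosen only so the resulting percentages mirror those reported in the text:

```python
# Fill rate = positions filled / positions identified as needed.
# The position counts below are hypothetical, for illustration only;
# the percentages match those reported in the text.
def fill_rate(filled: int, needed: int) -> float:
    """Share of needed fire-fighting positions actually filled."""
    return filled / needed

forest_service = fill_rate(filled=4900, needed=5000)   # about 98 percent
interior = fill_rate(filled=2500, needed=3000)         # over 83 percent
print(f"{forest_service:.0%} {interior:.0%}")
```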
Because the availability of experienced fire-fighting personnel was limited and the agencies were competing for the same personnel in many cases, the agencies were not able to hire all of the fire-fighting personnel identified as needed in fiscal year 2001. The agencies have initiated new recruiting and outreach programs and expect to hire the remaining personnel they need by the 2002 fire season. Table 2 shows the status of the agencies’ efforts in acquiring personnel. Regarding equipment, by the end of fiscal year 2002, most of the Interior agencies are likely to have all the fire-fighting equipment they identified as needed for implementing the National Fire Plan. During fiscal year 2001, the Bureau of Land Management and Bureau of Indian Affairs ordered the equipment they needed, but about 31 percent of the equipment will not be delivered until fiscal year 2002. This specialized equipment, such as fire engines and water tenders, had to be built after contracting for its purchase, which delayed its delivery. The Forest Service and the Fish and Wildlife Service have made much less progress in purchasing the equipment they said they needed to achieve their fire-fighting preparedness goals. The Forest Service did not include in its budget request all of the necessary funds to procure equipment and pay for associated costs. Forest Service officials told us that this incomplete request was an oversight on their part. This underestimate of equipment and associated costs resulted in a total budget shortfall of about $101 million in fiscal year 2001, according to Forest Service estimates. Consequently, the agency has not been able to procure hundreds of pieces of fire-fighting equipment (fire engines, bulldozers, water tenders, and trucks) and associated supplies for the equipment or cover expenses for some other operating costs that are required if the agency is to reach its full level of fire-fighting preparedness.
Until this equipment is acquired, a few fire managers are taking measures to compensate for these shortcomings, such as contracting for needed equipment with state and private suppliers. According to the Forest Service, the agency may not attain the level of fire-fighting capacity it originally envisioned in the National Fire Plan until fiscal year 2003 at the earliest. Like the Forest Service, the Fish and Wildlife Service is not certain when it will get the equipment it identified as needed to implement the National Fire Plan. In October 2000, the agency did not take the opportunity it had to request funds for equipment to carry out the plan’s objectives. As a result, the agency did not have about $10 million it estimated needing to purchase 90 pieces of fire-fighting equipment it identified as necessary. According to Fish and Wildlife Service officials, they were not aware that they could request additional one-time funds to purchase more equipment. Fish and Wildlife Service officials also told us they have no plans to request additional funding for their equipment. In commenting on a draft of this report, the departments acknowledged that the full level of preparedness as identified under the National Fire Plan was not reached by the end of fiscal year 2001. They stated that the Forest Service and the Fish and Wildlife Service will reach this level in 2003 or early 2004. They also said that in order to maintain the full level of preparedness in 2003 and beyond, the funding level may need to increase to keep pace with inflation and new standards and requirements for crew safety, initial attack effectiveness, and direct and indirect management oversight and support, such as salaries, aviation contracts, and facility maintenance. Even though they have received over $800 million to increase their fire-fighting capacity, the Forest Service and Interior have not yet identified the results they expect to achieve with these additional resources.
It, therefore, will be difficult to determine the extent to which these additional personnel and equipment have increased the level of fire-fighting preparedness. Both the Forest Service and Interior recognize the need to develop methods for determining the impact of the hundreds of millions of dollars provided to increase fire-fighting capacity. To facilitate such accountability, both the Forest Service and Interior have developed performance measures. However, the measures do not focus on the results to be achieved and are not consistent among the agencies. The Forest Service’s performance measure is designed to provide information on the amount of personnel and equipment it has to respond to a fire. This information will only indicate the amount of resources the Forest Service is using to address its fire-fighting needs. It will not indicate whether the agency has improved the effectiveness of its fire fighting with the additional personnel and equipment. The Interior agencies, on the other hand, have a performance measure that focuses on the goals they expect to achieve with their fire-fighting resources. However, the performance measure they are using is not specifically tied to the increased fire-fighting resources provided under the National Fire Plan. Instead, the Interior agencies are using the same goal they had prior to receiving additional resources provided to implement the plan. Specifically, the Interior agencies’ objective is to contain 95 percent of all fires during initial attack. Even if the agencies’ performance measures were more results-oriented, they would only fulfill the requirements of the national fire policy if they were also consistent with each other. However, the measures are not consistent. The agencies were unable to provide us with a rationale for why the measures are not consistent.
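The Interior agencies' measure noted above, containing 95 percent of all fires during initial attack, reduces to a simple containment rate compared against a fixed goal. A minimal sketch of how such a measure would be evaluated, using entirely hypothetical fire counts:

```python
# Interior's stated goal: contain 95 percent of all fires during initial attack.
# The fire counts below are hypothetical, for illustration only.
GOAL = 0.95

def initial_attack_rate(contained: int, total: int) -> float:
    """Share of wildland fires contained during initial attack."""
    return contained / total

rate = initial_attack_rate(contained=1881, total=1980)  # hypothetical counts
status = "meets goal" if rate >= GOAL else "below goal"
print(f"{rate:.1%} {status}")
```

As the report notes, a rate like this says nothing about whether the additional Plan funding changed outcomes unless it is tracked against the pre-Plan baseline.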
The Forest Service and Interior acknowledge that the development of a common set of results-oriented performance measures is critical to implementing the National Fire Plan’s fire-fighting preparedness objectives. They are now working together to develop a common set of wildland fire management performance measures that will be results-oriented, measurable, valid, and connected to the goals contained in the National Fire Plan. However, agency officials estimate that the planned completion date for developing and implementing these measures will be late in fiscal year 2004—more than 4 years after the increased funding was provided. Until the implementation of the National Fire Plan in 2001, both the Forest Service and the Interior agencies used a similar method to account for their fire-fighting personnel costs. However, beginning in fiscal year 2001, the Forest Service changed its accounting method for these costs. As a result, the agencies do not now use a consistent approach for collecting and reporting on fire-fighting costs, which makes budget cost comparisons and analyses more difficult. When the Forest Service prepares its annual budget for wildland fire management activities, the costs for personnel normally assigned to managerial, administrative, and other staff positions in the fire program are budgeted for in the “Wildland Fire Preparedness” account. Personnel in these categories are also frequently assigned to help fight wildland fires during the fire season. When these staff were assigned to a wildland fire prior to fiscal year 2001, the first 8 hours of their workday (their base hours) were charged to the preparedness account where the funds were originally budgeted. Any additional time spent working on wildland fires above their base hours was charged to the “Wildland Fire Suppression” account.
However, starting in fiscal year 2001, the first year of the National Fire Plan, the Forest Service directed its personnel to charge all of their time to the suppression account when assigned to a wildland fire. According to the director of program and budget analysis, the Forest Service made the accounting change to better reflect the cost of wildland fire suppression. We have previously supported this type of accounting for personnel costs because it better tracks how these costs are actually incurred rather than as budgeted. The change will reduce costs charged to the Forest Service’s preparedness activities and increase costs charged to its suppression activities when compared with years past and with Interior’s accounting for its costs charged to similar activities. Because the Forest Service and Interior now use different methods of accounting for the cost of personnel assigned to wildland fires, it will now be much more difficult for the Congress and other decisionmakers to compare and analyze budget and cost information on the fire preparedness and suppression activities of the agencies at a national level. It is important to note that this accounting change will likely affect the Forest Service’s fire-fighting budgets in future years. Over time, this accounting change is likely to result in an overall increase in the cost of fighting wildland fires in the Forest Service. As more and more managerial and administrative personnel are assigned to fire suppression activities, the total costs for these activities will increase. Since suppression budgets are based on a 10-year rolling average of suppression costs, future suppression budgets will increase. This situation will also add to the difficulty of comparing and analyzing Forest Service and Interior fire activities over time.
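Because the suppression budget for a coming year is the average of the prior ten years' suppression costs, any costs newly shifted into the suppression account compound into future budget requests. A minimal sketch of that dynamic, using entirely hypothetical dollar figures (in millions):

```python
# Budgeting rule described in the text: the suppression budget request for a
# year is a 10-year rolling average of suppression costs.
# All dollar figures below are hypothetical (millions of dollars).

def suppression_budget(past_costs: list[float]) -> float:
    """10-year rolling average of past suppression costs."""
    recent = past_costs[-10:]
    return sum(recent) / len(recent)

costs = [600.0] * 10                 # a stable decade of suppression costs
print(suppression_budget(costs))     # 600.0

# If the accounting change shifts, say, $50M of base-hour personnel costs
# into the suppression account each year, the rolling average ratchets upward:
for _ in range(3):
    costs.append(suppression_budget(costs) + 50.0)
print(f"{suppression_budget(costs):.2f}")   # higher than 600.00
```

The sketch shows why the report expects the change to raise future suppression budgets: each inflated year re-enters the average used to set the next request.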
To effectively reduce the risk of catastrophic fire, the Forest Service and Interior are engaged in a long-term effort to reduce the large buildup of underbrush and other vegetative fuels that have accumulated to dangerous levels over the past several decades. This will ultimately reduce the number of large catastrophic fires that occur annually. However, until the Forest Service and Interior make progress in this area, it is even more critical to have adequate levels of personnel and equipment available to fight the intense, quick-spreading wildland fires that characterize current conditions in many areas. As the National Fire Plan and its underlying policy envision, these fire-fighting preparedness efforts will be much more effective if the agencies involved coordinate their efforts. The federal agencies have made progress in enhancing their fire-fighting capacity, but much work remains. Most fire management plans have yet to be updated so that they are consistent with current policy requirements. Until then, the coordinated approach to fire fighting called for in the National Fire Plan—having the agencies’ plans reach beyond individual administrative boundaries—will not be realized. Moreover, it may be 6 years before the agencies develop an integrated, more consistent planning and budget system that includes a single model that incorporates information from updated fire management plans. Without this system in place, the results of the models currently being used cannot be relied upon for effectively identifying fire-fighting personnel and equipment needs. While the agencies are developing these plans and a new planning and budgeting system, they cannot now measure the results achieved with their additional personnel and equipment. The agencies plan to have consistent, results-oriented performance measures in place by fiscal year 2004. Until then, the Congress and the public cannot readily compare results across agencies.
Accountability would be further enhanced if both the Forest Service and the Interior agencies were using the same accounting methods for collecting and reporting on fire preparedness and fire suppression costs. Since they are not, Congress and the public have no consistent basis for comparing or analyzing these costs or associated budget requests. For the most part, the agencies acknowledge the need for improvements in each of these areas and have plans to address them. We are concerned, however, that these improvements may not occur expeditiously. It has been 7 years since the establishment of the national fire policy, when the agencies first acknowledged the need to address many of these issues. Nonetheless, they are only now—with the impetus provided by the National Fire Plan—developing implementation plans and strategies for addressing them. Given this history and the added need to make certain that the substantial increase in funding that has come with the plan is used most efficiently, it is critical that the agencies be held accountable for following through on their plans for improvements. Ensuring that this occurs will require sustained monitoring and oversight by top agency officials and the Congress. If and when these improvements are completed, the agencies and the Congress will have a more credible basis for determining fire-fighting preparedness needs. In order to better meet the objectives of the National Fire Plan and improve the Forest Service’s and Interior’s ability to identify their fire-fighting preparedness needs, we recommend that the secretaries of agriculture and of the interior require the heads of their respective fire agencies to ensure that ongoing initiatives to address weaknesses in their preparedness efforts are fully implemented in a timely and consistent manner across the agencies.
In particular, the agencies need to ensure that fire management plans are completed expeditiously for all burnable acres and are consistent with the national fire policy; establish a single planning and budgeting system, applicable to all fire agencies, to determine fire-fighting personnel and equipment needs in accordance with up-to-date fire management plans; and develop performance measures identifying the results to be achieved with the personnel and equipment obtained with the additional funding provided under the National Fire Plan. We also recommend that the secretary of the interior require the Interior agencies to change their method for allocating and reporting fire-fighting personnel costs to one similar to the method now used by the Forest Service, to better reflect the cost of wildland fire suppression. We provided a draft of this report to the departments of agriculture and of the interior for review and comment. The departments provided a consolidated response to our report. They generally agreed with our recommendations to better identify their fire-fighting preparedness needs and provided additional information on the initiatives being taken. However, in commenting on our recommendation dealing with the development of performance measures to identify the results they are achieving under the National Fire Plan, the departments indicated they had already developed such measures. We disagree. The departments acknowledge elsewhere in their response that more work is needed to establish common performance measures, and recent meetings with department officials have indicated that agreement on common measures has not yet been obtained. 
In commenting on this report, the departments expressed concerns that our report (1) did not give the departments enough credit for the progress they have made to increase their fire-fighting capacity under the National Fire Plan; (2) suggests that by simply updating fire management plans, fire managers will then be allowed to implement “let burn” decisions; and (3) implies that allowing more fires to burn naturally will automatically provide greater public and fire fighter safety. With respect to the first issue, we acknowledge the difficulty of the departments’ tasks under the National Fire Plan and, as noted in the report, recognize that the agencies have made progress in increasing their fire-fighting capacity. We also agree it is important to look at results under the plan to place in proper perspective the issue of accountability in fire-fighting preparedness. However, 1 year after receiving $830 million in additional preparedness funding under the National Fire Plan in fiscal year 2001, the agencies are still putting out the same percentage of fires at initial attack. To us, it is reasonable to expect that with the substantial increase in preparedness funds and the increased resources that these funds allowed the agencies to acquire, the results achieved would have been greater than they were in the past year. Second, the departments stated that the full range of fire-fighting options outlined in a local unit’s fire management plan, including a “let burn” option, can only be used when the overall land management plan provides for them. In this regard, they noted that in many cases land management plans have not been updated to reflect the full range of fire-fighting options as outlined in fire management plans. As a result, they contend that until the land management plans are updated, the fire management plans that are out of date cannot be revised to include all fire-fighting options, such as a “let burn” option. 
However, according to the 2001 update to the national fire policy, “the existence of obsolete land management plans should not be reason for failure to complete or update Fire Management Plans.” Third, the departments stated that our report appears to state that allowing more fires to burn naturally will automatically provide greater public and fire fighter safety. We disagree. Our report states that fire management plans provide fire managers with direction on the level of suppression needed and whether a fire should be allowed to burn as a natural event to regenerate ecosystems or reduce fuel loading in areas with large amounts of underbrush and other vegetative fuels. Where appropriate, we have incorporated the departments’ position on the different issues discussed in the report. The departments’ comments appear in appendix II. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the secretary of agriculture; the secretary of the interior; the chief of the Forest Service; the directors of the Bureau of Land Management, National Park Service, and Fish and Wildlife Service; the deputy commissioner of the Bureau of Indian Affairs; the director of the Office of Management and Budget; and other interested parties. We will make copies available to others upon request. This report will also be available on GAO’s home page at http://www.gao.gov/. If you or your staff have any questions about this report, please contact me at (202) 512-3841. 
The overall objective of this review was to determine how the federal land management agencies (the Forest Service within the Department of Agriculture and the Bureau of Land Management, National Park Service, Fish and Wildlife Service, and Bureau of Indian Affairs within the Department of the Interior) prepare for wildland fires while meeting key objectives of the National Fire Plan. A primary objective of the plan is to ensure an adequate level of fire-fighting preparedness for coming fire seasons. Specifically, to assess the effectiveness of the agencies’ efforts to determine the amount of fire-fighting personnel and equipment needed, we reviewed the extent to which the agencies adopted fire management plans as required by the national fire policy and the types and scope of computer planning models that the agencies use to determine their desired level of fire-fighting preparedness. We discussed these issues with officials at the five agencies’ headquarters offices and at the National Interagency Fire Center, in Boise, Idaho; BLM state and district offices; selected national forests, national parks, and state offices; and the National Academy of Public Administration. We also obtained, reviewed, and analyzed supporting documentation, such as laws, regulations, policies, and reports on wildland fires. Table 3 shows the sites we visited. We selected these sites to (1) meet with National Interagency Fire Center officials and the Interior agencies’ wildland fire managers, who are located in Boise, Idaho; (2) obtain geographical dispersion of sites between eastern and western states, although more western sites were selected because more wildland fires occur in those areas; and/or (3) visit sites identified by agency officials as having recent fire history or as being good examples of fire-fighting preparedness. 
In addition, we selected more of the Forest Service’s sites than sites from other agencies because the Forest Service receives most of the fire-related funding. To determine the status of the agencies’ efforts to acquire additional fire-fighting resources, we contacted each of the five land management agencies to obtain information on the number of temporary and permanent positions acquired as of September 30, 2001, and compared this information with the number of positions needed to meet the agencies’ desired level of fire-fighting resources. We also obtained information from these agencies on the amount of fire-fighting equipment obtained with the increase in funding that they had identified as needed to carry out the objectives of the National Fire Plan. To determine the results that the agencies expected to achieve with their additional fire-fighting resources as determined through performance measures, we obtained documentation from the land management agencies and discussed with agency officials their management practices, including how they measure their progress in meeting fire-fighting preparedness objectives under the National Fire Plan. Finally, to determine whether the Forest Service and Interior were consistently reporting their fire-fighting personnel costs, we obtained information on the practices the agencies use to report their fire-fighting personnel costs. We compared any differences between the Forest Service and the Interior agencies on their practices in accounting for their fire-fighting preparedness funds. We conducted our work from February 2001 through January 2002 in accordance with generally accepted government auditing standards. In addition to those named above, Paul Bollea; Frank Kovalak; Paul Lacey; Carol Herrnstadt Shulman; and, in special memory, our colleague and friend, John Murphy made key contributions to this report.
Each year, fires on federal lands burn millions of acres and federal land management agencies spend hundreds of millions of dollars to fight them. Wildland fires also threaten communities adjacent to federal lands. The Departments of Agriculture (USDA) and the Interior, the lead federal agencies in fighting wildfires, jointly developed a long-term fire-fighting strategy in September 2000. Five federal land management agencies--the Forest Service, the Bureau of Land Management, the Bureau of Indian Affairs, the National Park Service, and the Fish and Wildlife Service--are working together to accomplish the plan's objectives. GAO found that the Forest Service and Interior have not effectively determined the amount of personnel and equipment needed to respond to and suppress wildland fires. Although the agencies have acquired considerably more personnel and equipment than were available in 2000, they have not acquired all of the resources needed to implement the new strategy. Despite having received substantial additional funding, the two agencies have not yet developed performance measures. The Forest Service simply measures the amount of fire-fighting resources it will be able to devote to fire fighting at each location, regardless of risk. Without results-oriented performance measures, it is difficult to hold the Forest Service accountable for the results it achieves. The Forest Service and the Interior agencies use different methods to report fire-fighting personnel costs--an approach that is not in keeping with policies requiring coordination and consistency across all aspects of fire management, including accounting for fire-related costs.
The Gramm-Leach-Bliley Act eliminated many of the legislative barriers to affiliations among banks, securities firms, and insurance companies. One of the expected benefits of expanded affiliation across industries was to provide financial institutions with greater access—by sharing information across affiliates—to a tremendous amount of nonpublic personal information obtained from customers through normal business transactions. This greater access to customer information is important to financial institutions wishing to diversify and may give customers better product information than they would otherwise have received. At the same time, there are increasing concerns about how financial institutions use and protect their customers’ personal information. Some financial industry observers have characterized the privacy provisions contained in GLBA as the most far-reaching set of privacy standards—pertaining to financial information and certain personal data—ever adopted by Congress. Title V of GLBA sets forth major privacy provisions under two subtitles, which apply to a wide range of financial institutions. Among other things, Subtitle A requires financial institutions to provide notices to their customers describing their privacy policies and practices and how information is disclosed to affiliates and nonaffiliated third parties. Financial institutions are required to provide consumers the opportunity to “opt out” of having their nonpublic personal information shared with nonaffiliated third parties, with certain exceptions. Subtitle A also limits the ability of financial institutions to reuse and redisclose nonpublic personal information about consumers that is received from nonaffiliated financial institutions. Subtitle B of GLBA makes it a crime for persons to obtain, attempt to obtain, or cause to be disclosed customer information from financial institutions by false or fraudulent means. 
Subtitle B provides for both criminal penalties and civil administrative remedies through FTC and federal banking regulatory enforcement. Subtitle B places the primary responsibility for enforcing the subtitle’s provisions with FTC. In addition, federal financial regulators are given administrative enforcement authority with respect to compliance by depository institutions under their jurisdiction. Under section 525 of Subtitle B, the banking regulators, NCUA, and SEC are required to review their regulations and guidelines and to make the appropriate revisions as necessary to deter and detect the unauthorized disclosure of customer financial information by false pretenses. Subtitle B contains five categories of exceptions to the prohibition on obtaining customer information by false pretenses. Specifically, there are exceptions for law enforcement agencies; financial institutions under specified circumstances, such as testing security procedures; insurance institutions for investigating insurance fraud; public data filed pursuant to the securities laws; and state-licensed private investigators involved in collecting child support judgments. Pretext calling is one common method used to fraudulently obtain nonpublic customer financial information from a financial institution. Pretext calling often involves an information broker—a company that obtains and sells financial information and other data about individual consumers—contacting a bank and pretending to be a customer who has forgotten an account number. Pretext callers may also pose as law enforcement agents, social workers, potential employers, and other figures of authority. The pretext caller then obtains detailed account data—often including exact balances and recent transactions—and sells that information to lawyers, collection agencies, or other interested parties. 
Perhaps more importantly, pretext calling can lead to “identity theft.” Generally, identity theft involves “stealing” another person’s personal identifying information—Social Security number, date of birth, mother’s maiden name, etc.—to fraudulently establish credit, run up debt, or take over existing financial accounts. The American Bankers Association (ABA) reported that its 1998 industry survey found that $3 out of $4 lost by a community bank to credit fraud was due to some form of identity theft. Consumers targeted by identity thieves typically do not know they have been victimized until the thieves fail to pay the bills or repay the loans. Identity thieves also buy account information from information brokers to engage in check and credit card fraud. A survey by the California Public Interest Research Group and Privacy Rights Clearinghouse found that fraudulent charges made on new and existing accounts in identity theft cases averaged $18,000. The Identity Theft and Assumption Deterrence Act of 1998 made identity theft a federal crime punishable, in most circumstances, by a maximum term of 15 years’ imprisonment, a fine, and criminal forfeiture of any personal property used or intended to be used to commit the offense. It is too soon to assess the efficacy and adequacy of the remedies provided for in Subtitle B of Title V of the Gramm-Leach-Bliley Act of 1999. As of March 31, 2001, federal regulatory and enforcement agencies had not taken any enforcement actions or prosecuted any cases under this law. Federal agencies have taken initial regulatory steps to ensure that financial institutions establish appropriate safeguards designed to protect customer information. Financial institutions are required to be in compliance with the new regulations by July 1, 2001. Lastly, we found that there are limited data available to indicate the prevalence of fraudulent access to financial information or pretext calling. 
As of March 31, 2001, FTC had initiated a number of nonpublic investigations targeting pretexters but had not fully prosecuted any cases for violations of the Subtitle B provisions that prohibit obtaining customer financial information through fraudulent methods. Thus, FTC officials told us that it was too soon to assess the efficacy and adequacy of the remedies of this law because they had not had any experience prosecuting under the statute. They stated that it would take at least 3 to 5 years before there would be sufficient case history to permit them to assess the usefulness of the statute. FTC officials stated that one key benefit of Subtitle B is that it clearly established pretext calling as a federal crime, making it easier for them to take enforcement actions against firms that use fraud to access financial information. Prior to the enactment of GLBA, FTC had undertaken one enforcement action against an information broker that was engaging in pretext calling. FTC pursued this case under its general statute, section 5(a) of the Federal Trade Commission Act, which provides that “unfair or deceptive acts or practices in or affecting commerce are declared unlawful.” One of the five FTC commissioners issued a dissenting statement because he felt pretext calling did not clearly violate FTC’s long-standing deception or unfairness standard. In June 2000, FTC settled the case; the settlement prohibited the broker from engaging in pretext calling and included a $200,000 payment, which was subsequently suspended on the basis of the defendants’ inability to pay. FTC reported to Congress that its staff began a nonpublic investigation in June 2000 to test compliance with Subtitle B provisions that prohibit the use of fraudulent or deceptive means to obtain personal financial information. 
On January 31, 2001, FTC issued a press release regarding its “Operation Detect Pretext.” As part of this operation, FTC’s staff had conducted a “surf” of more than 1,000 Web sites and a review of more than 500 advertisements in the print media for firms that offered to conduct financial searches. FTC reported that it had identified approximately 200 firms that offered to obtain and sell asset or bank account information about consumers. FTC stated that it had sent notices to these 200 firms on January 26, 2001, advising them that their practices must comply with GLBA’s restrictions as well as other applicable federal laws, including the Fair Credit Reporting Act. According to the press release, the notices also informed the firms that FTC would continue to monitor Web sites and print media advertisements offering financial searches to ensure that they complied with GLBA and all other applicable federal laws. As part of Operation Detect Pretext, FTC published a consumer alert entitled Pretexting: Your Personal Information Revealed that offers tips to consumers on protecting their personal information. On April 18, 2001, FTC filed suit to halt the operations of three information brokers who used false pretenses, fraudulent statements, or impersonation to illegally obtain consumers’ confidential financial information, such as bank balances, and sell it. The Department of Justice had not prosecuted any cases involving pretext calling as of March 31, 2001. Department officials told us that in their experience, pretext calling is typically a component of a larger fraud scheme. They stated that they would normally prosecute under the larger fraud schemes, such as mail, wire, or bank fraud. They supported the new legislation and felt it provided them with sufficient enforcement authority to address the full criminal activity for related bank fraud cases. 
They said it was premature to comment on the adequacy of the criminal penalties provided in the act because they had no experience in prosecuting cases under this statute. They believed it would likely take several years before they would have adequate case history under this law to make any suggestions concerning the remedies contained in Subtitle B. Officials from the federal banking agencies, SEC, and NCUA all agreed that it was too soon to assess the efficacy and adequacy of the remedies in Subtitle B. None of these agencies had taken enforcement actions against financial institutions for violations of Subtitle B—which prohibits using fraudulent means to obtain personal financial information. Federal banking officials told us that they did not anticipate that there would be many circumstances in which they would use this law against a financial institution, unless an officer or employee of a financial institution was involved in the fraud. They stated that the financial institutions are typically one of the “victims” of pretext calling because the cost of the related crimes—credit card fraud or identity theft—is often borne by the financial institutions. They told us that they felt they had sufficient enforcement authority to take action against a bank officer or employee involved in fraudulent activities prior to the passage of Subtitle B and did not believe the statute gave them any additional enforcement authority. However, they supported the legislation because it explicitly makes fraudulent access to financial information a crime. Subtitle B of GLBA requires the federal banking agencies, NCUA, SEC, or self-regulatory organizations, as appropriate, to review their regulations and guidelines and prescribe such revisions as necessary “to ensure that financial institutions have policies, procedures, and controls in place to prevent the unauthorized disclosure of customer financial information and to deter and detect” fraudulent access to customer information. 
As of April 2001, the federal banking agencies and NCUA were coordinating their efforts to update the guidelines on pretext calling that they issued to financial institutions in the latter part of 1998 and early 1999. The earlier advisory was jointly prepared by the federal banking agencies, Federal Bureau of Investigation, U.S. Secret Service, Internal Revenue Service, and Postal Inspection Service. The advisory alerted institutions to the practice of pretext calling and warned institutions about the need to have strong controls in place to prevent the unauthorized disclosure of customer information. According to federal banking agency officials, they had discussed updating the guidelines to provide more information on identity theft and its relationship to pretext calling, but had not issued the updated guidelines as of April 2001. In addition, NCUA and the federal banking agencies issued guidelines for financial institutions relating to administrative, technical, and physical safeguards for customer records and information on January 30, 2001, and February 1, 2001. As discussed earlier, Subtitle A of GLBA requires the federal banking regulatory agencies, FTC, NCUA, SEC, and the state insurance regulators to establish standards for safeguarding customer information for the institutions that they regulate. Among other things, these standards are to establish safeguards to protect against unauthorized access to or use of such records or information that could result in substantial harm or inconvenience to any customer. For example, the guidelines issued by the banking agencies and NCUA require institutions to have controls designed to prevent employees from providing customer information to unauthorized individuals who may seek to obtain customer information through fraudulent means. 
Financial institutions under the jurisdiction of the federal banking agencies and NCUA are required to put in place by July 1, 2001, information security programs that satisfy the requirements of the guidelines. Officials at the bank regulatory agencies and NCUA told us that they plan to include the new guidelines for safeguarding customer financial information in their examination procedures. On June 22, 2000, SEC adopted regulations that require, among other things, brokers, dealers, investment companies, and registered investment advisors to adopt policies and procedures that address administrative, technical, and physical safeguards for the protection of customer records and information. These policies and procedures must be reasonably designed to (1) ensure the security and confidentiality of customer records and information, (2) protect against any anticipated threats or hazards to the security or integrity of customer records and information, and (3) protect against unauthorized access to or use of customer records or information that could result in substantial harm or inconvenience to any customer. SEC stated that it had conducted preliminary examinations of securities firms’ efforts to comply with these requirements and planned to include firms’ compliance with the regulations as a formal component of its examination program as of July 2001—the mandatory compliance date. SEC did not plan to develop additional guidance on pretext calling because it concluded that its regulation on safeguarding customer financial information would satisfy the agency guidance requirements of Subtitle B. FTC has begun the rulemaking process to establish safeguarding standards for customer information but had not issued its proposed regulations as of March 1, 2001. 
FTC officials told us that they expect to issue their proposed regulations by July 1, 2001—the date when financial institutions regulated by the federal banking agencies, NCUA, and SEC are required to have their safeguards in place. Subtitle B does not require state insurance regulators to review their regulations and guidance to ensure that financial institutions under their jurisdiction have policies, procedures, and controls in place to prevent the unauthorized disclosure of customer financial information. However, Subtitle A does require the state insurance regulators to establish standards for safeguarding customer financial information. As of March 1, 2001, the National Association of Insurance Commissioners (NAIC) was discussing how to approach these standards, either through issuing regulations, similar to SEC, or through general guidelines, similar to the federal banking regulators. In addition, the states were still in the process of drafting laws and regulations to be in compliance with the disclosure, information-sharing, and opt-out requirements contained in Subtitle A. Officials from the federal and state agencies whom we contacted were not aware of any available data sources that would indicate the prevalence of fraudulent access to financial information. Law enforcement officials told us that they do not collect such information. Justice officials stated that they track the number of offenses filed under the statute, but no matters had been brought forward as of March 1, 2001. Representatives from privacy or consumer groups also told us they were unaware of any statistics or databases that track the prevalence of pretexting. To obtain an indicator of the prevalence of pretext calling, we requested Suspicious Activity Report (SAR) data from the Financial Crimes Enforcement Network (FinCEN). 
Although banks are not obligated to report pretext-calling attempts, banks are generally required to file a SAR when they detect a known or suspected criminal violation of federal law or a suspicious transaction related to a money laundering activity or a violation of the Bank Secrecy Act. Banks are not required to file SARs until a certain dollar threshold has been met or exceeded. FinCEN officials told us that “false pretense”—their wording for pretext—is not part of the SAR data because it is not considered a criterion for filing a SAR, but it may be kept as secondary information contained in the narrative field as reported by the banks. At our request, in September 2000, FinCEN officials searched the narrative field of their database and found that only 3 of the 400,000 SARs in their database contained narrative regarding the use of false pretenses to obtain customer financial information. FinCEN subsequently advised us that recently completed research on SAR data for the calendar year 2000 indicated an increase in bank reporting on identity theft during the year. FinCEN noted that it is possible there may be an attendant increase in narrative reporting on attempted fraudulent access to financial information. Representatives of the Interagency Bank Fraud Working Group whom we contacted also discussed potentially expanding the narrative section of the SARs to capture information on pretext calling and identity theft. In our effort to identify indicators of the impact of Subtitle B, we reviewed information from FTC’s Identity Theft Clearinghouse Database and the federal financial regulators’ consumer complaint databases. According to FTC staff, victims of identity theft typically did not know how their personal financial information was obtained unless they had lost their wallets or unless family members or friends were involved. Therefore, it is unlikely these victims would be aware of whether someone had used pretexting to obtain their information. 
FTC reported that it had processed over 40,000 entries from consumers and victims of identity theft as of December 31, 2000. Of those entries, about 88 percent of the victims had no relationship with the identity theft suspect (about 12 percent had a personal relationship with the suspect). According to officials from the federal banking agencies, NCUA, and SEC, they received few consumer complaints related to financial privacy. They explained that they believed that consumers may be more likely to report potential cases of fraud to their banks or to law enforcement agencies first, rather than contacting the financial regulators. Thus, consumer complaints submitted to the federal regulators may not accurately reflect the prevalence of financial privacy violations. In addition, consumer complaint databases maintained by the regulators typically did not have a specific category to capture pretext-calling allegations, which are distinct from related incidents of fraud, such as credit card fraud. In October 2000, FDIC expanded its coding system to capture additional information related to financial privacy complaints. Pretexting is difficult to detect and is likely to be underreported. Many officials told us that pretexting was a common practice, especially among private investigators. According to many law enforcement officials we spoke with, crimes involving pretexting are particularly difficult to prove, and it is unlikely that pretexting would be reported or prosecuted as a single crime. If a pretexter is clever in his or her fraud scheme and successful in obtaining financial information, the financial institution is unaware that it was fooled into providing information. Often there is a time lag before victims of pretext calling suffer financial loss, and they may not be aware of how their financial information was obtained. 
According to law enforcement officials we spoke with, offenders using fraud to access financial information are generally detected as part of a larger crime, such as credit card fraud, identity theft, or other bank fraud. An increase in related crimes, although not directly correlated with pretext calling, may be a possible indication of the prevalence of fraudulent access to financial information. For example, the number of SAR filings by the banks related to check fraud, debit and credit card fraud, false statement, and wire transfer fraud continued to increase from 1998 to 1999, according to the October 2000 report by the Bank Secrecy Act Advisory Group. As stated previously, more time and experience are needed to assess the efficacy and adequacy of the remedies contained in Subtitle B regarding fraudulent access to financial information. Therefore, we are not making any recommendations for additional legislation or regulatory actions. During our consultations with representatives from FTC, the federal banking agencies, NCUA, SEC, and federal and state enforcement agencies and insurance regulators, we obtained their views about the efficacy and adequacy of the subtitle’s other provisions. Some federal and state officials and representatives from consumer and privacy groups we contacted had some suggestions regarding possible changes to Subtitle B provisions, which are presented below. As discussed earlier, we did not evaluate how practical these suggestions were since we found no consensus on these issues. These suggestions reflect the continued concerns and issues raised by FTC staff and the privacy and consumer groups with whom we spoke. FTC staff and some state officials suggested that states be allowed to take enforcement actions for violations of Subtitle B provisions. According to these FTC staff and state officials, this would allow the states to augment the federal resources used to enforce compliance with the Subtitle B prohibition against pretext calling. 
Earlier versions of the House and Senate bills that were the basis for Subtitle B contained provisions that provided for state actions for injunctive relief or for recovering damages of not more than $1,000 per violation. These provisions were subsequently eliminated in the House and Conference versions of the legislation. FTC staff stated that the additional resources of the state attorneys general would be particularly helpful in enforcing compliance by some of the smaller information brokers that may otherwise escape detection or monitoring. According to some of the state officials we contacted, allowing state actions under the federal statute would increase the deterrent effects of the legislation. However, other state officials stated that they did not expect that providing states with enforcement authority under this statute would result in significantly greater enforcement activity due to resource limitations at the state enforcement level. Some of the consumer and privacy groups suggested that a private right of action provision be added to allow the consumers who were the victims of pretext calling to obtain financial compensation from the perpetrators of the violations. Like the state enforcement action provision, earlier House and Senate versions of Subtitle B contained provisions, which were subsequently eliminated, that would have allowed for civil lawsuits by individuals and financial institutions. These provisions recognized that pretext-calling victims will, in some instances, have a stronger incentive to proceed against an information broker or the broker’s client than a law enforcement agency or prosecutor operating with limited resources and forced to juggle competing priorities, particularly in those cases in which the amount of monetary damages is minimal. 
According to some of the state officials we contacted, the possibility of civil lawsuits would potentially increase the penalties for violating the statute’s provisions and, thus, help to deter such criminal activities. However, some officials did not agree with this suggestion and stated that a private right of action could also result in unintended consequences, such as frivolous lawsuits and overcrowded court dockets. Differing suggestions were made regarding the provision in the statute that allows private investigators to use pretext calling under certain conditions. The statute allows state-licensed private investigators to use pretext calling to collect child support from persons adjudged to have been delinquent by a federal or state court, if authorized by an order or judgment of a court of competent jurisdiction. The exception for state-licensed private investigators is nullified if prohibited by another federal or state law or regulation. Some consumer and privacy representatives stated that the exception was too broad and could result in potential abuse. On the other hand, one of the trade groups for private investigators wanted Congress to amend Subtitle B to allow the use of pretexting as an investigative tool to locate hidden assets when investigators contact judgment debtors or persons who have committed fraud. According to this trade group, one of the unintended consequences of Subtitle B is that it makes it easier for criminals and judgment debtors to hide their assets from lawful collection. 
We provided a draft of this report to the Chairman of the Federal Trade Commission, the Attorney General, the Secretary of the Treasury, the Chairman of the Federal Deposit Insurance Corporation, the Chairman of the Federal Reserve Board, the Comptroller of the Currency, the Director of the Office of Thrift Supervision, the Acting Chairman of the National Credit Union Administration, the Chair of the National Association of Insurance Commissioners, and the Acting Chairman of the Securities and Exchange Commission for their review and consultation. The Federal Trade Commission, Treasury, Federal Deposit Insurance Corporation, Federal Reserve Board, Office of the Comptroller of the Currency, NCUA, and SEC agreed with our report’s overall message and provided technical comments, which we incorporated into the appropriate sections of this report. The Office of Thrift Supervision, Justice, and NAIC agreed with our overall message and did not provide any comments on our report. In commenting on our draft report, the Financial Crimes Division of the U.S. Secret Service expressed concern over an increase in attacks directed at on-line service databases that contain personal financial information, such as credit card numbers and Social Security numbers. The Secret Service also emphasized that it supports any steps taken to deter individuals from attacking any institution’s infrastructure for the purpose of obtaining financial information. Although we acknowledge these concerns and the Secret Service’s support for securing the privacy of financial information on-line, our study did not focus on on-line information security. We are sending copies of this report to the requesting congressional committees. We are also sending copies to the Honorable Robert Pitofsky, Chairman, Federal Trade Commission; the Honorable John Ashcroft, the Attorney General; the Honorable Paul H. 
O’Neill, Secretary of the Treasury; the Honorable Donna Tanoue, Chairman, the Federal Deposit Insurance Corporation; the Honorable Alan Greenspan, Chairman, the Federal Reserve Board of Governors; the Honorable John D. Hawke, Jr., Comptroller of the Currency; the Honorable Ellen Seidman, Director, the Office of Thrift Supervision; the Honorable Dennis Dollar, Acting Chairman, the National Credit Union Administration; the Honorable Kathleen Sebelius, Chair, the National Association of Insurance Commissioners; and the Honorable Laura S. Unger, Acting Chairman, the Securities and Exchange Commission. If you or your staff have any questions on this report, please contact me at (202) 512-8678 or Harry Medina at (415) 904-2000. Key contributors to this report were Debra R. Johnson, Nancy Eibeck, Shirley A. Jones, and Charles M. Johnson, Jr. To determine the efficacy and adequacy of the remedies provided by the Gramm-Leach-Bliley Act of 1999 (GLBA) in addressing attempts to obtain financial information by false pretenses, we interviewed officials from the Department of Justice, the Department of the Treasury, the Federal Deposit Insurance Corporation, the Federal Reserve Board, the Federal Trade Commission (FTC), the National Credit Union Administration, the Office of the Comptroller of the Currency, the Office of Thrift Supervision, and the Securities and Exchange Commission. Within Justice, we interviewed officials representing its Criminal and Civil Divisions, the Federal Bureau of Investigation, and the Executive Office of the United States Attorneys. In addition, we talked with officials at seven U.S. attorney offices: (1) Eastern District of New York, (2) Southern District of New York, (3) Central District of California, (4) Northern District of California, (5) District of Massachusetts, (6) District of Minnesota, and (7) District of Colorado. The officials at the U.S. 
attorney offices we spoke with are primarily responsible for overseeing any federal prosecution of financial crimes that occur in their respective districts. We selected these offices because they were located in states that had been identified as particularly active regarding consumer financial privacy. We also consulted with a number of state officials located in those same five states. Specifically, we interviewed staff from the state insurance regulatory agency and the attorney general’s office located in California, Colorado, Massachusetts, Minnesota, and New York. In addition, we interviewed representatives of the National Association of Insurance Commissioners. Within Treasury, we talked with officials from its Office of Financial Institutions, Office of Enforcement, Financial Crimes Enforcement Network, Internal Revenue Service, and U.S. Secret Service. We interviewed FTC staff from the Bureau of Consumer Protection who monitor compliance of financial institutions under FTC’s jurisdiction and FTC officials responsible for designing and implementing “Operation Pretext,” and we reviewed relevant documents on FTC’s enforcement activities related to information brokers. We also examined the regulations and guidelines developed by the Federal Deposit Insurance Corporation, the Federal Reserve Board, FTC, the National Credit Union Administration, the Office of the Comptroller of the Currency, the Office of Thrift Supervision, and the Securities and Exchange Commission related to their implementation of the privacy provisions of GLBA. In addition, we requested and reviewed data from the various agencies regarding enforcement activity and consumer complaints related to fraudulent access to financial information. 
To identify suggestions for additional legislation or regulatory actions with respect to fraudulent access to financial information, we obtained the viewpoints of the federal and state agencies’ officials we met with and interviewed a number of consumer and privacy groups that have been active in the area of financial privacy. Specifically, we interviewed representatives of the Center for Democracy and Technology, the Consumer Federation of America, Consumers Union, Eagle Forum, the Electronic Privacy Information Center, the Privacy Rights Clearinghouse, Privacy Times, the U.S. Public Interest Research Group, and the California Public Interest Research Group. In addition, we talked with the American Bankers Association; the Association of Credit Bureaus; the North American Securities Administrators Association, Inc.; and the National Council of Investigation and Security Services, which represents the investigation and guard industry. We conducted our work in Washington, D.C.; San Francisco, CA; and New York City, NY, between August 2000 and April 2001, in accordance with generally accepted government auditing standards.
This report provides information on (1) the efficacy and adequacy of remedies provided by the Gramm-Leach-Bliley Act of 1999 in addressing attempts to obtain financial information by false pretenses and (2) suggestions for additional legislation or regulatory action to address threats to the privacy of financial information held by financial institutions. As of March 2001, federal regulatory and enforcement agencies had not taken any enforcement actions or prosecuted any cases under Subtitle B. The Federal Trade Commission (FTC) and the Department of Justice are taking steps to ensure that the financial institutions that they regulate have reasonable controls to protect against fraudulent access to financial information. Although all of the federal regulators and privacy experts whom GAO contacted agreed that more time and experience are needed to determine if Subtitle B remedies adequately address fraudulent access to financial information, FTC staff and privacy experts suggested legislative changes to Subtitle B. GAO did not evaluate the potential impact or practicality of these suggestions because it found no consensus on these ideas.
Under the Housing Act of 1937, as amended, Congress created the federal public housing program to help communities provide housing for low-income families. Congress annually appropriates funds for the program, and HUD allocates these funds to the approximately 3,400 public housing authorities nationwide. Housing authorities are typically created under state law, and a locally appointed board of commissioners approves their decisions. HUD and the housing authorities have an annual contributions contract—a written contract under which HUD agrees to make payments to the housing authority and the housing authority agrees to administer the housing program in accordance with HUD regulations and requirements. In addition to competitively awarded HOPE VI grants, HUD provides housing authorities with several types of assistance, including operating subsidies to cover the difference between rent payments and operating expenses and capital funds to improve the physical condition of properties and upgrade the management and operation of existing public housing sites. HOPE VI is one of the few active federal housing production programs. By providing funds for a combination of capital improvements and community and supportive services, HOPE VI seeks to (1) improve the living environment for public housing residents of severely distressed public housing through the demolition, rehabilitation, reconfiguration, or replacement of obsolete public housing; (2) revitalize sites on which such public housing is located and contribute to the improvement of the surrounding neighborhood; (3) provide housing that will avoid or decrease the concentration of very low-income families; and (4) build sustainable communities. With the 165 grants awarded through fiscal year 2001, grantees planned, as of December 31, 2002, to demolish 78,265 public housing units and construct or rehabilitate 85,327 units, including 44,757 public housing units. 
HUD’s requirements for HOPE VI revitalization grants are laid out in each fiscal year’s NOFA and grant agreement. NOFAs announce the availability of funds and contain application requirements, threshold requirements, rating factors, and the application selection process. Grant agreements, which change each fiscal year, are executed between each grantee and HUD and specify the activities, key deadlines, and documentation that grantees must meet or complete. For example, the fiscal year 2001 grant agreement specified that the grantee must complete construction within 54 months of the date on which the grant agreement was executed. From fiscal years 1993 to 2001, HUD received 609 revitalization grant applications. HUD uses the same basic procedures each year to screen, review, and rank grant applications. When grant applications are received, they are screened to determine whether they meet the eligibility and threshold requirements in the NOFA. Next, reviewers rate the grant applications on the basis of the rating factors described in the NOFA and rank them in score order. Generally, a group of applications representing twice the amount of funds available is sent to a final review panel, which may include the Deputy Assistant Secretary for Public Housing Investments, the Assistant Secretary for Public and Indian Housing, and other senior HUD staff. The final review panel assigns a final score and recommends for selection the most highly rated competitive applications, subject to the amount of available funding. For a list of the 165 grants awarded through fiscal year 2001, see appendix II. Public housing authorities with revitalization grants can use a variety of other public and private funds to develop their HOPE VI sites. Public funding can come from federal, state, and local sources. For example, housing authorities can use funds raised through federal low-income housing tax credits. 
Under this program, states are authorized to allocate federal tax credits as an incentive to the private sector to develop rental housing for low-income households. Private sources can include mortgage financing and financial or in-kind contributions from nonprofit organizations. Developing public housing with a combination of public and private financing sources is known as mixed-finance development. HUD’s Office of Public Housing Investments, housed within the Office of Public and Indian Housing, manages the HOPE VI program. Grant managers within the Office of Public Housing Investments are primarily responsible for overseeing HOPE VI grants. They approve changes to the revitalization plan and coordinate the review of the community and supportive services plan that each grantee submits. In addition, grant managers track the status of grants by analyzing data on the following key activities: relocation of original residents, demolition of distressed units, new construction or rehabilitation, reoccupancy by some original residents, and occupancy of completed units. Public and Indian Housing staff located in HUD field offices also play a role in overseeing HOPE VI grants, including coordinating and reviewing construction inspections. According to our analysis, HUD has generally used a core of four rating factors as the basis for assessing HOPE VI revitalization grant applications. Although HUD’s fundamental factors have remained the same, the requirements that housing authorities must fulfill under each factor have become more stringent from year to year. Additionally, until the most recent NOFA, HUD had not eliminated applicants on the basis of poor performance on previously awarded grants. HUD’s Inspector General has also reported that HUD has not consistently followed the selection procedures established for each annual competition. HUD has generally evaluated applications for HOPE VI revitalization grants on the basis of four core rating factors. 
Although other factors have been added and removed over time and the names of the factors have varied somewhat throughout the years, four key concepts—need, capacity, quality, and leveraging—have been used consistently to assess applications. As defined in the most recent NOFA, need should indicate the severity of distress at the targeted public housing site. Information provided under capacity is used to assess the experience of the applicant’s team in planning, implementing, and managing comparable physical development, financing, leveraging, and partnership activities. HUD determines quality by evaluating the overall quality of the plan, the likelihood of success, project readiness, and design. Finally, information provided under leveraging is used to assess the extent to which funds will be leveraged for physical development and community and supportive services, what other revitalization activities have been carried out in the targeted area in anticipation of the HOPE VI grant, and if there are physical development activities under way that will enhance the new HOPE VI site. For more information on the most recent NOFA, see appendix III. Although the core factors have remained the same, the information that housing authorities must submit and the requirements that they must fulfill under each factor have generally increased over time (see fig. 1). For example, although housing authorities have been required to provide basic statistics, such as crime and vacancy rates, to document severe distress or need since fiscal year 1993, housing authorities also were required, beginning in fiscal year 1999, to submit a certification from an independent engineer that the public housing targeted for revitalization met HUD criteria for severe distress. Since fiscal year 1993, applicants also were required to provide information on their own capacity to implement their plans. 
Beginning in fiscal year 1997, housing authorities also were required to document the ability of their proposed partners to develop, construct, and manage the proposed activities. To receive the maximum number of points for the quality rating factor in fiscal year 1996, applicants were required to submit several pieces of information, including budgets, a certification that the proposed activities could not be completed without HOPE VI funding, and a description of how the housing authority planned to maintain the proposed programs and policies over the long term. By fiscal year 2002, housing authorities additionally had to submit documentation that the revitalization plan would result in outside investment in the surrounding community and evidence that, if funded, work could commence immediately. To indicate that they could leverage funds, housing authorities were encouraged to submit evidence of outreach and support for the project in fiscal year 1995. However, by fiscal year 2000, applicants had to show that they would obtain at least $4 in leveraged funds for every HOPE VI dollar requested for development in order to receive the maximum number of points under leveraging. According to HOPE VI officials, HUD has increased the types and quantity of information required each year in an effort to obtain information that makes it easier to rate and rank applications and allows the agency to make improved selection decisions. In addition, the agency has made some changes in an effort to make the application process easier for housing authorities. Finally, HOPE VI officials noted that the program’s annual appropriation legislation can change the requirements each year and that the NOFAs must be revised to reflect these changes. 
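The fiscal year 2000 leveraging threshold described above, at least $4 in leveraged funds for every HOPE VI dollar requested for development, amounts to a simple ratio test. The sketch below is illustrative only; the function name and inputs are our own assumptions and do not come from HUD's actual scoring process.

```python
# Illustrative ratio test for the fiscal year 2000 leveraging threshold:
# applicants had to show at least $4 in leveraged funds for every HOPE VI
# dollar requested for development to earn the maximum leveraging points.
# Function and variable names are hypothetical, not HUD's.

REQUIRED_LEVERAGE_RATIO = 4.0  # $4 leveraged per $1 of HOPE VI funds requested


def meets_leverage_threshold(hope_vi_request: float, leveraged_funds: float) -> bool:
    """Return True if leveraged funds are at least 4 times the HOPE VI request."""
    if hope_vi_request <= 0:
        raise ValueError("HOPE VI request must be a positive dollar amount")
    return leveraged_funds / hope_vi_request >= REQUIRED_LEVERAGE_RATIO
```

Under this test, for example, a $20 million HOPE VI request would need at least $80 million in leveraged funds to earn the maximum number of points under leveraging.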
Although the changes have given HUD better information upon which to base selection decisions, some of the housing authority and public housing industry group officials that we interviewed expressed concerns about the changes in the application requirements that housing authorities must meet. According to these officials, such changes make it difficult for housing authorities to anticipate what HUD intends to emphasize and to make detailed revitalization plans until each NOFA is published. The officials also noted that it is challenging for previously denied applicants to determine how to revise their applications. Housing authorities and interest groups report that it generally costs $75,000 to $250,000 to prepare a HOPE VI grant application. The fiscal year 2002 NOFA was of particular concern to some of the housing authority officials and industry group representatives that we interviewed. According to these officials, the NOFA required housing authorities to conduct impractical up-front planning and to obtain commitments at an unrealistically early date. For example, an applicant had to certify that it had procured a developer for the first phase of construction by the application due date. Officials we interviewed stated that this requirement would be costly to the applicant, who at that point would have no guarantee of funding. Although HUD’s annual selection process had considered the performance of applicants who had received HOPE VI grants in prior years, it was not until the fiscal year 2002 NOFA that past program performance became a mandatory threshold requirement for an applicant to be eligible for a HOPE VI revitalization grant. Incorporating past performance—specifically, the demonstrated ability to efficiently manage projects—can help direct HOPE VI funds to where they can most effectively produce results. 
Starting in fiscal year 1995, an applicant’s score for capacity was partially based on the extent to which any previously awarded HOPE VI grants had progressed. In fiscal years 1993, 1996, and 1997, applicants were also required, under the capacity factor, to submit Public Housing Management Assessment Program scores, which were a measure of a housing authority’s performance in all major areas of management operations. HUD stopped requiring this information in fiscal year 1998, after the Public Housing Management Assessment Program was discontinued. The fiscal year 2002 NOFA was the first that stated that an applicant with one or more existing HOPE VI revitalization grants would be disqualified if one or more of those grants failed to meet certain performance requirements as required in the applicable HOPE VI revitalization grant agreement. During the years that past performance was a rating factor—rather than a threshold eligibility requirement—multiple HOPE VI revitalization grants were awarded to housing authorities that had made little progress in constructing new units under previous grants. For example, the Chicago Housing Authority was awarded grants in fiscal years 1998, 2000, and 2001, although construction, as of December 31, 2002, was 21 percent complete at the Cabrini-Green site (fiscal year 1994 grant); 26 percent complete at the Robert Taylor B site (fiscal year 1996 grant); 27 percent complete at the ABLA Brooks Extension site (fiscal year 1996 grant); and 0 percent complete at the Henry Horner site (fiscal year 1996 grant). Similarly, the Detroit Housing Commission has received three grants and constructed 25 percent of the units planned. In a June 2002 report to Congress, HUD acknowledged that it has done little to rectify the problems among low performers and has often awarded poorly performing housing authorities multiple grants despite low or no unit production, inadequate oversight, and capacity issues. 
HUD also acknowledged that awarding multiple grants to poor performers further strains the institutional and staff capacity of these public housing authorities, intensifying existing problems. Finally, HUD noted that it had initially awarded grants to large housing authorities for large-scale developments, without fully recognizing that most of the grantees included at-risk and troubled public housing authorities. Some of these large housing authorities were awarded multiple revitalization grants, and the burden of managing the grants resulted in slow planning, redevelopment, and construction. According to HUD, it elevated the importance of past performance in the fiscal year 2002 NOFA because it wanted to emphasize accountability and readiness. It determined that applicants that already had one or more HOPE VI revitalization grants should demonstrate the capability to manage them before HUD awarded them more funds. It also concluded that poor performers should not be rewarded with additional funding when other housing authorities possibly could implement the grants better. In annual reviews of the HOPE VI grant selection process, HUD’s Inspector General has found that the agency has not consistently followed its grant selection procedures for each year. For example, in an audit of the fiscal year 1996 grant award process, the Inspector General found that HUD revised its screening procedures to allow applicants to comply with only one of the two eligibility criteria in the NOFA. Under the revised screening procedures, HUD awarded $269 million to applicants that should have been ineligible for funding because they did not demonstrate compliance with the two criteria as specified in the NOFA. Similarly, when HUD encountered a defect in a fiscal year 1996 application, the reviewers often resolved the defect in a manner that improved the application but did not always comply with the NOFA’s procedures for resolving application defects. 
The Inspector General concluded that, as a result, some applications that should have been ineligible for funding were inappropriately funded. The Inspector General also found that in both fiscal years 1998 and 1999 HUD did not fully or consistently implement key application review procedures. Specifically, the final review panel, and to a lesser degree the initial reviewers, did not always document their justifications for scoring and rating individual applications. For example, in its fiscal year 1998 audit, the Inspector General reviewed 24 applications and identified 6 on which the final review panel changed preliminary scores without providing adequate documentation or justification to support all the changes. The scoring changes resulted in 5 of the applicants obtaining funding and 1 losing funding. In its fiscal year 1999 audit, the Inspector General reviewed 25 applications and found that HUD’s final review panel had changed scores for 6 applications without providing adequate documentation or justification. The scoring changes resulted in 5 of the applicants obtaining funding. In response to these and other Inspector General criticisms of the HOPE VI grant selection process, HOPE VI officials told us that they follow their review procedures to the best of their ability, given the time constraints of the annual competition. Although the Inspector General generally has about 4 months to review the previous year’s applications, HOPE VI officials noted that they have shorter time frames—generally, 6 weeks. HUD officials also stated that they have made efforts to address the Inspector General’s concerns, including efforts to better screen applications. 
In its report on the fiscal year 1999 HOPE VI competition, the Inspector General determined that HUD had addressed issues in its fiscal year 1998 review, relating to the need to ensure that (1) each rejected applicant would be provided specific written notification as to why the application was not successful and (2) all evaluations were based on the facts presented in the applications. The status of work at HOPE VI sites varies, with construction completed at 15 of the 165 sites that received revitalization grants through fiscal year 2001. Overall, at least some units have been constructed at 99 of the 165 sites, and 47 percent of all HOPE VI funds have been expended. In general, more recently awarded grants are progressing more quickly than earlier grants. Nevertheless, the majority of grantees missed at least one of the deadlines in their grant agreements. For example, grantees did not submit the revitalization plan to HUD on time for 75 percent of the grants awarded through fiscal year 1999. Many factors affect the status of work at HOPE VI sites, including the development approach, housing authority management, and relationships with residents and the surrounding community. Our analysis of data from HUD’s HOPE VI reporting system shows that work status varies at HOPE VI sites. As of December 31, 2002, relocation was complete at 101 of the 165 sites, demolition at 87 sites, and construction at 15 sites. Reoccupancy—the return of some original residents to revitalized units—was complete at 37 sites, while occupancy was complete at 14 of the 165 sites. Grantees had demolished 57,772 units of severely distressed public housing and constructed or rehabilitated 23,109 units. Figure 2 shows the percentage of planned revitalization activities completed by each fiscal year’s grantees. Although construction was complete at only 15 sites as of December 31, 2002, construction was nearing completion at additional sites. 
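As a quick arithmetic check, the cumulative unit counts reported above can be set against the program-wide plans cited earlier (78,265 planned demolitions and 85,327 planned units). The helper function below is our own illustration, not part of HUD's HOPE VI reporting system.

```python
# Arithmetic check of the cumulative HOPE VI figures cited in the text,
# as of December 31, 2002. The unit totals come from the report; the
# helper function is our own illustration, not HUD's reporting system.

def percent_complete(done: int, planned: int) -> int:
    """Express completed units as a whole-number percentage of planned units."""
    return round(100 * done / planned)


# 23,109 of 85,327 planned units constructed or rehabilitated.
print(percent_complete(23_109, 85_327))  # prints 27
# 57,772 of 78,265 planned demolitions complete.
print(percent_complete(57_772, 78_265))  # prints 74
```

These figures are consistent with the overall completion rate of about 27 percent of planned units reported for the program.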
As shown in figure 3, at least some units had been constructed at 99 of the 165 sites. Where construction was still ongoing, it was 50 percent or more complete at 40 sites and 75 percent or more complete at 25 sites. No units had been completed at 66 sites. Overall, 27 percent of the total planned units were complete as of December 31, 2002. In general, grantees with more recently awarded grants are completing activities more quickly than those with the earlier grants. The fiscal year 1993 grantees took an average of 31 months after execution of grant agreements to start construction. The fiscal year 1994 grantees took an average of 41 months. However, the 14 grantees awarded grants in fiscal year 1999 that have started construction did so an average of 16 months after grant agreement execution. Furthermore, the 9 fiscal year 2000 grantees that have started construction did so, on average, 10 months after grant agreement execution. According to HUD, there are several possible reasons for this improvement, including that later grantees may have more capacity than earlier grantees, that applications submitted in later years were more fully developed to satisfy NOFA criteria, and that HUD has placed greater emphasis on reporting and accountability. Overall, grantees have expended about $2.1 billion of the $4.5 billion (47 percent) in HOPE VI revitalization funds awarded. As expected, a greater percentage of the funds budgeted for planning and demolition has been expended than of the funds budgeted for construction and community and supportive services (see fig. 4). For example, 67 percent of all HOPE VI funds budgeted for demolition have been expended, while 42 percent of all HOPE VI funds budgeted for construction have been expended. The majority of grantees missed at least one of the deadlines established in their grant agreements. 
Grantees must meet three major deadlines according to their grant agreements: the submission of a revitalization plan to HUD, the submission of a community and supportive services plan to HUD, and completion of construction. Overall, for 75 percent of the grants awarded through fiscal year 1999, the grantees did not submit the revitalization plan to HUD on time. For 70 percent of the grants subject to a standard deadline for the submission of a community and supportive services plan, the grantees did not meet the deadline. Additionally, grantees completed construction within the deadline on only 3 of the 42 grants for which the time allowed for construction—54 months from grant execution for grants awarded since fiscal year 1996—had expired. For 9 of the 39 grants that missed their construction deadline, the grantees had not constructed any units as of December 31, 2002. HUD data show that the time it has taken grantees to submit key documents has shortened over the life of the program. For example, as shown in table 1, grantees have been taking less time to submit revitalization plans to HUD. On average, the fiscal year 1994 grantees took about 790 days after the execution of their grant agreements to submit a revitalization plan. By fiscal year 2000, the grantees took an average of 185 days after the execution of their grant agreements to submit a revitalization plan. Similarly, although there is no specific grant agreement deadline related to submitting mixed-finance proposals—documents that HUD must approve before mixed-finance construction can begin—the recent grantees have done so in less time than did earlier grantees. The average number of days between grant execution and submission of a mixed-finance proposal fell from 2,255 days for the fiscal year 1994 grantees to 508 days for the fiscal year 2000 grantees. HUD has taken steps to encourage adherence to deadlines. 
For instance, the agency notified grantees in March 2002 that, as part of HUD’s increased focus on readiness, 10 dates could no longer be revised in the HOPE VI reporting system as of June 30, 2002. The dates included planned completion of the revitalization plan, planned completion of a mixed-finance proposal, planned start of construction, and planned completion of construction. Prior to this decision, grantees had been allowed to adjust their planned dates when delays occurred, making it difficult for HUD to determine the extent of delays. In its fiscal year 2002 NOFA, HUD also stressed project readiness. For example, the NOFA required applicants to provide a certification stating either that they had procured a developer for the first phase of development by the application due date or that they would act as their own developer. Similarly, applicants that proposed off-site replacement housing were required to submit evidence of control of the proposed off-site locations. Our visits to the sites that were awarded revitalization grants in 1996 show that many factors—including the development approach, housing authority management, and relationships with residents and the community—can affect the status of work at a site. In its June 2002 report to Congress, HUD stated that a mixed-finance development approach might cause delays because housing authorities often lack staff with expertise in development and complex financing approaches. They must hire additional staff or outside consultants proficient in private-sector real estate construction, financing, and lending practices to put together financing and retain developers. For example, the redevelopment of Dalton Village in Charlotte, North Carolina, was delayed about 1 year due to the denial of its initial application for low-income housing tax credits. In addition, the Housing Authority of New Orleans decided to use tax increment financing to raise additional funds for its St. Thomas site. 
It took more than 2 years for the housing authority to get all of the approvals necessary. In contrast, the Chester Housing Authority was able to complete construction at Lamokin Village within 5 years of grant execution because it used only public housing funds, which did not require it to acquire additional expertise. Other aspects of the development approach, such as the type and location of planned revitalization efforts, also can affect status. For example, rehabilitation of existing buildings tends to take less time than construction of new ones. As of December 31, 2002, over half of the HOPE VI units scheduled for rehabilitation had been completed, while less than a quarter of the planned new units had been constructed. The Cuyahoga Metropolitan Housing Authority’s fiscal year 1996 grant involves both rehabilitation of existing units and construction of new units. As of April 2003, rehabilitation of 56 units was under way, whereas the construction of new units was not scheduled to begin until October 2004. Also, on-site construction tends to occur faster than off-site construction. As of December 31, 2002, 29 percent of on-site construction was complete, while 19 percent of off-site construction was complete. Grantees planning for off-site construction sometimes have to purchase the property or properties on which the units will be built. For example, the Housing Authority of the City of Pittsburgh plans to acquire numerous parcels of land in the community surrounding the Bedford Additions site and construct new off-site units prior to beginning construction on-site. Because acquiring the sites is taking longer than anticipated, the housing authority has yet to relocate residents and demolish the original site. For more examples of how development approaches can affect work status, see appendix IV. The extent to which revitalization plans were changed during the course of redevelopment also affects work status. 
The Housing Authority of the City of Atlanta’s original application for a fiscal year 1996 HOPE VI grant outlined a plan for 100 percent public housing at the Perry Homes site. Two years after the grant award, HUD conducted a site visit and determined that the site should include a wide range of units, including market-rate units. Due to these changes, a revitalization plan was not approved until October 2002. The Cuyahoga Metropolitan Housing Authority changed the plans for its Riverview site due to environmental problems. In contrast, the Housing Authority of Louisville, another fiscal year 1996 grantee, has not had to make any significant modifications to its revitalization plan for Cotter and Lang Homes, and over 60 percent of the 1,213 planned units were complete as of December 31, 2002. Several grantees we visited stated that the performance of housing authority management staff affected the status of their revitalization plans. For example, residents in Jacksonville and housing authority staff in Spartanburg stated that their fiscal year 1996 grants had progressed significantly, in part, because their executive directors communicated well with residents, the housing authority boards, and local community leaders. In contrast, the Cuyahoga Metropolitan Housing Authority was experiencing internal problems at the time its fiscal year 1996 grant was awarded. Its executive director was ultimately convicted for theft of public funds, mail fraud, and lying about a loan. A new executive director was hired in late 1998, and the housing authority was finally able to focus on the fiscal year 1996 HOPE VI grant in 1999, according to housing authority officials. In Detroit, the revitalization plans for Herman Gardens changed multiple times because there were several changes in executive leadership and each executive director had a different plan for the site. 
Because the Detroit Housing Commission had not submitted a formal revitalization plan for Herman Gardens, HUD notified the commission in March 2000 and March 2002 that it was in default of its grant agreement. The extent of support from residents and the local community also can affect the timing of progress at HOPE VI sites. For example, the Tucson Community Services Department, which serves as the city’s public housing authority, worked closely with its residents and the local community during the planning process for its fiscal year 1996 grant. Tucson did not submit its revitalization plan until a majority of the residents had approved it. In contrast, resident or community opposition delayed progress at several of the sites we visited. For instance, the Chicago Housing Authority’s plans for Henry Horner Homes were delayed 4 years by legal actions related to a resident lawsuit. Residents at San Francisco’s North Beach site did not want to relocate from the site during the redevelopment, which caused the redevelopment to take longer than it would have otherwise. Because the Housing Authority of New Orleans’s St. Thomas site is located in a historic district, local preservationists opposed the construction of a retail store at the site. In July 2002, a nonprofit organization filed a lawsuit against the housing authority for failing to comply with environmental and historic preservation laws. The case was dismissed in April 2003. See appendix IV for more information on each of the 20 sites we visited. HUD’s approval process can also affect the status of work at HOPE VI sites. Officials responsible for managing 12 of the 20 grants awarded in fiscal year 1996 told us that HUD’s approval process for key documents, such as the revitalization plan and mixed-finance proposals, was too slow. However, according to a HUD report, the agency’s approval process has been improving. 
For instance, HUD’s data show the average number of days from the submission of a mixed-finance proposal to approval was 185 days for the fiscal year 1996 grantees. For the fiscal year 1999 grantees, the average number of days between submission and approval of a mixed-finance proposal was 126 days. HUD grant managers located at HUD headquarters and in the field are primarily responsible for overseeing HOPE VI grants, but staff in HUD’s field offices also assist grant managers in monitoring grants. In particular, field office staff are to perform annual on-site monitoring reviews. However, by the end of 2002, HUD had not conducted any annual reviews for 8 out of the 20 grants awarded in fiscal year 1996. According to HUD, staffing limitations have constrained its ability to oversee grants. Additionally, despite grantees’ inability to meet key deadlines, HUD has not developed a formal enforcement policy, which is an important part of oversight. Both HUD headquarters and field office staff are responsible for overseeing HOPE VI revitalization grants. HUD has 30 grant managers that report directly to the Office of Public Housing Investments—17 located at HUD headquarters and 13 located in field offices. Grant managers are primarily responsible for overseeing HOPE VI grants and perform a number of duties, including tracking the overall status of the grant, reviewing and approving mixed-finance proposals, reviewing and approving all proposed changes to program schedules, and reviewing and approving procurement documents. According to HOPE VI officials, the main tool that grant managers use to oversee grants is the HOPE VI reporting system, which since 1998 has provided information on the status of each grant. (Grantee reporting existed before 1998, but not in the form of the quarterly reporting system currently used.) Grantees enter data into the Web-based system at the end of each quarter. 
According to the grant managers, the reports from the system enable them to track grant activity and deadline compliance. Office of Public and Indian Housing staff in HUD’s field offices also play a role in overseeing HOPE VI grants, but their responsibilities vary. Three field offices that contain grant managers—located in New York, New York; Miami, Florida; and Cleveland, Ohio—have signature authority, meaning that the office’s local Director of Public Housing can approve documents without approval from headquarters. Other field offices contain grant managers but do not have signature authority. However, most field offices do not have a grant manager, but rather have a HOPE VI coordinator, whose responsibilities include assisting grantees with preparing demolition applications, reviewing environmental assessments, and coordinating and reviewing inspections of HOPE VI construction sites performed by the U.S. Army Corps of Engineers. The field offices also are responsible for performing an annual on-site monitoring visit to each HOPE VI grant. Following this visit, the field office is to prepare a report for both the housing authority staff and the grant manager detailing grantee systems and controls in place and compliance with HOPE VI program requirements. The site visit reports also provide an assessment of the overall status of grant activities. According to various reports and HUD field staff, the limited number of grant managers, a shortage of field office staff, and confusion about the role of field offices have diminished the agency’s ability to oversee HOPE VI grants. As shown in figure 5, grant manager workload has been increasing since HUD last hired a large group of grant managers in 1998, but the workload remains below the previous level. As of fiscal year 2001, each grant manager was responsible for an average of about 6 grants totaling about $157 million in HOPE VI funding. 
In its June 2002 report to Congress, HUD stated that one factor contributing to delays at HOPE VI sites was the limited number of HUD grant managers. Similarly, some of the grantees we visited stated that they believe grant manager workload contributed to the slow approval process previously discussed in this report. HUD reports that HOPE VI oversight also has been affected by a shortage of field office staff and confusion about the role of field offices. Our site visits showed that HUD field staff are not systematically performing the required annual reviews. Of the 20 revitalization grants awarded in fiscal year 1996, 8 had never had an annual review performed as of the end of 2002, and no grant had had an annual review performed each year since the grant award. Overall, only one in five of the required annual reviews was performed. However, the annual reviews that were performed did contain important findings. For example, several of the annual reviews performed for the fiscal year 1996 grantees noted that housing authorities were not following procurement policies and lacked proper documentation of resident relocations. From our interviews with field office managers, we determined that there are two reasons why annual reviews were not performed. First, many of the field office managers we interviewed stated that they simply did not have enough staff to get more involved in overseeing HOPE VI grants. For example, one field office manager told us that, because of staffing constraints, his office did not perform any HOPE VI oversight. Second, some field offices did not seem to understand their role in HOPE VI oversight. For instance, one office thought that the annual reviews were primarily the responsibility of the grant managers. Others stated that they had not performed the reviews because construction had not yet started at the sites in their jurisdiction or because they did not think they had the authority to monitor grants. 
The HUD Inspector General and the agency itself have reported that staffing shortages, particularly in the field, have resulted in a lack of program oversight. In a 1998 review of the HOPE VI program, the Inspector General stated that HUD had not been performing even the minimal monitoring requirements for the HOPE VI program in part due to understaffing in both headquarters and the field offices. As noted in that report, lack of monitoring led to grant implementation problems remaining unresolved. In addition, HUD’s most significant workforce planning activity to date—its Resource Estimation and Allocation Process (REAP)—cited staffing shortages related to the HOPE VI program. Under REAP, HUD systematically estimated the number of employees needed to do its work, on the basis of current workload and operations. The final resource estimation report, which was issued in April 2001, noted that the Office of Public and Indian Housing needed to add approximately 38 full-time employees in the field to conduct tasks such as monitoring and providing assistance to HOPE VI grantees. The report also concluded that the Office of Public Housing Investments should more clearly articulate its own role and the role of field offices in the oversight of HOPE VI grants. Although the majority of grantees have missed key deadlines, HUD has not developed and provided to grantees an official HOPE VI enforcement policy, according to program officials. Instead, the agency determines if action should be taken against a grantee on a case-by-case basis. A clear enforcement policy could provide grantees with more certainty regarding the consequences of not meeting grant agreement deadlines. In a December 1999 memorandum, HUD’s Office of General Counsel noted that no statutory or program provisions required grantees to expend HOPE VI funds within a set period of time. 
Therefore, it concluded that HUD may grant extensions to time frames established in the grant agreements, thus avoiding the need to declare grantees that have missed deadlines to be in default of their grant agreements. In the absence of a formal enforcement policy, HUD has outlined in general terms its default policy in grant agreements. In each grant agreement, HUD describes several occurrences that might constitute a default by the grantee under the grant agreement, including a grantee’s failure to comply with the conditions and terms of its grant agreement. HUD provides written notice of all defaults and gives the grantee 30 days to remedy the default or to submit evidence to HUD that it is not in default. If the default cannot be remedied within 30 days, grantees have an additional 60 days to rectify the default situation. At that time, if the condition(s) noted in HUD’s initial letter to the grantee has not been resolved, HUD may require the grantee to revise its program schedule, management plan, or program budget. HUD also may restrict the grantee’s authority to draw down grant funds or require reimbursement by the grantee. HUD also reserves the right to appoint a receiver to carry out HOPE VI activities, reduce the amount of the grant award, or terminate the grant. According to HOPE VI officials, all grantees would have been considered in default of their grant agreements at some point in their grant process if HUD had not been flexible regarding time frames. For example, virtually all of the fiscal year 1996 grantees were allowed an extension to the date construction was to be completed, and some were allowed multiple extensions. The Chicago Housing Authority’s Henry Horner grant and the Housing Authority of the City of Atlanta’s Perry Homes grant received extensions for the execution of a general contractor’s agreement and for the date construction was to be completed. 
In 2000, the Housing Authority of the City of Pittsburgh’s grant for Bedford Additions received an extension until early 2003 to complete construction; in 2002, the authority received an additional extension to complete construction by July 2007. Although HUD has not developed a formal enforcement policy, it has issued default notices to grantees. It has generally issued these notices when there is no evidence of a formal and comprehensive approach to the grantee’s revitalization effort. As of March 2003, HUD had declared nine grants to be in default and issued warning notices regarding three other grants. According to program officials, HUD expects to increase the use of the default tool because a default letter tends to garner enough attention from the local media and political leaders to prompt action. However, HUD has never rescinded any HOPE VI funds, even when it has issued default letters. Because HUD does not have a formal enforcement policy, its issuance of default notices can be viewed as arbitrary. For example, in July 2000, HUD declared the Housing Authority of Baltimore City’s fiscal year 1996 grant for Hollander Ridge to be in default of its grant agreement on the basis of “failure to comply with the HOPE VI requirements or any other Federal, State or local laws, regulations or requirements applicable in implementing the Revitalization Plan.” The default letter also noted that, because the housing authority’s revitalization plan was no longer consistent with the requirements of a consent decree, the grant was deemed to be in default. In March 2000 and March 2002, HUD declared the Detroit Housing Commission’s fiscal year 1996 grant for Herman Gardens to be in default because the housing authority had not submitted a revitalization plan as required in its grant agreement. However, HUD has not issued default letters to other grantees who have not met grant agreement deadlines for completing construction. 
For example, even though no units have been completed at St. Thomas in New Orleans or Bedford Additions in Pittsburgh and, according to grant agreement deadlines, construction was to be completed by early 2002, neither fiscal year 1996 grant has been declared in default. HUD estimates that it has obligated about $51 million of the $63 million in HOPE VI funds that have been set aside for technical assistance, with the majority of this obligation funding services provided directly to grantees and program reporting. As shown in figure 6, the funding budgeted for technical assistance has fluctuated. Over the first 4 years of the program, funding ranged between $2.5 million and $3.2 million annually. In fiscal year 1998, funding increased to $10 million and consistently remained at or above that level until fiscal year 2002, when it decreased to $6.2 million. As shown in figure 7, HUD has obligated the majority of its technical assistance funding for services provided directly to grantees and program reporting. Of the $51 million that HUD estimates it has obligated to date, 55 percent has been obligated for technical assistance provided to grantees. For example, HUD assigns each grant an outside technical assistance provider to help the grantee develop its community and supportive services plan. In fiscal years 1996 to 2000, HUD assigned each new grant an expediter to assist the grantee with its HOPE VI plans. These expediters were private-sector experts in finance, real estate development, and community revitalization. Another major category of technical assistance has been program reporting. According to HOPE VI officials, HUD spends about $2.5 million annually on the HOPE VI reporting system. A contractor maintains the reporting system and staffs a help desk to respond to questions from grantees. The remaining technical assistance funding has been obligated for headquarters management assistance, such as consultants; site inspections performed by the U.S. 
Army Corps of Engineers; and staff training and travel. In recent years, HUD has eliminated some services previously provided to grantees. For example, in fiscal year 2001, HUD stopped providing expediters because, according to program officials, the practice had become too expensive. Currently, only at-risk grantees—grantees that are experiencing problems with their grants or do not have adequate capacity to manage their grants—are considered for technical assistance. According to HUD officials, HUD has decreased the amount of technical assistance it provides because the agency believes that grantees should be responsible for retaining and funding their own technical assistance. Figure 8 shows the percentage of technical assistance funds provided directly to grantees over the life of the program. HOPE VI is one of the few active federal housing production programs and is supposed to deliver almost 45,000 units of rehabilitated or new public housing. During these tight budgetary times, when Congress faces difficult choices in deciding how to provide affordable housing, it is increasingly important that federal housing programs produce results. After 10 years of the HOPE VI program, construction has been completed at 15 of 165 sites. However, work is proceeding more quickly at sites financed by more recently awarded grants. The HOPE VI program has incorporated measures to increase efficiency—in part attributable to HUD’s requesting more information from grant applicants and a renewed emphasis on meeting deadlines. In addition, the emphasis on performance measures, such as HUD’s incorporation of past performance as an eligibility requirement in the fiscal year 2002 NOFA, should help direct HOPE VI funds to where they can most effectively produce results. However, the HOPE VI program could be improved further. 
By emphasizing the need for regular grant oversight and review and improving and clarifying the lines of communication between headquarters and the field offices, HUD can eliminate existing confusion about staff roles, build a consistent record of site reviews and oversight, and improve communications with grantees to facilitate progress on grants. Since the HOPE VI grant process involves both HUD and public housing authorities, HUD can further improve the efficiency of the grant program and help achieve its goal of revitalizing public housing by holding grantees accountable for performance, particularly in the areas of meeting deadlines and producing deliverables. The HOPE VI program, as it is currently set up, does not have a clear and consistent system for determining if grantees are not in compliance with grant requirements, nor does it offer clear incentives for grantees to change behavior or correct undesirable conditions. To improve its selection and oversight of HOPE VI grants, we recommend that the Secretary of Housing and Urban Development continue to include past performance as an eligibility requirement in each year’s notice of funding availability; clarify the role of HUD field offices in HOPE VI oversight and ensure that the offices conduct required annual reviews of HOPE VI grants; and develop a formal, written enforcement policy to hold public housing authorities accountable for the status of their grants. We provided a draft of this report to HUD for its review and comment. In a letter from the Assistant Secretary for Public and Indian Housing (see app. V), HUD stated that it found the report to be fair and accurate in its assessment of the management of the program. HUD also agreed with our three recommendations. Specifically, it stated that it would take action to incorporate past performance as an eligibility criterion in the fiscal year 2003 HOPE VI Revitalization NOFA. 
Regarding the recommendation to develop a formal enforcement policy, HUD stated that it regards the development of management tools such as the locked checkpoint system described in this report to be a key step in the establishment of a formalized enforcement policy and will endeavor to institute other responsive measures. Additionally, HUD provided clarifications on several technical points, which have been included in this report as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after the date of this report. At that time, we will send copies of this report to the Chairman, Subcommittee on Housing and Transportation, Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Minority Member, Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Minority Member, Subcommittee on Housing and Community Opportunity, House Committee on Financial Services; and the Chairman and Ranking Minority Member, House Committee on Financial Services. We will also send copies to the Secretary of Housing and Urban Development and the Director of the Office of Management and Budget. We will make copies available to others upon request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. Please call me at (202) 512-8678 if you or your staff have any questions about this report. Key contributors to this report are listed in appendix VI. 
Our objectives were to examine (1) the Department of Housing and Urban Development’s (HUD) process for assessing HOPE VI revitalization grant applications and for selecting grantees, (2) the status of work at sites for which grants have been awarded and compliance with grant agreement deadlines, (3) HUD’s oversight of HOPE VI grants, and (4) the amount of program funds that HUD has budgeted for technical assistance and the types of technical assistance it has provided. To accomplish these objectives, we analyzed the data contained in HUD’s HOPE VI reporting system on the 165 sites that received revitalization grants in fiscal years 1993 through 2001 and visited 20 sites in 18 cities. We selected these 20 sites because they received HOPE VI revitalization grants in fiscal year 1996, which was the first year that grants were subject to a standard construction deadline. Using the 1996 grants also allowed us to assess whether grantees had met their deadlines, which had passed for the majority of the grantees by the time we began our site visits. In addition, we interviewed the HUD headquarters officials responsible for administering the HOPE VI program. To determine the criteria that HUD uses to assess HOPE VI revitalization grant applications, we analyzed each year’s notice of funding availability (NOFA). Specifically, we examined the rating factors used each year to determine if there were any similarities between the different NOFAs. We also analyzed the information that housing authorities were required to submit for selected rating factors and identified changes in these requirements over time. To determine how HUD has followed its grant selection procedures, we obtained and reviewed HUD Office of Inspector General reports on the HOPE VI grant selection process for fiscal years 1996 and 1998 to 2001. 
Finally, we interviewed public housing industry groups—the Council of Large Public Housing Authorities, the Public Housing Authorities Directors Association, and the National Association of Housing and Redevelopment Officials—regarding the grant selection process. To determine the status of work at sites for which grants have been awarded, we obtained and analyzed information from HUD’s HOPE VI reporting system. Specifically, we obtained data as of December 31, 2002, for the 165 revitalization grants awarded through fiscal year 2001. We used these data to determine the status of relocation, demolition, construction, reoccupancy, and occupancy and the amount of expended HOPE VI funds. For each of the 1996 grants, we interviewed housing authority and HUD officials to determine the status of each grant and the factors affecting that status. To determine the extent to which grantees have met grant agreement deadlines, we obtained and analyzed each year’s grant agreement. We then used milestone data from HUD’s HOPE VI reporting system to determine the extent to which grantees had met the deadlines in their grant agreements. To assess the reliability of the data in HUD’s HOPE VI reporting system, we interviewed the officials that manage the system; reviewed information about the system, including the user guide, data dictionary, and steps taken to ensure the quality of these data; and performed electronic testing to detect obvious errors in completeness and reasonableness. We determined that these data were sufficiently reliable for the purposes of this report. To identify how HUD oversees HOPE VI grants, we obtained and analyzed HUD’s HOPE VI monitoring guidance and interviewed program officials. We obtained and analyzed information on the number of grants and grant managers at the end of each fiscal year to determine grant manager workload. During each of our site visits, we interviewed housing authority staff regarding HUD’s oversight of their grants. 
We also obtained and analyzed copies of the annual reviews performed for the 1996 grants and interviewed HUD field office staff regarding their role in HOPE VI oversight. Finally, we reviewed HUD Inspector General reports on the HOPE VI program and HUD’s final report on its Resource Estimation and Allocation Process. To determine how much HUD has budgeted for technical assistance, we reviewed information provided by HUD on the total amount budgeted each fiscal year for technical assistance. To determine the types of technical assistance HUD has provided, we obtained and analyzed data on the major types of technical assistance provided with each fiscal year’s budget. The data HUD provided were estimates of the amounts it had obligated for technical assistance over the life of the program. We also interviewed program officials regarding the types of technical assistance provided and 1996 grantees regarding the types of technical assistance they received from HUD. We performed our work from November 2001 through April 2003 in accordance with generally accepted government auditing standards. In fiscal years 1993 through 2001, HUD awarded 165 revitalization grants to 98 public housing authorities (see table 2). Nearly half of all HOPE VI revitalization grant funds have been awarded to 20 housing authorities. Within this group of housing authorities, 8 have received 4 or more revitalization grants: the Housing Authority of the City of Atlanta, the Housing Authority of Baltimore City, the Chicago Housing Authority, the Housing Authority of the City of Oakland, the District of Columbia Housing Authority, the Philadelphia Housing Authority, the Seattle Housing Authority, and the City and County of San Francisco Housing Authority. The Chicago Housing Authority has been awarded 8 HOPE VI revitalization grants, more than any other housing authority. The Housing Authority of Baltimore City follows with 6 revitalization grants.
The fiscal year 2002 NOFA for the HOPE VI program explained the process that HUD would use to screen and score applications. It stated that HUD would first screen applications to determine whether they met threshold requirements—requirements that must be met in order for a HOPE VI revitalization grant application to be considered for funding. The NOFA also stated that if the application failed to meet any one of these thresholds, HUD would not rate or rank the application. The NOFA contained 28 threshold requirements with which applicants had to attest or document compliance, including a certification, signed by an engineer or architect, that the targeted public housing project met the definition of severe physical distress and a certification either that the applicant had procured a developer for the first phase by the application deadline or that it would act as its own developer. Additionally, an applicant with one or more existing HOPE VI revitalization grants would be disqualified if any of those grants failed to meet the performance requirements described in the NOFA, and applications that included a proposal to develop market-rate housing had to include a preliminary market assessment letter. If an application met all of the threshold requirements, HUD would rate it using the rating factors outlined in the NOFA. As shown in table 3, the 2002 NOFA listed nine rating factors, some of which comprised various subfactors. An application could receive a maximum of 114 points. Between January and October 2002, we visited the 18 housing authorities that were awarded HOPE VI revitalization grants in fiscal year 1996.
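The two-stage review the NOFA describes can be sketched as follows: screen every threshold requirement first, then rate only the applications that pass all of them. The threshold names and factor point values below are illustrative; only the 114-point maximum comes from the 2002 NOFA.

```python
# Illustrative sketch of the two-stage NOFA review: applications failing
# any threshold are not rated or ranked; the rest are scored against the
# rating factors, capped at the 114-point maximum in the 2002 NOFA.
# Threshold names and factor scores passed in are hypothetical examples.

MAX_POINTS = 114

def review_application(thresholds_met, factor_scores):
    """thresholds_met maps threshold name -> bool;
    factor_scores maps rating factor -> points awarded."""
    if not all(thresholds_met.values()):
        return ("not rated", None)  # a failed threshold ends the review
    total = min(sum(factor_scores.values()), MAX_POINTS)
    return ("rated", total)
```

Under this logic, for example, an application lacking the severe-physical-distress certification would be screened out before any rating factor is scored.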
For each of the 20 sites that were awarded grants that year, we describe below background information on the conditions at the original site for which the grant was awarded, the housing authority’s revitalization and community and supportive services (CSS) plans for the site, the status of those plans as of March 2003, and the factors that affected the status. We also include a time line and photographs for each site. Because the site summaries incorporate a number of program-specific and technical terms, we have included a glossary at the end of this report. As figure 9 shows, the Chicago Housing Authority was awarded a $24.5 million HOPE VI revitalization grant for the Brooks Extension portion of ABLA Homes in October 1996. Relocation and demolition have been completed at the ABLA Brooks Extension site, but the new construction has not yet begun. The Chicago Housing Authority’s scattered site program, which includes the development of any nonelderly public housing, has been under judicial receivership since 1987. The housing authority is in the midst of implementing a 10-year transformation plan, which is a $1.5 billion blueprint for rebuilding or rehabilitating 25,000 units of public housing— enough for every leaseholder as of October 1999—and transforming isolated public housing sites into mixed-income communities. The housing authority was awarded another HOPE VI revitalization grant for ABLA in fiscal year 1998 and also has received revitalization grants for the following sites: Cabrini-Green (fiscal year 1994), Henry Horner (fiscal year 1996), Robert Taylor (fiscal years 1996 and 2001), Madden/Wells/Darrow (fiscal year 2000), and Rockwell Gardens (fiscal year 2001). The five sites that comprise ABLA Homes had more than 3,500 original units. Three of the five ABLA sites were included in the authority’s fiscal year 1996 revitalization plans. 
Brooks Extension, the focus of the fiscal year 1996 revitalization grant, was completed in 1961 and consisted of three 16-story buildings containing 453 units. Robert Brooks Homes was completed in 1943 and contained 834 units. Loomis Courts—a project-based Section 8 development—was completed in 1953 and contained 126 units. The density at ABLA was 37.33 units per acre, as compared with Chicago’s average density of 28 units per acre. The buildings at ABLA suffered from significant structural deficiencies as a result of age, weathering, and the lack of proper maintenance. A central heating plant, located at the Jane Addams site, provides the heat for the complex. This system is inadequate, and regulating the amount of heat for each unit has been a problem. The Chicago Housing Authority was awarded a fiscal year 1995 HOPE VI planning grant totaling $400,000 for ABLA and two other sites. In addition to the $24.5 million HOPE VI revitalization grant, the Chicago Housing Authority was awarded four HOPE VI demolition grants totaling $2.5 million for Brooks Extension and Robert Brooks Homes. The total budget for the renovation of Brooks Extension, Robert Brooks Homes, and Loomis Courts is $186 million and includes other public housing funds, equity from low-income housing tax credits, and tax increment financing. The revitalization plans call for the rehabilitation of 330 public housing units at Robert Brooks Homes; the construction of 777 new units at Brooks Extension (336 public housing units, 90 tax credit units, and 351 homeownership units); and the rehabilitation of 126 subsidized units at Loomis Courts. A 57,000-square-foot community center to be funded by the city is also part of the plans. Of the $24.5 million revitalization grant, the housing authority plans to set aside $3.6 million for community and supportive services.
The community and supportive services plan for ABLA, which was approved in January 2002, focuses on employment, education, health, community building, and pilot programs. In addition to special programs funded by the HOPE VI grant, the housing authority plans to implement its service connector system at ABLA. The service connector system will help residents access services through a system of outreach, assessment, referral, and follow-up. The rehabilitation at Robert Brooks Homes has been completed. The reconstruction of 132 units was completed in 1998, and the reconstruction of the remaining 198 units was completed in 2000. Brooks Extension has been demolished (see fig. 9). The housing authority selected a developer for the entire ABLA development area in December 2002. Construction on the new units at Brooks Extension is expected to start in March 2004. The housing authority has hired a nonprofit organization to serve as ABLA’s service connector, and the program has been in operation since August 2001. A consultant has also been hired to implement the community and supportive services plan, including facilitating task forces on employment, education, and health. The ABLA revitalization has been affected by the need for the revitalization plans to comply with the Gautreaux consent decree. In 1966, African American residents of Chicago public housing filed suit against the Chicago Housing Authority for creating a segregated public housing system. In response, the court issued a judgment that prohibits the housing authority from constructing any new public housing in a neighborhood in which more than 30 percent of the occupants are minorities (limited areas) unless it develops an equal number of units in neighborhoods where less than 30 percent are minorities (general areas). In 1987, the court appointed a receiver for the housing authority’s scattered site program, including the development of nonelderly public housing. 
In the case of ABLA, the receiver and the housing authority had to show the court that, while ABLA was currently in a limited area, the area was going to be revitalized by HOPE VI. In June 1998, the court approved the housing authority’s request to designate ABLA a revitalizing area, thus allowing the development of new nonelderly public housing at the site without requiring an equal number of units to be built in a general area. According to a housing authority official, site planning was progressing at the Brooks Extension site until the housing authority applied, in 1997, for a HOPE VI revitalization grant for the Grace Abbott Homes portion of ABLA. HUD rejected the application, stating that the housing authority needed to develop plans for the entire ABLA site and establish better relationships with the city and the receiver. In 1998, the housing authority submitted a new application that covered all of ABLA and showed that it had worked closely with the city and receiver. While the housing authority was preparing this application, work at Brooks Extension stopped. HUD ultimately awarded the housing authority a fiscal year 1998 grant for the portions of ABLA not covered by the fiscal year 1996 grant. Management changes at the housing authority have also affected implementation of the grant, according to a housing authority official. After placing the housing authority under administrative receivership for approximately 4 years, HUD returned control of the housing authority to Chicago in May 1999. During the reorganization that occurred after the city resumed control, decisions were delayed. For example, the housing authority’s negotiations with the program manager selected for ABLA were delayed, in part, because the agency had just regained control of its operations and was developing an overall plan for transformation. According to a housing authority official, the receiver raised some legal issues that slowed progress at the ABLA site. 
HOPE VI revitalization grants are typically awarded to housing authorities. However, under the Gautreaux case, the receiver believed that the two ABLA grants should be split so that the funds for “hard” construction costs were awarded to the receiver, while the funds for social services were awarded to the housing authority. It took almost 2 years to settle this issue. In October 2000, the grants were split between the receiver and the housing authority. The only funds that the housing authority controls are funds for demolition, relocation, and community and supportive services. The housing authority had to issue two requests for proposals before selecting a developer. The first request for proposals to develop Brooks Extension was issued in November 2001, and the authority received three responses. The housing authority did not think that the respondents had sufficient capacity; therefore, it decided to issue another request for proposals to develop the entire ABLA site. The second request for proposals was issued in June 2002, and a developer was selected in December 2002. The New York City Housing Authority is using $67.7 million in HOPE VI revitalization grant funds to renovate Arverne and Edgemere Houses. Some of these revitalization funds were originally awarded to another site, Beach 41st Street Houses, and transferred to Edgemere in December 1996 (see fig. 10). All three sites are in Far Rockaway, a peninsula on the southern edge of Queens, south of Jamaica Bay and Kennedy Airport. The housing authority expects to complete the rehabilitation of Arverne and Edgemere by the end of 2004. In addition to the Arverne/Edgemere grant, the authority is overseeing another HOPE VI revitalization grant awarded in fiscal year 1998 for Prospect Plaza. The New York City Housing Authority received a $400,000 planning grant for the Arverne and Edgemere sites in fiscal year 1995. 
In 1996, the authority was awarded a revitalization grant for Arverne, and HUD transferred the revitalization grant originally awarded to Beach 41st Street Houses to Edgemere. The funding was transferred from Beach 41st Street after an impasse over the residents’ role in the planning process could not be overcome. The Arverne site, with 418 units, was completed in 1951; the Edgemere site, with 1,395 units, was completed in 1961 (see fig. 10). Although soundly constructed, they were in need of significant modernization and improvement. The area surrounding Arverne/Edgemere lacks essential retail services and adequate recreation and community space. In addition, the high density and current configuration of the buildings have contributed to vandalism and other criminal activity. Joblessness and low educational achievement among residents further weaken the community. Though situated in an attractive locale, between Jamaica Bay and the Atlantic Ocean, the community is extremely isolated with limited transportation links to other parts of New York City. The total projected budget for the renovation of Arverne and Edgemere is $233 million, which includes other public housing funds, city funds, and private funds. The revitalization plans for Arverne/Edgemere, renamed Ocean Bay Apartments, call for the modernization of 1,803 apartments, including lobby and facade improvements and site improvements such as upgrading infrastructure and landscaping. The plans also include the construction of a recreational facility, the expansion of the existing community center and day-care center, and the off-site construction of a health and education center and two retail centers. Of the $67.7 million in revitalization grant funds, the housing authority has budgeted $6.8 million for community and supportive services. The community and supportive services plan, which was approved in May 1999, focuses on case management, training, and self-sufficiency programs. 
Because the majority of residents chose to remain on-site during the renovation, only 211 residents were temporarily relocated, with most households moving to vacant units within the development. The renovation is being done in phases. For example, all of the asbestos was removed and electrical work completed before the kitchens and bathrooms were renovated. As of March 2003, 79 percent of the interior modernization work at Arverne and 85 percent of the interior modernization work at Edgemere were complete. The housing authority estimates that all of the apartment modernization work will be completed by June 2003. Under the revised revitalization plan, the community center will now be combined with the new recreational facility to reduce the overall costs of the plan. This work is under design and is expected to be bid in fall 2003. Also, the day-care center will be upgraded and expanded to create a state-of-the-art facility with greater capacity. Design documents for the day-care center expansion have been completed. Community and supportive services are being offered to residents of the development and of the surrounding community. In November 1999, the housing authority opened a Family Resource Center where it administers various training and self-sufficiency programs for the residents. Already operating are the computer lab (see fig. 10), after-school program, and job training classes. A popular project has been the computer incentive program, which provides a personal computer system to residents who either work 96 hours volunteering on HOPE VI recruiting and other HOPE VI activities or participate in a HOPE VI training program. The authority also has contracted with Goodwill Industries to provide case management, counseling, and job preparation, placement, and retention services. To sustain community and supportive services after the expiration of the HOPE VI grant, the authority has created the Ocean Bay Community Development Corporation.
Resident opposition to demolition was one of the issues that led to the impasse at Beach 41st Street Houses. After HUD transferred the HOPE VI funds from Beach 41st Street to Edgemere in December 1996, the housing authority again included demolition in the plans for Edgemere’s redevelopment. The housing authority determined that the best way to meet the demolition requirement would be to remove some top floors from each of three nine-story buildings, thereby eliminating about 100 units. Subsequently, the housing authority withdrew this plan and proposed to convert dwelling units on the first floor to space for commercial and community services. This approach would also have removed about 100 units. The issue became moot when Congress, in the fiscal year 1998 appropriations act for the departments of Veterans Affairs and Housing and Urban Development and independent agencies, gave the New York City Housing Authority the option of not following any HOPE VI demolition requirements, and the housing authority abandoned the plans for demolishing the 100 units. It took almost 18 months to get the revitalization plan for Arverne/Edgemere Houses approved. The housing authority first submitted a revitalization plan to HUD in June 1997. After HUD returned the plan with comments for the housing authority to address, the housing authority submitted a revised plan in February 1998. The housing authority then went back and forth with HUD on changes to the plan. According to housing authority officials, the primary point of contention was the types of economic development activities upon which HOPE VI funds could be spent. HUD finally approved the housing authority’s revised plan in November 1999. The effects of September 11, 2001, have also posed challenges for the redevelopment of Arverne and Edgemere. Some of the housing authority’s HOPE VI records were destroyed and had to be recreated.
Additionally, housing authority officials estimated that costs for one portion of the project had escalated from $22 million to $30 million over the life of the project—due, in part, to the labor force and materials moving downtown after September 11. Overall, the housing authority estimated that the Arverne/Edgemere project was delayed 6 months because of the September 11 attack. The Housing Authority of the City of Pittsburgh was awarded a $26.6 million HOPE VI revitalization grant for Bedford Additions in October 1996, as shown in figure 11. Off-site construction began in September 2002, and relocation and demolition have not yet occurred. The authority was previously awarded HOPE VI revitalization grants for Allequippa Terrace (fiscal year 1993) and Manchester (fiscal year 1995). Bedford Additions, part of the larger Bedford Dwellings, was constructed in 1954 and contains 460 units, the majority of which are in three-story, walk-up buildings (see fig. 11). It is located in the Hill District, a neighborhood offering access to many job centers. Many of the buildings at Bedford Additions had leaky roofs, cracks in the walls, and outdated mechanical systems that had not been well-maintained. Also, 72 percent of the families in its census tract were earning incomes below the poverty level. The housing authority was awarded a $395,700 HOPE VI planning grant for Bedford Dwellings and three other sites in fiscal year 1995. The total estimated budget for the revitalization is about $102 million and includes other public housing funds and equity from low-income housing tax credits. The revitalization plans call for construction of a two-story, 12,000-square-foot community center; construction of 75 off-site homeownership units and 365 off-site rental units (phases one and two); and construction of 45 on-site homeownership units and 175 on-site rental units (phase three). Of the 660 total units planned, 220 will be replacement public housing units. 
In addition, up to 40 of the homeownership units will be made affordable for public housing residents. The off-site units will be constructed first, and then the existing on-site units will be demolished and new units will replace them. Of the HOPE VI funds, the housing authority has budgeted about $5.1 million for community and supportive services. A new community center will house the supportive services program, including the case management function, computer learning lab, day care, a family support program, after-school teen program, resident council offices, and housing authority management offices. The community center has been completed, and many of the planned services are operational, including the computer lab. As of March 2003, the housing authority had acquired 235 of the approximately 650 separate parcels of land required for the off-site component of the project. Construction on the first 147 off-site rental units started in September 2002 (see fig. 11), and construction on the first 35 off-site homeownership units is scheduled to begin in June 2003. The decision to construct the off-site units first and on many different parcels of land has been the major impediment to progress. According to housing authority officials, the residents were fearful of being displaced; therefore, they wanted the housing authority to build the new off-site structures first so that they could be relocated to the new off-site units. The housing authority has been going through the lengthy process of acquiring parcels in the surrounding community either by negotiating the purchase of properties or through eminent domain. It also had to relocate 111 private households after acquiring their properties. Financing the redevelopment also has been a challenge. For example, it was difficult to obtain low-income housing tax credits because the state housing finance agency has established strict guidelines. 
The agency wants any units developed as part of a mixed-income project to be contiguous. Because the housing authority could not acquire certain properties, there is a break between two sections of off-site parcels. After convincing the state housing finance agency that it would need two tax credit allocations, one for each section of the off-site parcels, and that it should not finance one without the other, the housing authority was awarded tax credits for the first phase of off-site development. Although this process did not delay the revitalization plans, it did make financing the first phase of development more complicated, according to a housing authority official. The City of Tucson Community Services Department, which serves as Tucson’s public housing authority, was awarded a $14.6 million HOPE VI revitalization grant for Connie Chambers in late 1996, as shown in figure 12. The grant was closed out in January 2003. The department was also awarded a fiscal year 2000 revitalization grant for Robert F. Kennedy Homes. Connie Chambers, built in 1967, consisted of 200 units (see fig. 12). The surrounding Santa Rosa neighborhood is historic and home to a lower-income population. According to housing authority officials, the primary problem with Connie Chambers was that it was isolated from other communities after construction of a new convention center and police and fire department headquarters. Two out of three households on the public housing waiting list turned down units at Connie Chambers because of its history of high crime and poor physical conditions. The housing authority was awarded a $370,000 planning grant for Connie Chambers in fiscal year 1995. It used the planning grant to conduct maintenance studies and physical needs assessments and to hold meetings with residents. The total projected budget for the project is $72 million and includes other public housing funds, equity from low-income housing tax credits, city funds, and bond funds.
The revitalization plan for Connie Chambers, renamed Posadas Sentinel, calls for rehabilitation of 10 units at another site; construction of 120 on-site units (60 public housing units and 60 tax credit units); acquisition of 130 scattered public housing units; construction of 60 homeownership units; construction of a child development center, learning center, and health center and expansion of the existing recreation center; construction of a grocery store; and an elderly building to be built by a nonprofit organization. Of the $14.6 million revitalization grant, the housing authority has budgeted $1.2 million for community and supportive services. The community and supportive services plan, approved in May 1998, calls for a neighborhood services center to serve as a resource center for residents of the neighborhood and the provision of services such as language classes, an expanded child-care program, and job training. The 10 units at the other site have been renovated, all 120 of the on-site units have been completed, and all 130 scattered sites have been acquired (see fig. 12). As of March 2003, 54 of the homeownership units had been completed. The child development center and learning center, located in the Santa Rosa Neighborhood Center, were completed in April 2002. Construction on the recreation and health centers is under way. The housing authority was able to close out the grant in January 2003 because the remaining homeownership units and the recreation and health centers were not financed with HOPE VI funds. A Head Start program has been operating in the child development center since January 2002. Another day-care service, operated by a local nonprofit organization, opened in the center in November 2001. It primarily serves working families. The learning center has been operational since April 2002 and contains a computer library; it offers basic computer classes in either Spanish or English.
Because the City of Tucson Community Services Department acts as both the city’s public housing authority and community development agency, it was able to draw on other resources for the Connie Chambers revitalization. Funding for the project includes city funds for infrastructure, general city funds, and bonds. In addition, the state housing finance agency agreed to set aside 10 percent of its annual tax credit allotment for HOPE VI sites. The housing authority has involved the residents and the neighborhoods surrounding the Connie Chambers site in the revitalization process. Both residents and the surrounding neighborhoods were involved in developing the revitalization plan. After the revitalization plan was developed, residents were asked to vote on the plan. Of the 181 Connie Chambers households, 107 participated in the vote. Of the 107 that voted, 84 voted in favor of the plan. Only after the residents expressed their support for the plan did the mayor and city council vote to submit the plan to HUD. When the housing authority determined that some residents did not want to relocate outside the neighborhood, even temporarily, it decided to demolish Connie Chambers in phases, starting at each end of the site. While the first phases were under construction, those who did not want to leave the neighborhood were allowed to live in the remaining units. Once construction was complete, they were moved into the new units, and the rest of the original units were demolished. The Housing Authority of Louisville was awarded a $20 million HOPE VI revitalization grant for Cotter and Lang Homes in late 1996 (see fig. 13), and about 65 percent of the planned units were complete as of March 31, 2003. Cotter Homes, completed in 1953, consisted of 620 units. Lang Homes, built in 1959, contained 496 units (see fig. 13). These two contiguous public housing sites, located in Louisville’s Park DuValle neighborhood, were the largest public housing sites in Louisville. 
Together, they covered almost 80 acres. Almost 80 percent of the residents in the Park DuValle neighborhood lived in poverty. The neighborhood also had the highest violent crime rate per square mile in Louisville. The local newspaper referred to one corner on the Cotter and Lang site as the “meanest” corner in Louisville. Furthermore, the area surrounding the two sites contained vacant or underused industrial buildings, unused school land, vacant failed subsidized housing, and other available housing development sites. The total projected budget for the project is $200 million and includes other public housing funds, other HUD funds, and equity from low-income housing tax credits. The revitalization plans for Cotter and Lang Homes, renamed Park DuValle, call for 1,213 new units to be completed in five phases. Phase one: development of 100 rental units. Phase two: development of 213 rental units and 150 homeownership units. Phase three: development of 108 rental units (including some elderly units) and 300 homeownership units. Phase four: development of 192 rental units. Phase five: acquisition of 150 off-site rental units. Of the 763 total rental units, 500 will be public housing units, 160 will be tax credit units, and 103 will be market-rate units. The 450 homeownership units will be targeted to households with a variety of incomes. A town center will include space for various types of commercial enterprises. The HOPE VI funds will be used to develop the 150 off-site units and to provide homeownership assistance. Of the $20 million in HOPE VI revitalization grant funds, the housing authority has set aside $3 million for community and supportive services. The focus of its initial community and supportive services plan, approved in August 1998, was lifelong learning programs and services, such as child care, youth programs, and computer training. 
The developer would provide services to residents of the Park DuValle revitalization area, and the housing authority would provide case management services to former Cotter and Lang residents who were not residing at the Park DuValle site. Work on the first phase of 100 rental units was begun before the housing authority received its HOPE VI revitalization grant, and construction was completed in 1998. The 321 rental units envisioned for phases two and three also have been completed, and construction on the fourth phase of 192 rental units is under way (see fig. 13). Of the 150 planned off-site units, 112 had been acquired as of March 31, 2003. As part of the phase three rental units, a 59-unit senior building was constructed. As of March 31, 2003, the first 150 homeownership units had been sold, and 147 had been completed. Twenty-eight homeowners received soft second mortgages funded by the HOPE VI program. The remaining phase of 300 homeownership units is under way. Because it estimates that it can sell only 4 units a month in the Louisville housing market, the housing authority does not expect all 300 units to be completed and sold until April 2008. The housing authority hired Jefferson County Human Services to provide intensive case management services to former Cotter and Lang residents. The emphasis was on preparing former residents to return to Park DuValle. The developer focused primarily on community building in the new Park DuValle neighborhood. For instance, it served as liaison to the Park DuValle Neighborhood Advisory Council—an organization composed of former residents of Cotter and Lang, Park DuValle public housing residents, and residents of the surrounding neighborhood. However, the housing authority determined that additional efforts were necessary to ensure that all former Cotter and Lang residents, whether or not they were residents of the new community, had access to services aimed at increasing self-sufficiency.
Therefore, it developed a revised community and supportive services plan, which it submitted to HUD in May 2002. HUD approved the plan in November 2002. According to housing authority officials, support from the city, other local entities, and the local HUD field office has been integral to the success of the Park DuValle project. Both the mayor at the time the grant was awarded and the subsequent mayor were very supportive of the project. The city has provided funds and other resources (e.g., the services of the city’s chief architect). The local school board spent $15 million on a new school in the Park DuValle neighborhood, and the health department spent $5 million on a new health center. Staff from the local HUD field office have also been part of the project team. During planning and much of implementation, a management team composed of representatives from the housing authority, the city, the local HUD field office, and the developer met weekly to discuss the project. Now that much of the construction has been completed, the team meets about once a month. The leadership of the housing authority’s executive director was another factor cited as contributing to the success of Park DuValle. Housing authority officials noted that, because the executive director formerly worked in the mayor’s office, he has been able to strengthen the city’s support for the project. In addition, according to local HUD officials, the executive director’s relationship with residents was very good. During his tenure as executive director, a public housing resident was named the chairman of the housing authority’s Board of Commissioners. Another factor contributing to Louisville’s success is that the housing authority has not had to make any significant modifications to its revitalization plan. The total number of planned units (1,213) has not changed. The few changes that have been made are minor. 
For example, the housing authority originally planned for the homeownership units to be constructed in three phases but later decided to consolidate the last two phases for a total of two phases. Also, instead of the 125 homeownership units originally planned in phase two, the housing authority was able to sell 150 units. The housing authority has been able to obtain multiple sources of funding for the project. In addition to the $20 million in HOPE VI funds, the master budget includes $56.2 million in other public housing funds and $20.5 million in other HUD funds. The other sources of funding include $37.2 million in equity from low-income housing tax credits and $56.3 million in debt financing. The state housing finance agency set aside 6 years of tax credits for the Park DuValle project. The Charlotte Housing Authority was awarded a $24.5 million HOPE VI revitalization grant for Dalton Village in October 1996 (see fig. 14). As of March 2003, 194 of 432 total planned units were complete. In addition to the Dalton Village grant, the authority is overseeing two other revitalization grants awarded in fiscal years 1993 and 1998. Dalton Village was built in 1970 and consisted of 300 units in brick townhouse structures with sloped roofs and clapboard facades, as shown in figure 14. The development was located off Clanton Road, an offshoot of West Boulevard, which was once a major route to Charlotte’s Douglas International Airport. In addition to the presence of lead-based paint and asbestos materials, the structures at Dalton Village suffered from severe deficiencies due to the age of the buildings. Site conditions were very poor, with severe erosion over a large portion of the site, and the lack of adequate drainage compounded the problems. Dalton Village was isolated from the adjoining communities by noncontinuous street access and a steep hill that physically separated it from the neighboring community. 
The total projected budget for the revitalization project is $44 million, which includes equity from low-income housing tax credits. The revitalization plan for Dalton Village, renamed Arbor Glen, calls for
- rehabilitation of 50 existing public housing units and the Family Investment Center;
- on-site construction of 144 family and elderly rental units, including 60 public housing units;
- on-site and off-site construction of 175 rental townhouses, including 70 public housing units;
- construction of 48 on-site homeownership units, including 20 for public housing-eligible families;
- construction of 15 off-site homeownership units designated for public housing-eligible families; and
- construction of an outreach center for recreational and educational programs.
The housing authority has budgeted $4.1 million of the HOPE VI revitalization grant for community and supportive services. The community and supportive services plan, approved in March 2000, calls for services to be provided at the new outreach center, which would house multipurpose classrooms and a full-size gymnasium. The focus would be on services and programs that promote self-sufficiency. The 50 existing units and the Family Investment Center have been renovated, and the 144 family and elderly rental units are complete and fully occupied (see fig. 14). The housing authority estimates that construction of the on-site rental townhouses will begin in June 2003 and be completed by June 2004. The housing authority has submitted two tax credit applications—one for an additional 23 on-site units and one for 74 units at an off-site location. In January 2003, the housing authority completed its acquisition of nearby county land needed for the 48 on-site homeownership units, and groundbreaking is scheduled for summer 2003. The $1.5 million outreach center was completed and opened to the public in March 2002. It is an 11,000-square-foot community and recreational center consisting of a gymnasium, four classrooms, and a computer lab. 
The center is open not only to Arbor Glen public housing residents but also to the rest of the Arbor Glen community and nearby neighborhoods. It houses recreational and educational programs. All of the Arbor Glen public housing residents are required to participate in the family self-sufficiency program. A case manager works with participants to develop an individual service plan and to help the residents meet their self-sufficiency goals, such as those related to education and employment. The redevelopment of Arbor Glen was initially delayed because the Charlotte Housing Authority changed development partners. According to housing authority officials, the first developer, signed on in 1998, did not have much development expertise, kept changing financial projections, and did not listen to the community or the state housing finance agency. As a result, the initial developer’s application for low-income housing tax credits was denied. In December 1999, the housing authority signed on a new development partner for the site. This developer was part of the initial development team; therefore, the housing authority did not have to issue another request for proposals. Since the new developer was retained, the project has moved forward. The housing authority and the new developer worked to develop a new site plan and development scheme that would be more competitive for tax credits. In late 2000, the project was awarded tax credits for the first phase of new construction. The first phase of 144 units was completed and leased 6 months ahead of schedule. The Jacksonville Housing Authority was awarded a $21.5 million HOPE VI revitalization grant for Durkeeville in October 1996 (see fig. 15). Of the 303 planned units, 228 have been completed. The 280 units in the Durkeeville public housing complex were poorly designed, lacked sufficient ventilation, and had extensive plumbing and drainage deficiencies. 
For example, the roofs were constructed without an overhang, which exacerbated the deterioration of the outside walls (see fig. 15). Furthermore, the site consisted of mostly small, one-bedroom units that no longer met the residents’ needs for space. The Durkeeville site, built in 1936, had an overall design that had become outmoded. Parking was nonexistent, the density of the housing units was twice that of the surrounding community, and a porous design with alleyways instead of roadways provided an environment conducive to criminal activity. By 1990, the Durkeeville site and its surrounding neighborhood had become Jacksonville’s most dangerous community—the violent crime rate for Durkeeville was 12 times higher than the rate for Jacksonville as a whole. The neighborhood surrounding Durkeeville was once a desirable middle-class neighborhood. However, low incomes in the neighborhood contributed to low property values, low rents, and little economic activity; over 40 percent of neighborhood households were below the poverty level, according to the 1990 census. The Jacksonville Housing Authority was awarded a fiscal year 1995 HOPE VI planning grant totaling $400,000 for Durkeeville. The total projected budget for the revitalization is about $37 million, which includes other public housing funds left over from the redevelopment of another Jacksonville Housing Authority property. Several key features of the revitalization plan for Durkeeville, renamed The Oaks at Durkeeville, include construction of 200 new rental public housing units (of which 40 will be for seniors and the disabled) and 28 homeownership units on the Durkeeville site; construction of 75 off-site public housing units; renovation and expansion of the community center; renovation of two existing buildings for historic preservation; and retail space containing several businesses and a health clinic. The housing authority plans to set aside $3.1 million of the revitalization grant for community and supportive services. 
The community and supportive services plan, approved in February 1999, calls for the renovated community center to become a focal point for the entire community and to include a computer lab; community meeting rooms; social service agencies; adult education classes; and recreational facilities, among other programs. The Jacksonville Housing Authority has completed the on-site construction, which includes the 200 rental units (see fig. 15), 28 homeownership units, the renovation of the community center, and rehabilitation of two historic buildings that include a day-care center and resident management offices. Several businesses—including a grocery store, pizza restaurant, Chinese restaurant, and health clinic—have moved into the retail strip adjacent to the site. All of the housing units are occupied. The community center houses the family self-sufficiency program and adult literacy classes, sponsors numerous recreational activities for children, and hosts community meetings. The day-care facility and a museum showcasing Durkeeville’s history are operating on-site. The housing authority does not plan to start the development of the 75 off-site rental units until October 2003. Currently, the housing authority is planning to use a portion of its HOPE VI funds to purchase 75 to 100 apartments and convert them to public housing. According to officials at the housing authority, on-site construction at Durkeeville was completed in a timely manner for several reasons. First, the housing authority was able to develop a sound, comprehensive revitalization plan because HUD awarded it a planning grant in fiscal year 1995. The grant provided the authority with the necessary resources to hire several consultants and invest in extensive outreach to public housing and community residents. Second, the on-site public housing units were funded entirely with public housing funds. 
The housing authority used only its HOPE VI grant and surplus public housing funds from another rehabilitation project to fund Durkeeville’s redevelopment. The simpler financial structure of the redevelopment shortened the project’s time frames by over 1 year, according to one housing authority official. According to the executive director, in addition to these unique features of the Durkeeville site, the housing authority enjoys the backing of a committed board of directors, which includes prominent Jacksonville real estate developers, attorneys, and former corporate managers. Also represented on the board are the police department, public housing residents, and local businesses. This broad base of support, in conjunction with the executive director’s extensive networking with various government entities, provided the housing authority with key partnerships that helped expedite work on the site. Finally, according to housing authority officials, the decision to place the HOPE VI-related offices in the community center increased the public housing residents’ sense of belonging to a community. The increased number of interactions between public housing and local residents has improved the overall relations between the two groups. This has had a positive impact on the entire community. Plans for the off-site portion of the revitalization have not proceeded as smoothly. First, the initial site that the housing authority chose could not obtain approval from the Environmental Protection Agency. The site was once used for garbage incineration and contains polluted ash in its soil. 
The housing authority then proposed to purchase a neglected, privately owned apartment complex (HUD was planning to foreclose on the property) and convert all 78 units to public housing. However, a local citizens group opposed the plan and took legal action to enforce a 2000 court decree, which states that only 25 percent of the units in any apartment complex the authority buys in an area with a low percentage of minorities can be used for public housing. Ultimately, HUD did not conduct foreclosure proceedings, and the housing authority is currently researching other sites. The Housing Authority of the City of Atlanta was awarded a $20 million HOPE VI revitalization grant for Heman E. Perry Homes (Perry Homes) in late 1996 (see fig. 16), but the revitalization effort did not move forward for some time, primarily because of changes to the revitalization plans. Construction on the first phase of units began in November 2002. The housing authority also has received revitalization grants for the following sites: Techwood/Clark Howell Homes (fiscal year 1993), Carver Homes (fiscal year 1998), Harris Homes (fiscal year 1999), and Capitol Homes (fiscal year 2001). Centennial Place, the name given to the revitalized Techwood/Clark Howell Homes, was largely completed in 2000 and was the first mixed-use, mixed-income community (with public housing as a component) in the nation. Perry Homes and Perry Homes Annex, constructed in 1955, consisted of 944 and 128 units, respectively, and were located on approximately 153 acres of land (see fig. 16). When the housing authority applied for the revitalization grant, the brick exterior walls had deteriorated, resulting in water damage to walls, floors, and personal belongings. The sanitary sewer system leaked, and the storm drainage system did not function properly. From 1992 through 1995, an average of 254 Perry Homes residents were victims of crime each year. 
In addition, more than 60 percent of the residents of Perry Homes and the surrounding neighborhood were living below the poverty line. The Housing Authority of the City of Atlanta received a $400,000 HOPE VI planning grant for Perry Homes and one other site in fiscal year 1995. In addition to the $20 million revitalization grant, the housing authority also was awarded $5.1 million in fiscal year 1998 HOPE VI demolition funds. The total projected budget for the revitalization of the site is $143 million and includes other public housing funds and equity from low-income housing tax credits. The revitalization plan for Perry Homes, renamed West Highlands at Heman E. Perry Boulevard, calls for 800 new housing units to be constructed in five phases. The construction phases are as follows:
- Phase one: 124 rental units (50 public housing units, 12 tax credit units, and 62 market-rate units).
- Phase two: 152 family rental units (61 public housing units, 19 tax credit units, and 72 market-rate units) and 130 elderly rental units (100 project-based Section 8 units and 30 market-rate units).
- Phase three: 152 rental units (61 public housing units, 14 tax credit units, and 77 market-rate units).
- Phase four: 142 rental units (56 public housing units, 11 tax credit units, and 75 market-rate units).
- Phase five: 100 homeownership units (40 units for public housing-eligible families and 60 market-rate units).
In addition to housing, the plan calls for a town center, an 18-hole public golf course, and over 90 acres of green space in the form of parklands, nature trails, and recreational fields. Of the $20 million revitalization grant, the housing authority has budgeted $2.6 million for community and supportive services. It plans to deliver community and supportive services to Perry Homes residents using two basic approaches. First, it provides authoritywide programs that are available to all public housing residents, including residents of HOPE VI sites. 
These authoritywide programs include the Human Service Management Program—which provides case management services—and the Work Force Enterprise Program—which equips participants with the skills necessary to manage the transition from unemployment to the workforce. Second, the housing authority plans to ensure that Perry Homes residents have access to neighborhood-based programs. Some of these programs will be offered at a new school, public library, and YMCA. All of the Perry Homes residents have been relocated, and demolition has been completed (see fig. 16). Construction on the first phase of 124 rental units began in November 2002. Construction of the rental and homeownership units is scheduled to be completed by December 2006 and December 2008, respectively. HUD approved the community and supportive services plan for Perry Homes in July 2000, and Perry Homes residents have been participating in authoritywide programs. The developer has hired a human services provider to supply case management services specifically for former Perry Homes residents. Services to be provided include case management tracking and referral services. Construction has not yet begun on the town center, which will include the school, public library, and YMCA. The town center also will include a park, retail, and office space. After the Housing Authority of the City of Atlanta submitted its original revitalization plan for Perry Homes to HUD in September 1998, HUD officials visited the site to discuss issues and concerns that they had about the plan. The plan called for the development of 415 new public housing units on the existing site; the housing authority planned to use only HOPE VI funds and other HUD funds. 
In a June 2, 1999, letter to the housing authority summarizing its concerns about the plan, HUD questioned whether rebuilding the site entirely with public housing units, without funding to provide meaningful supportive services and without significant partnerships, could result in a sustainable development and provide the maximum benefits to residents. In response to HUD’s concerns, the housing authority devised a new concept for the Perry Homes site and began preparing a new master plan. In December 1999, the housing authority submitted a revised revitalization plan to HUD, which called for a mixed-use, mixed-income community consisting of 750 residential units (40 percent of which would be public housing units), a recreation center, a public library, and a village center. After a developer was selected, the revitalization plan was further refined, and a supplement to the revised revitalization plan was submitted in February 2002. HUD approved the supplement in October 2002, and construction began shortly thereafter. As figure 17 shows, the Chicago Housing Authority was awarded an $18.4 million HOPE VI revitalization grant for Henry Horner Homes in late 1996. However, the planned revitalization of the site has been delayed by a lawsuit filed by residents and subsequent legal decisions. The Chicago Housing Authority’s scattered site program, which includes the development of any nonelderly public housing, has been under judicial receivership since 1987. The housing authority is in the midst of implementing a 10-year transformation plan, which is a $1.5 billion blueprint for rebuilding or rehabilitating 25,000 units of public housing—enough for every leaseholder as of October 1999—and transforming isolated public housing sites into mixed-income communities. 
The housing authority has also received revitalization grants for the following sites: Cabrini-Green (fiscal year 1994), ABLA (fiscal years 1996 and 1998), Robert Taylor (fiscal years 1996 and 2001), Madden/Wells/Darrow (fiscal year 2000), and Rockwell Gardens (fiscal year 2001). Henry Horner Homes, completed in 1957, and Henry Horner Extension, completed in 1961, consisted of a combination of high-rise and mid-rise buildings containing 1,659 units (see fig. 17). Henry Horner Homes is adjacent to the United Center, the arena where the Chicago Bulls play, and is located about 1.5 miles from Chicago’s central business district. At the time that the housing authority applied for the grant, the units targeted for revitalization had broken windows and doors, sewage backups, insect and rodent infestation, and missing window child guards. The violent crime rates were three to eight times higher than those for Chicago as a whole, and the vacancy rate in the targeted area was about 50 percent. The Chicago Housing Authority was awarded a $400,000 HOPE VI planning grant for Henry Horner and two other sites in fiscal year 1995. In addition to the $18.4 million revitalization grant, the housing authority was awarded a $2.3 million HOPE VI demolition grant for Henry Horner in fiscal year 2000. The total projected budget for the project is $78 million and includes other public housing funds, equity from low-income housing tax credits, and state and city funds. The revitalization plan calls for the construction of 764 new units on-site—271 public housing units, 132 affordable units (80 tax credit rental units and 52 homeownership units), and 361 market-rate units (114 rental units and 247 homeownership units). These units will be constructed in three phases. The housing authority has set aside almost $30,000 of the HOPE VI revitalization grant funds for community and supportive services. 
Although this amount is small, the housing authority plans to submit a community and supportive services plan for Henry Horner. Over 600 of the planned 1,197 units have been demolished. According to the housing authority, the revitalization plans were developed in such a way as to minimize the temporary relocation of current residents. After the first of three phases of construction is completed, most of the remaining 176 households will be relocated to the new units. Construction on the first phase of units began in January 2003. The first units are expected to be ready for occupancy by the end of 2003. The authority and the Horner Resident Committee are currently negotiating the relocation notices that will go out to the residents. The remaining buildings will be demolished on a schedule negotiated with the Horner Resident Committee. The redevelopment of Henry Horner was delayed for 4 years by legal actions. In 1991, the Henry Horner Mothers Guild filed a suit against the Chicago Housing Authority and HUD alleging, among other things, that Henry Horner had been “de facto” demolished without obtaining HUD or local government approval or providing replacement housing. The case was settled in September 1995 when an amended consent decree was signed. After the housing authority was awarded a HOPE VI revitalization grant for Henry Horner in 1996, the Henry Horner plaintiffs raised concerns about the revitalization plans, including the number of replacement public housing units, which delayed the project and ultimately resulted in two subsequent court orders, issued in December 1999 and February 2000. As a result of these legal decisions, the Chicago Housing Authority is required to designate 220 units or 35 percent of the total units, whichever is greater, as very low-income units. Also, any decisions regarding the revitalization of Henry Horner are subject to the approval of the plaintiffs’ counsel and the Horner Resident Committee. 
Because any remaining work at Henry Horner is subject to approval by the Horner plaintiffs’ counsel and the Horner Resident Committee, decision-making has been slow. According to housing authority officials, it took the Henry Horner Working Group—which includes the Horner Resident Committee and the Horner plaintiffs’ counsel—about 2 years to develop the revitalization plan and issue a request for qualifications for a developer. It took another 4 months after the request for qualifications was issued to select a developer. The Detroit Housing Commission was awarded a $24.2 million HOPE VI revitalization grant for Herman Gardens in October 1996 (see fig. 18). Construction has not yet begun, and HUD notified the housing commission, for the second time, in March 2002 that it was in default of its grant agreement. The housing commission previously had been awarded revitalization grants for Jeffries Homes (fiscal year 1994) and Parkside Homes (fiscal year 1995). Herman Gardens, built in 1943, originally consisted of 2,144 units on 160 acres (see fig. 18). Problems at the site included structural decay, deterioration of underground utility systems, rodents, and hazardous materials contamination. The Detroit Housing Commission received a $400,000 HOPE VI planning grant for Herman Gardens and two other sites in fiscal year 1995. In addition to the $24.2 million revitalization grant, the Detroit Housing Commission was awarded, in fiscal years 1998 and 1999, $3.8 million in HOPE VI demolition funds for Herman Gardens. The total projected budget for the revitalization of the site is $232 million and includes other public housing funds, equity from low-income housing tax credits, and city funds. The revitalization plan calls for 804 units—470 rental units (including 258 public housing units) and 334 homeownership units. 
Other elements of the plan include construction of a regional athletic facility on the site and construction of 250,000 square feet of institutional space for a new community college. Of the $24.2 million revitalization grant, the housing commission has budgeted $3.5 million for community and supportive services. The community and supportive services plan, which was approved in August 2001, focuses on case management; employment and training; youth and senior services and activities; and partnerships to address job readiness, placement, and retention. Relocation and demolition have been completed (see fig. 18). As of March 2002, the Detroit Housing Commission had not submitted a revitalization plan for Herman Gardens. Therefore, HUD notified the housing commission on March 15, 2002, that it was in default of its grant agreement and needed to submit a default resolution plan to avoid losing its grant. As part of the default resolution plan, HUD required the commission to meet a number of requirements, including submitting a revitalization plan and obtaining firm financial commitments from the city. The Detroit Housing Commission submitted its revitalization plan for Herman Gardens to HUD in August 2002 and submitted a supplement to the plan in December 2002. In September 2002, the city council passed a resolution committing $22 million to the Herman Gardens project. As of April 2003, HUD had not lifted the default status or approved the revitalization plan. According to a housing commission official, the revitalization plan states that construction is scheduled to begin in January 2004. However, the housing commission has already formed a number of partnerships to provide community and supportive services to Herman Gardens residents. These services include training in retail sales, computers, manufacturing, and child care. Additionally, 18 different unions have formed a partnership that offers a preapprenticeship program. 
Because of management changes, the Detroit Housing Commission developed several different plans for Herman Gardens. The first plan was developed prior to the grant award and called for 672 units of public housing. Before that plan was formally submitted to HUD, the executive director responsible for the plan left the housing commission and was replaced by an interim executive director. By February 1999, the interim executive director had developed a second plan, which proposed a combination of public and market-rate housing as well as a golf course. After a new executive director was hired, the housing commission proposed a third development concept. Although never submitted as a formal revitalization plan, the concept called for a mixed-use, mixed-income development on the site. Problems at one of Detroit’s other HOPE VI projects also contributed to delays at Herman Gardens. According to a housing commission official, HUD visited all three of the commission’s grant sites shortly after the commission developed the second plan for Herman Gardens in February 1999. During the visit, HUD recommended that the commission cease work at Herman Gardens and Jeffries Homes until problems at Parkside Homes were addressed. The Parkside Homes project was over budget and behind schedule. Additionally, once work resumed at Herman Gardens and Jeffries Homes, the Jeffries Homes project seemed to be more of a priority for HUD, according to a commission official. According to commission and local HUD officials, being part of city government has also affected the pace of progress on the project. Until recently, all of the commission’s contracts had to be approved by the city council. Currently, only contracts related to the disposition of land upon which public housing is situated are subject to city council approval. The commission also has to go through the city to hire staff. According to a commission official, the commission is in the process of seeking the authority to hire its own staff. 
Because the Detroit Housing Commission never formally submitted a revitalization plan for Herman Gardens, HUD notified the commission in March 2000 that it was in violation of its grant agreement. In December 2000, HUD issued a letter to the housing commission requiring it to develop a default resolution plan. The two parties agreed that the housing commission would submit biweekly progress reports on Herman Gardens. When HUD found these biweekly reports to be inadequate, it notified the housing commission again in March 2002 that it was in default of its grant agreement. In the letter, HUD stated that it had been 52 months since the grant was awarded and no substantial progress had occurred. The Housing Authority of Baltimore City received a $20 million HOPE VI revitalization grant in October 1996 for Hollander Ridge (see fig. 19). Project activity was brought to a standstill by a series of legal actions, and the funds were ultimately transferred to another public housing site in the city of Baltimore. The housing authority will be selling the Hollander Ridge property to the city upon HUD approval. Additionally, the housing authority has completed construction at two HOPE VI sites—Lafayette Courts (fiscal year 1994) and Lexington Terrace (fiscal year 1995)—and is administering four additional HOPE VI grants as follows: Homeownership Demonstration (fiscal year 1994), Murphy Homes and Julian Gardens (fiscal year 1997), Flag House Courts (fiscal year 1998), and Broadway Homes (fiscal year 1999). Hollander Ridge was built in 1976 and was located on 60 acres at the eastern edge of Baltimore City. Hollander Ridge was once the public housing of choice, but over time became one of the most distressed communities in the housing authority’s portfolio. The property had over 1,000 units of family and elderly public housing. By the late 1990s, only half of the units were occupied, and the crime rate soared above the rates of Baltimore’s other public housing sites. 
Additionally, Hollander Ridge suffered from significant deferred maintenance, extensive site problems, and the deterioration of infrastructure and major building systems (see fig. 19). Because of its isolation, the site’s residents had little access to public transportation and lacked nearby shopping and employment opportunities. The Housing Authority of Baltimore City received a $700,000 HOPE VI planning grant for Hollander Ridge and one other site in fiscal year 1995. Federal legislation was passed in November 2001 that enabled the housing authority to transfer its HOPE VI funds for Hollander Ridge to Claremont Homes. The revitalization plans for Claremont Homes, which are in the preliminary stages, call for the demolition of all existing low-rise buildings and the construction of a new mixed-income development. The housing authority plans to reserve 73 units at the Claremont Homes site for former Hollander Ridge residents. However, according to the housing authority, the legislation enacted in November 2001 that allowed the housing authority to transfer the Hollander Ridge funds to the site must be amended before any of the plans to revitalize Claremont Homes can be implemented. The legislation currently only allows for the rehabilitation of Claremont Homes. As a result of third-party master planning, the housing authority determined that rehabilitation is not financially feasible; therefore, housing authority officials intend to ask Maryland’s congressional delegation to propose an amendment to the federal legislation that would allow demolition and new construction to occur at the site. Concurrence will be sought from the American Civil Liberties Union (ACLU)—the representative of the residents. The authority has submitted a disposition application to HUD for approval to sell the Hollander Ridge site to the city of Baltimore. 
Legal actions and community opposition halted progress at Hollander Ridge and ultimately led to the transfer of the HOPE VI funds to Claremont Homes. In 1995, six public housing families, represented by the ACLU, filed suit against the Housing Authority of Baltimore City and HUD alleging that they had engaged in racial and economic segregation through site selection and development of public housing in Baltimore City since 1937. On June 25, 1996, the parties entered into a partial consent decree, which was approved by a United States District Court Judge. Among other things, this decree provides that the housing authority “will not seek public housing funds from HUD for public housing construction or acquisition with rehabilitation in Impacted Areas.” The Hollander Ridge site is located in an impacted area, with a high concentration of low-income housing and a high percentage of minority populations. The housing authority’s original plan was to modernize Hollander Ridge by reducing its density through demolition and reconfiguration of existing units and upgrading the housing units and amenities. This plan was consistent with the terms of the partial consent decree, and HUD had awarded the HOPE VI grant on the basis of this plan. However, the adjacent community resisted plans to place any type of public housing back on the site. Community residents had long complained about the site’s high crime rate and its effect on nearby property values. In response to the local opposition, the housing authority decided to abandon plans to rebuild family public housing at Hollander Ridge. The housing authority and the community agreed to a subsequent plan to demolish all of the existing public housing units and replace them with facilities for seniors. The plan called for a senior village, which would provide affordable housing as well as community-based health and wellness programs for low- to moderate-income seniors. 
All 1,000 units would be demolished, and 450 senior units would be built on-site, 225 of which would be designated as public housing. The housing authority also agreed to build a $1.2 million fence around the entire Hollander Ridge site. Because the plans for a senior village would violate sections of the partial consent decree and residents would be displaced, the ACLU maintained strict opposition to the senior village concept. Nevertheless, the housing authority sought a modification to the decree that would allow the development of public housing on the Hollander Ridge site. In January 1999, the U.S. District Court approved this request. On July 8, 2000, Hollander Ridge was imploded. Just a few days later, the Fourth Circuit Court of Appeals, responding to an ACLU appeal, reversed the District Court's order. On July 31, 2000, HUD declared the grant to be in default. Federal legislation enacted in November 2001 allowed the housing authority to transfer the funds to its Claremont Homes site. As shown in figure 19, Hollander Ridge remains a vacant lot. The Holyoke Housing Authority received a $15 million HOPE VI revitalization grant in October 1996 for Jackson Parkway (see fig. 20). Fifty-one of the 272 planned units have been completed. Jackson Parkway was built in 1943 and contained 219 units on a 12.5-acre site in the Churchill section of Holyoke (see fig. 20). According to housing authority officials, the apartments and their residents were isolated from the economic and social fabric of the surrounding community. In addition, the units were run-down and unappealing. The immediate neighborhood adjacent to Jackson Parkway was marked by abandoned, obsolete, and vacant buildings and was affected by drug dealing and vandalism. The Churchill neighborhood formerly was a residential center for mill workers and other laborers. 
However, by the 1990 census, the neighborhood’s residents had a 50 percent school drop-out rate and only 37 percent participated in the workforce. Because Jackson Parkway contained almost 25 percent of all residential units in the Churchill neighborhood, its revitalization was seen as pivotal to the success of future improvements in the area. The revitalization of Jackson Parkway is estimated to cost around $47 million—which includes other public housing and HUD funds, other federal funds, and equity from low-income housing tax credits—and will occur in three phases. The first phase will consist of the demolition of 219 units and a 42-unit elderly complex and the construction of 50 public housing units, 60 homeownership units, a park, a community center, and a maintenance facility. The second phase will consist of the rehabilitation of two, five-story walkups, which will result in 39 public housing units, and the construction of 11 new public housing units. In the third phase, 112 units will be rehabilitated or constructed in the surrounding neighborhood. The new community will be called Churchill and Oakhill Homes. Of the $15 million revitalization grant, $700,000 has been set aside for community and supportive services. The focus of the community and supportive services plan, approved in March 1998, is to implement a comprehensive on-site service delivery system to coordinate existing health and human services with innovative educational and employment opportunities. The Holyoke Housing Authority plans to partner with numerous schools, universities, churches, career development organizations, libraries, and the Chamber of Commerce to implement its self-sufficiency programs. Of the 272 total units to be rehabilitated or constructed, 51 have been completed. The 50 new public housing units planned for phase one were built and fully occupied in summer 2002 (see fig. 20). Additionally, all planned phase one demolition has been completed. 
The community buildings are in the design phase, and work on the community park has begun and is expected to be completed by summer 2003. One model homeownership unit has been completed. Also, 270 applications to purchase the 60 homeownership units have been received. Selective demolition has begun for phase two—the rehabilitation of two, five-story walkups. Additionally, land has been cleared and footings and foundation walls have been set. These units are to be completed in the fall of 2003. The housing authority is working with the Catholic Diocese of Springfield and Habitat for Humanity to build new homeownership units on one complete city block. This will be the third and final phase of the revitalization. By the spring of 2000, a resident services department was established and operating to address the needs of former Jackson Parkway residents. Each Jackson Parkway resident was assessed by one of three case managers, who help residents to find employment, acquire GEDs, take English as a Second Language courses, and receive homeownership counseling. Several factors contributed to delays early in the revitalization process. Because Jackson Parkway was the authority's first experience with the HOPE VI program, its staff had to overcome an initial learning curve. For example, the staff had to learn about real estate development and low-income housing tax credits and about how to work with developers. Also, HUD's Inspector General charged the housing authority with procurement violations related to the selection of its first developer. According to HUD officials, they placed procurement review restrictions on the authority because of the lack of sufficient in-house procurement expertise. These restrictions delayed the authority's ability to obtain an infrastructure contractor and a developer for the site. One housing authority official estimated that the procurement charges delayed the progress of the grant by 1 year. 
Additionally, approval of key documents took longer than expected. For example, approval of the revitalization plan took 23 months and approval of the mixed-finance proposal for the first phase took 6 months. The housing authority has had seven different HUD HOPE VI grant managers since 1996, and staff believe that this frequent rotation caused temporary disconnects that resulted in delays. The Chester Housing Authority was awarded a $14.9 million HOPE VI revitalization grant in October 1996 for Lamokin Village (see fig. 21). Construction is complete, and all 150 units are occupied. Since 1994, the housing authority has been under judicial receivership resulting from a resident lawsuit concerning distressed housing conditions. The housing authority also was awarded a fiscal year 1998 HOPE VI revitalization grant for Wellington Ridge. Lamokin Village was built in the early 1940s and consisted of 38, two- and three-story buildings, totaling 350 units. The site suffered from substantial deterioration; major system problems, such as piping leaks and water table problems; and poor site conditions (see fig. 21). The site also had significant design problems due to its dense, maze-like building configuration with no interior streets. According to the Chester Housing Authority, Chester has been a distressed community for decades. About 56 percent of the population of Chester receives some form of government assistance, and HUD has ranked Chester as the most depressed city of its size in the United States. The housing authority was awarded a fiscal year 1995 HOPE VI planning grant for Lamokin Village and one other site as a part of the overall recovery plan for the city. The total amount budgeted for the redevelopment of Lamokin Village is $27 million, which includes other public housing funds and equity from low-income housing tax credits. 
The revitalization plan for Lamokin Village, renamed Chatham Estates, calls for three phases: (1) 22 new residential buildings with a mix of 110 one-story and duplex row homes, (2) a 40-unit senior building, and (3) 30 off-site homeownership units. All existing units in Lamokin Village were to be demolished. Of the $14.9 million revitalization grant, the housing authority budgeted about $1.2 million for community and supportive services. The community and supportive services plan, approved in December 1997, proposes a comprehensive welfare-to-work strategy designed to cultivate the economic self-sufficiency of Lamokin Village residents. Specific plans include the establishment of a “one-stop shop” for social services, a community center and educational facility to be built on-site, and a comprehensive evaluative component that will examine the impact of HOPE VI on the Chester community. The 150 units, including the 40-unit senior building, planned for phases one and two are 100 percent complete and occupied (see fig. 21). Thirty-eight former residents returned to the family rental units, and 21 former residents moved to the senior building. The third phase of the plan is being transferred to the housing authority’s fiscal year 1998 HOPE VI revitalization grant. The authority did establish an interagency “one-stop shop” in 1998 that is used as the coordinating point for all programs and partners servicing the authority’s residents. The shop is located in the Chester Crozier Hospital, along with various other social service agencies. For example, the Chester Education Foundation provides an employment program at the hospital. The authority has also included a family self-sufficiency component, which is optional for residents and provides services such as case management, computer hardware and software training, van transportation, homeownership training, and entrepreneurial training. 
The supportive services funding was expended before construction of the community and educational center could begin; the authority is currently trying to raise additional funding for this center. Finally, Widener University’s School of Social Work has been evaluating impacts and outcomes of HOPE VI initiatives in Chester since 1997. In 1994, the Chester Housing Authority was placed on HUD’s troubled status list after receiving an extremely low evaluation score. During this same period, a federal judge appointed a federal court receiver for the housing authority in an effort to transform the authority. The receivership is scheduled to end in June 2003. According to officials at the local HUD field office, the receiver has brought about many positive changes for the housing authority and its residents, including the two HOPE VI revitalization grants. In 2002, the authority received a high evaluation score, placing it in HUD’s high-performer category. The receiver ensured that the authority had the proper staffing and knowledge to administer its HOPE VI grants. Additionally, the authority brought the president of the resident council on staff, helping to rebuild the relationship between the authority and its residents. The receiver also created a separate police force to increase the safety and security of the authority’s public housing sites, the lack of which had been a major complaint of former residents. Finally, during the receivership, all of Chester’s public housing family units have either been demolished or rehabilitated. Relying primarily on public housing funds simplified the development process. Tax credit equity was only used to finance the construction of the 40-unit senior building. The remainder of the redevelopment was financed by HOPE VI and other public housing funds. In addition, the housing authority elected to act as its own developer of the family units. 
Finally, all units were constructed on-site; thus, the housing authority did not have to purchase additional property. The San Francisco Housing Authority was awarded a $20 million HOPE VI revitalization grant for North Beach in October 1996. Construction at the site did not begin until November 2002 (see fig. 22). The housing authority has also completed three sites with two HOPE VI revitalization grants—Bernal/Plaza (fiscal year 1993) and Hayes Valley (fiscal year 1995)—and construction at its Valencia Gardens site (fiscal year 1997) is scheduled to begin later this year. Located adjacent to Fisherman's Wharf and surrounding the historic cable car turnaround, North Beach is situated in the heart of San Francisco's tourist attractions. The site is surrounded by a busy, densely built, vibrant neighborhood that is well-served by public transportation, schools, shopping, and services. However, North Beach itself has been a pocket of poverty, with residents earning, on average, only 17 percent of area median income. The site was built in 1952 and consisted of 13 concrete buildings with 229 walk-up units, which filled two city blocks (see fig. 22). It was poorly designed with large amounts of indefensible space that became havens for criminal activity. Due to repeated earthquake stress, the buildings were weakening and had substandard major systems, including sewer and plumbing. A $400,000 HOPE VI planning grant awarded in fiscal year 1995 for North Beach funded a study of the site. The study determined that due to the dilapidated condition of the site and the high crime rate in the area, complete neighborhood revitalization would be essential to any redevelopment plan. In addition to the $20 million revitalization grant, the San Francisco Housing Authority was subsequently awarded a $3.2 million HOPE VI demolition grant for the North Beach site in fiscal year 2001. 
The total projected budget is $106 million—up from the $69 million estimated in 1996—and includes other public housing funds, other HUD funds, other federal funds, and equity from low-income housing tax credits. The revitalization plans call for 341 units, divided as follows: 229 public housing units, a one-for-one replacement for the units demolished on both the east and west blocks, and 112 rental apartments for families with incomes below 50 percent of the city median income. Also included in the plans are a parking garage for 323 cars and commercial and retail space surrounding the cable car turnaround area. Approximately $1.5 million of the revitalization grant was set aside for community and supportive services. This service component was created to provide residents with opportunities to achieve self-sufficiency through education, employment, and entrepreneurship. The community and supportive services plan, approved in May 2001, calls for a commitment to lifelong education that includes the development of basic intellectual skills, specific training for particular types of employment, and a focus on life skills such as parenting. Relocation, abatement, and demolition of both the east and west blocks have been completed (see fig. 22). California awarded the authority $55 million in tax credits in the spring of 2002 for the North Beach site, the largest award in California history. With this additional funding, the housing authority was able to begin construction at the site in November 2002. About half of all residents currently participate in community and supportive services. Participants create an individual plan with a case manager, who then directs the resident to the various services offered, such as employment assistance, computer classes, and English as a Second Language classes. Additionally, 30 residents from North Beach are enrolled in the housing authority's family self-sufficiency program. 
Program participation enables each household to receive up to $1,200 for training in various trades. According to housing authority officials, the primary factor contributing to delays at North Beach was resident resistance. To address resident concerns regarding relocation, a former executive director initially promised residents that the redevelopment would occur in two phases, which meant that they would not have to be relocated off-site. However, the housing authority later determined that this option would be too expensive, and that the residents would have to be relocated off-site so that redevelopment could occur all at once. The residents were not happy with this decision and were very reluctant to move out of their apartments. Funding shortfalls have also contributed to delays at the North Beach site. San Francisco's original HOPE VI application requested $30 million to complete the revitalization of North Beach. Because HUD awarded only $20 million, making up the difference has been difficult. The authority had to add 112 units to the plan in order to convince the city to provide $10 million in funding assistance. According to housing authority officials, now that the project has been awarded $55 million in tax credits, the pace of the redevelopment should accelerate. Administering over $118 million in HOPE VI funds for five sites simultaneously has been challenging for the authority's staff. The housing authority has a history of management and financial problems that have affected its redevelopment efforts. HUD took over the housing authority in 1996 after the Mayor of San Francisco requested HUD's assistance. The authority had managerial problems, high crime at its public housing developments, and problems with the physical condition of its housing stock. After implementing new policies and procedures and reorganizing the housing authority, HUD returned it to local control in 1997. 
Several years after the housing authority was returned to local control, it developed financial difficulties and again sought HUD’s assistance. HUD continues to monitor and provide assistance to the housing authority. Another factor that delayed the North Beach redevelopment was environmental problems on-site. Half of the units contained lead paint and asbestos, and the site’s soil had some arsenic, mercury, zinc, and lead contamination (due to the site’s early industrial history). As a result, the city required additional environmental reviews before it gave its approval to begin construction. The Cuyahoga Metropolitan Housing Authority was awarded a $29.7 million HOPE VI revitalization grant for Riverview and Lakeview Terraces in October 1996 (see fig. 23). Although the housing authority has completed relocation and demolition, the rehabilitation of units at Lakeview has been slow, and little progress has been made with the construction of new units at Riverview. The housing authority has been awarded two other HOPE VI revitalization grants: a $50 million grant in fiscal year 1993 for Outhwaite Homes/King Kennedy, which is complete, and a $21 million grant in fiscal year 1995 for the Carver Park site. Riverview, completed in 1963, consisted of 143 family units and 501 elderly units (see fig. 23). Lakeview, completed in 1932, contained 570 family units and 214 elderly units. Riverview and Lakeview are neighboring public housing sites, which collectively housed 715 elderly units and 713 family units. Riverview is on unstable ground, which includes numerous sinkholes. Both developments are located in the Ohio City neighborhood, home to the West Side Market, which has been in operation since the 1880s and attracts around 1 million visitors each year. Due to its age, the Lakeview units had many problems, including high lead levels, lack of parking, and obsolete underground plumbing and storm lines. 
In addition, the majority of the Lakeview units were one- and two-bedroom units, while the local demand is for three-bedroom and larger units. The total projected budget for the Riverview/Lakeview revitalization is about $112 million, which includes other public housing funds, other federal funds, equity from the sale of low-income housing tax credits, bank financing, and other local funds. The current revitalization plan calls for 95 new public housing units, 240 rehabilitated public housing units, and 345 new market-rate and moderate-income units. For Riverview, there are plans to construct 45 public housing units on-site and 50 off-site, to acquire 54 off-site public housing units, and to construct 228 market-rate and 117 affordable (tax credit) units. At the Lakeview site, there are plans to renovate 186 public housing units and a community center. There are also plans for site improvements, including the demolition of garage compounds. Of the $29.7 million in HOPE VI funds, the housing authority plans to set aside $5.8 million for community and supportive services. The goals of its community and supportive services plan, approved in July 2000, are to track and provide services to Lakeview residents and relocated families from Riverview, help all interested residents meet the qualifications for moving into the newly renovated units, and help Lakeview and Riverview residents make the transition from welfare to work. The renovation of the first 56 units at the Lakeview site is under way, and six units have been completed (see fig. 23). The demolition of the garage compounds and rehabilitation work are moving along as scheduled, according to the housing authority. The relocation of 98 households and the demolition of 135 units are complete at the Riverview site (see fig. 23). The housing authority has also acquired 54 single-family homes in scattered sites, which are fully occupied, but the construction of new units is not scheduled to begin until October 2004. 
In June 2002, the housing authority received an award for its plan for the Riverview site from the Congress for New Urbanism. The housing authority is in the process of executing a development agreement. Case management activities are in progress for 343 Riverview and Lakeview residents. These residents participate in a range of activities, including entrepreneurial and employment training and educational programs. The housing authority is also in the process of implementing a new system for ensuring that residents can receive the job-training services that they need by using vouchers to purchase services. The housing authority was experiencing internal problems when the grant was awarded in 1996. The prior administration was not following appropriate procurement procedures, according to HUD officials, and the former executive director was ultimately convicted of theft of public funds, mail fraud, and lying about a loan. A new executive director was hired in late 1998, and the housing authority was finally able to focus on the HOPE VI grant in 1999. The project has also experienced delays due to cost constraints, consideration of community and resident input, and problems with the site. First, the housing authority requested $40 million to implement its revitalization plan, but it was awarded $29.7 million. As a result, it took time for the housing authority to obtain other funding. Next, the housing authority did not originally plan to put public housing back on the Riverview site because the land was sloping and unstable. Due to community and resident opposition to this plan, the housing authority agreed to put public housing units back on-site. Subsequent analysis by an engineering firm revealed that certain areas were stable enough for new construction. Similarly, while the housing authority originally planned to modernize 12 of the buildings at Lakeview, it later revised these plans to include modernization of an additional 66 row-house units. 
The Wilmington Housing Authority was awarded an $11.6 million HOPE VI revitalization grant for Robert S. Jervay Place (Jervay Place) in October 1996 (see fig. 24). Relocation and demolition at Jervay Place are complete, but construction has been slow to start. Jervay Place, constructed in 1951, was made up of 30, two-story, brick buildings that housed 250 units on 14 acres of land (see fig. 24). The building configuration yielded limited defensible space for each dwelling unit and rendered the site vulnerable to criminal activity. The site needed renovation, lead-based paint removal, asbestos abatement, and modifications for the handicapped. In addition, the resident population consisted of young, welfare-dependent, single-parent families. The total projected budget for the Jervay Place revitalization is $33 million, which includes equity from low-income housing tax credits, other grants, and private debt. The revitalization plans called for 190 new units to be developed at Jervay Place and surrounding sites in four phases, excluding a phase dedicated to the implementation of community and supportive services. The construction phases are as follows: construction of 14 for-sale or lease-purchase units on the original site; construction of 60 units and a community center on the original site; construction of 44 for-sale or lease-purchase units on the original site; and construction of 32 scattered-site for-sale or lease-purchase units. Of the 190 new units, 71 would be public housing units, 29 would be financed with a combination of low-income housing tax credits and project-based Section 8, 28 would be lease-purchase units, and 62 would be other subsidized homeownership units. A 7,000-square-foot, commercial-retail space will also be constructed on-site, but the housing authority has not determined in which phase this will be done. 
Of the $11.6 million in HOPE VI funds, the Wilmington Housing Authority planned to set aside $1.5 million for community and supportive services. The focus of its service efforts would be transportation, job training and placement, education, health care, and child care. The housing authority also planned to establish partnerships with local schools and businesses. Relocation, demolition, and 4 of the 14 phase one homeownership units have been completed, and construction of the next 5 units is under way (see fig. 24). For phase two, construction began in November 2002, and tax credits have been approved. For phase three, the housing authority is working on its homeownership plan. The final phase of construction has not begun. The housing authority estimates that all of the units will be complete in August 2005. HUD approved the housing authority’s community and supportive services plan in February 1999. The housing authority administers services through its family self-sufficiency program, through which case managers are assigned to work with individual households and match them with appropriate services. Case managers have worked with participants to assist them with their self-sufficiency goals, including working with residents to prequalify them to purchase the homes constructed in phase one. Residents who wish to return to Jervay Place must be enrolled in this program. As of January 2003, 62 of the 132 original residents were enrolled. The procurement of the initial development partner was legally challenged by one of the other bidders. According to HUD, a considerable amount of time was spent resolving this issue, and HUD’s Office of General Counsel ultimately determined the challenge was unfounded. However, the housing authority and the initial developer did not work well together, and the developer was released in July 1999. 
A new developer was hired in April 2001, and HUD assigned an expediter—a private-sector expert in finance, real estate development, or community revitalization—to help move the project. According to housing authority officials, both the housing authority and the second developer had to work through resistance from the community and residents, who did not understand the plans because the previous developer had not involved them in the planning and who were frustrated by the lack of progress at Jervay Place. As a result of these issues, the housing authority did not submit its revitalization plan until December 2000. HUD approved the plan in October 2001. According to housing authority officials, revitalization also has been adversely affected by the city's and HUD's slow approval processes. For example, while the city informed the housing authority in August 2001 that its site plan had been approved, it was informed in December 2001 that the site plan should not have been approved because the setbacks, the space between the building area and the property line, were incorrect. As a result, the site plans had to be changed and resubmitted to obtain the city's approval. Similarly, housing authority officials stated that HUD's slow approval process has contributed to delays. For example, it took HUD 5 months to conditionally approve the revitalization plan. In addition, housing authority officials stated that they had to take out a line of credit to begin construction because HUD was taking too long to make the grant funds available. According to HUD, approval could not be completed until the housing authority fulfilled several conditions, including submission of a mixed-finance proposal, a revised implementation schedule, proposed unit designs, and a revised HOPE VI budget. 
In addition, the HUD grant manager assigned to the housing authority was responsible for closing six mixed-finance deals as well as reviewing new HOPE VI grant applications during this time frame. The Chicago Housing Authority was awarded a $25 million HOPE VI revitalization grant in October 1996 for Robert Taylor Homes B (see fig. 25). Relocation and demolition are complete, and approximately one-quarter of the planned units have been constructed. The housing authority’s scattered site program, which includes the development of any nonelderly public housing, has been under judicial receivership since 1987. The authority is in the midst of implementing a 10-year transformation plan, a $1.5 billion blueprint for rebuilding or rehabilitating 25,000 units of public housing—enough for every leaseholder as of October 1999—and transforming isolated public housing sites into mixed-income communities. The authority was awarded a revitalization grant for Robert Taylor A in fiscal year 2001 and has also received grants for the following sites: Cabrini-Green (fiscal year 1994), ABLA (fiscal years 1996 and 1998), Henry Horner (fiscal year 1996), Madden/Wells/Darrow (fiscal year 2000), and Rockwell Gardens (fiscal year 2001). The Robert Taylor Homes consisted of over 4,300 units in 28 detached, 16-story buildings along Chicago’s State Street corridor, a 4-mile stretch of five different public housing sites (see fig. 25). It was the nation’s largest, most densely populated public housing enclave. The Robert Taylor Homes were divided into two subsites called Robert Taylor A and B. The fiscal year 1996 HOPE VI revitalization grant is for Robert Taylor B, which was constructed between 1959 and 1963, and consisted of 2,400 units spread over 16 high-rise buildings. The surrounding neighborhood included many boarded-up buildings, vacant lots, and a few small businesses. However, the site also is near bus and train services and a technical vocational school. 
In addition to the revitalization grant for Robert Taylor B, the Chicago Housing Authority was subsequently awarded a $6.3 million HOPE VI demolition grant in fiscal year 2000 and a $13 million HOPE VI demolition grant in fiscal year 2001. The total projected budget for the Robert Taylor B revitalization is $113 million, which includes other public housing funds, other federal funds, conventional debt, and equity from the sale of low-income housing tax credits. The revitalization plans call for the demolition of 762 units and the construction of 251 public housing units in scattered off-site locations throughout the surrounding neighborhoods. Of the $25 million revitalization grant, approximately $1.5 million has been budgeted for community and supportive services. The community and supportive services plan was submitted and approved in June 1998. The plan states that the housing authority will provide case managers to monitor families’ progress in meeting goals established in self-sufficiency plans. The plan also allowed the housing authority to use a Boys and Girls Club to deliver self-sufficiency activities until a community center was constructed in 1998. The services provided would include a combination of employment; education; and family services, such as child care and health care. As shown in figure 25, a 116-unit site, referred to as The Langston, has been constructed and is at capacity. Twenty-nine of these units are public housing units and are occupied by former residents of Robert Taylor A and B. The remaining units are a mixture of tax credit and market-rate units. Construction of a second site, referred to as The Quincy, is also complete. The Quincy has 107 units, including 27 public housing units, which are fully occupied. The remaining units are also a mixture of market-rate and tax credit units. 
In February 2003, HUD approved the combination of the 1996 grant for Robert Taylor B with the 2001 grant for Robert Taylor A for planning and implementation purposes as well as the extension of certain grant agreement deadlines affecting the 1996 grant. As a result, while the housing authority is still obligated to complete 195 more public housing units under the 1996 grant, these units will be developed as a part of a new three-phase Robert Taylor Master Plan. Construction on the first phase of this plan is scheduled to begin in late 2003. The housing authority is currently in the process of revising its community and supportive services plan to incorporate its service connector program, in which case managers work individually with residents to provide necessary services or refer them to the appropriate providers. The housing authority is in the process of locating the original residents, finding out whether they are using any supportive services through the housing choice program, and determining what services they need. According to the housing authority, the primary service provided to the original residents has been relocation assistance. In addition, the Charles Hayes Family Investment Center opened in September 1998 adjacent to the original site, offering a one-stop source for computer training, job placement, medical care, and other supportive services. The revitalization of Robert Taylor B has been slowed by tension early in the relationship between the Chicago Housing Authority and its receiver and by the need for the plans to comply with the Gautreaux consent decree. In 1966, African American residents of the Chicago public housing community filed suit against the housing authority for creating a segregated public housing system. 
In response, the court issued a judgment that prohibits the housing authority from constructing any new family public housing in a neighborhood in which more than 30 percent of the occupants are minorities (limited areas) unless it develops an equal number of units in neighborhoods where less than 30 percent are minorities (general areas). In 1987, the court appointed a receiver for Chicago’s scattered-site program, which includes the development of nonelderly public housing. According to a housing authority official, the first delay at Robert Taylor B occurred because the housing authority did not develop its revitalization plan with the input of the receiver. The housing authority submitted the plan to HUD in January 1998, and 9 months later HUD informed the housing authority that it could not act on the plans without the concurrence of the receiver. It took over 1 year for the housing authority and the receiver to revise the plans together and to address HUD’s specific concerns. HUD approved the plan in December 1999, but it only partially approved the HOPE VI budget because the housing authority and the receiver had not come to agreement on the receiver fee. The determination of how grant funds should be disbursed between the housing authority and the receiver was not finalized until May 2000. The housing authority also has experienced difficulty obtaining off-site locations for the balance of the public housing units that need to be constructed. To address this difficulty, the housing authority has proposed combining the revitalization efforts of Robert Taylor B with the revitalization funded under the fiscal year 2001 Robert Taylor A grant. The housing authority is working on obtaining a revitalizing order for the Robert Taylor community, which would waive the Gautreaux restrictions. Revitalizing orders allow the construction of new family public housing units in limited areas without requiring an equal number of units to be built in a general area. 
The revitalizing circumstances must support a reasonable forecast of economic integration, with the longer term possibility of racial integration. The housing authority hopes that it can use the work already completed with the Robert Taylor B grant to show that the area is being revitalized. Finally, receipt of the fiscal year 2001 HOPE VI grant for Robert Taylor A has slowed progress at Robert Taylor B. After receiving this grant, the housing authority took time to develop a master plan to coordinate the development of both Robert Taylor A and B. The master plan allows the housing authority to combine the grants for planning purposes, although they remain administratively separate. In addition, the Robert Taylor site has not consistently been a top priority for the housing authority. According to a housing authority official, other sites that are further along have been selected to get the majority of the housing authority’s time, energy, and resources. The Housing Authority of New Orleans was awarded a $25 million HOPE VI revitalization grant for St. Thomas in 1996. Although relocation and demolition have been completed, no new units have been constructed (see fig. 26). The housing authority is currently under administrative receivership. The housing authority was also awarded a HOPE VI revitalization grant for the Desire site in fiscal year 1994. St. Thomas, completed in 1941, consisted of 1,510 public housing units on almost 50 acres (see fig. 26). The site was located in a mixed-use neighborhood close to the central business district and the Garden District. The neighborhood in which St. Thomas is located was recently designated as a historic district. St. Thomas had a vacancy rate of 50 percent when the Housing Authority of New Orleans applied for the HOPE VI grant. The original site had a density of approximately 30 units per acre and contained long spaces between buildings, which were conducive to criminal and violent behavior. 
Moreover, underground utilities were either obsolete or deteriorated. Stormwater flooding and sanitary line overflows were common. The odor of sewage was pervasive throughout the site. In addition to the revitalization grant, the Housing Authority of New Orleans was awarded a HOPE VI demolition grant in the amount of $3.5 million to demolish 701 units at St. Thomas. With funds from the city, state, tax-exempt bonds, and other sources, the total projected budget for the revitalization of St. Thomas is $293 million. The revitalization plans call for a total of 1,238 units, including construction of 182 on-site public housing units, 107 on-site public housing eligible rental units, 15 on-site affordable homeownership units, 100 off-site public housing eligible rental units, and 50 off-site affordable homeownership units; construction of a 200,000-square-foot retail center on 17 acres adjacent to the site; and historic preservation and renovation of five of the original St. Thomas buildings. Of the $25 million revitalization grant, the housing authority plans to spend $4 million on community and supportive services. The housing authority will attempt to contact all of the original St. Thomas households and conduct assessments of their needs. On the basis of these assessments, a detailed case management plan will be drafted. The St. Thomas community and supportive services plan, which HUD approved in July 2001, documents goals and objectives for achieving self-sufficiency for the residents of St. Thomas in the following areas: employment and income generation, education, training, homeownership training and assistance, health, strengthening families, and services to build community leadership. The St. Thomas site has been cleared, but construction has not yet started (see fig. 26). The relocation of 739 families was completed in June 2001, and demolition of 1,365 units was completed in December 2001. As of April 2003, infrastructure work at the St. 
Thomas site was 60 percent complete. The transfer of property from the housing authority to the retail developer for the construction of the retail center is scheduled to occur by June 2003. This property transfer is contingent upon the housing authority’s submission of documents to HUD for the closing of the first phase of construction on residential units, an escrow deposit from the developer to guarantee the construction of residential housing, and the environmental clearance for the retail site. State economic development bonds were approved in December 2002, which enabled negotiations regarding the retail center to progress. The historic preservation of five of the original St. Thomas buildings also has begun. The housing authority has hired Kingsley House, a social service provider located near the St. Thomas site, to perform assessments and provide case management plans in accordance with the community and supportive services plan. The Kingsley House, established in 1896, administers a variety of programs from Head Start to adult day care. Assessments have been conducted on 451 of the 739 families that were affected by the redevelopment plans. According to housing authority officials, progress has been delayed due to funding shortfalls. Although the housing authority requested $40 million, HUD awarded $25 million, which was not enough to revitalize the St. Thomas site. Similarly, the city could provide only $6 million of the $20 million needed for infrastructure at the site. As a result, the developer had to take time to identify other funding sources. Moreover, it took approximately 2 years from the time the developer informed HUD of its intention to employ tax-increment financing (TIF) until the New Orleans City Council approved it. Approval of the TIF was delayed due to public pressure against the TIF concept and the project itself. 
Moreover, the state bond commission did not approve the issuance of bonds until December 2002, after nearly 6 months of delays due in part to the need to complete environmental review processes. Also, although the housing authority selected a developer in September 1997, the HUD Office of Inspector General identified problems with the selection process. Specifically, the Inspector General found that the housing authority allowed the majority of the selection panel members to be nonhousing authority individuals. The Inspector General also found that the interaction of the initial developer with certain members of the selection panel and St. Thomas residents constituted both a perceived and actual conflict of interest. As a result, the housing authority selected a new developer in October 1999. Once selected, the new developer reconfigured the revitalization plan. Delays continue because the St. Thomas site is located in a historic district. Preservationists opposed demolition of existing buildings and the construction of the retail center because of its size, design, financing, impact upon traffic, and negative effect upon local businesses. The housing authority consulted with environmental and preservationist groups and executed a Memorandum of Agreement in September 2000 that stipulated the preservation of five of the original St. Thomas buildings and a warehouse as well as other measures aimed at minimizing adverse environmental impact in and around St. Thomas. Consultation began in 2001 for an amended Memorandum of Agreement to consider the retail component proposed for the site. In July 2002, a nonprofit organization filed a lawsuit against the housing authority and HUD (1) stating that they were not in compliance with environmental and historic preservation laws and (2) seeking to have HUD withhold all HOPE VI funds from the housing authority. 
Since the filing of the lawsuit, HUD has completed a supplemental environmental assessment and has published a finding of no significant impact. Moreover, the housing authority, HUD, and other parties have executed an amended Memorandum of Agreement. The case was reopened in March 2003, but it was dismissed by a judge in April 2003. Finally, the Housing Authority of New Orleans has had a long history of management problems, and its public housing has long been in very poor condition. In 1996, HUD entered into a “cooperative endeavor agreement” with New Orleans to correct problems at the housing authority. Under this agreement, HUD dissolved the housing authority’s board of commissioners and chose a HUD representative as Executive Monitor to oversee the authority’s progress in implementing improvements. In 2002, after the housing authority had made little progress, HUD took control of its management and operations. According to HUD officials involved in the receivership, they are working on reallocating staff resources, reorganizing the housing authority’s structure, and cutting back on unnecessary expenditures. The Housing Authority of Kansas City, Missouri, received a $13 million HOPE VI revitalization grant for Theron B. Watkins in November 1996 (see fig. 27). This grant has funded the revitalization of the Watkins site and will fund additional revitalization plans at another site and off-site units. The authority has had numerous problems related to management and maintenance of its properties, and it was placed under judicial receivership in 1994. The authority also was awarded three other HOPE VI revitalization grants—a fiscal year 1993 revitalization grant for Guinotte Manor, a fiscal year 1997 revitalization grant for Heritage House, and a smaller revitalization grant for Heritage House awarded in fiscal year 1998 that is complete. For many years, the Theron B. Watkins site served as the symbol for urban decline in Kansas City. 
With its deteriorated structures, large open entryways, and outdated and neglected electrical systems, the site suffered from many of the same problems identified in housing of similar design throughout the country. The site was built in 1953 and contained 288 units in 22 three-story buildings. In the late 1980s, living conditions at the site began to deteriorate at a rapid pace, with drug dealing and related crime rampant; units in disrepair and neglect; and the housing authority unable to address problems because of its own mismanagement. These conditions created an unsafe living environment that prompted residents to vacate the site in large numbers. Upon the arrival of the receiver in 1994, problems at the site included a 43 percent vacancy rate; enormous backlogs of uncompleted maintenance work; high rates of criminal activity; and hundreds of families living in dangerous, substandard conditions. According to the revitalization plan, the Housing Authority of Kansas City, Missouri, would use its $13 million HOPE VI revitalization grant to fund portions of several redevelopment projects. The majority of the grant would fund the rehabilitation of 75 units at the Theron B. Watkins site. (Other public housing funds would be used to complete the rehabilitation of the remaining units.) Additionally, some of the HOPE VI funds would be used to rehabilitate 74 townhomes at the housing authority’s Wayne Miner site. Finally, the funding would be used to demolish 24 units at Theron B. Watkins. These units would be replaced in two off-site communities. Of the $13 million revitalization grant, $1.4 million was budgeted for community and supportive services. The funds would be used to provide case management, community policing, and programs and activities. An additional $314,000 would be used to renovate the housing authority’s family development center. Of the 173 total planned units, 149 have been completed. The rehabilitation of 75 units at the Theron B. 
Watkins site is complete (see fig. 27), as is the renovation of the family development center. The rehabilitation of the 74 townhomes at the Wayne Miner site was completed in March 2003. The replacement of the 24 demolished units in two off-site, mixed-income developments remains in the planning stage. However, due to recent tax credit awards, construction on 13 of the 24 replacement units is scheduled to begin in June 2003. Community and supportive services for residents of Theron B. Watkins include bilingual case management for the large immigrant population, community policing, transportation, public health programs, and youth development activities. The housing authority recently conducted a needs assessment of its residents, which demonstrated the residents’ preference for case management. Services for children are offered at an on-site community center, including Head Start, Parents as Teachers, Boy/Girl Scouts, and the Police Athletic League. The housing authority had already begun the revitalization of Theron B. Watkins with other public housing funds when the fiscal year 1996 HOPE VI revitalization grant was awarded. Additionally, the receivership improved the management of the housing authority, which ensured that the authority had the staffing and expertise to implement its HOPE VI grants. Although the on-site renovation was completed by April 2000, the other two parts of the redevelopment effort have faced challenges. The housing authority’s initial HOPE VI application included the Wayne Miner site as a mixed-income development, but after an evaluation of financial feasibility and market demand, the housing authority decided that mixed-income development would not be sustainable at the site. Thus, the housing authority had to redo its plans for the site to include only public housing. The plans to replace the 24 demolished Theron B. 
Watkins units at two off-site, mixed-income developments were delayed when the housing authority’s fiscal years 2001 and 2002 applications for low-income housing tax credits were denied. However, in early 2003, one of the two mixed-income developments was awarded tax credits, and construction is expected to begin in June 2003. The housing authority plans to reapply for tax credits for the other development in the fall of 2003. The Spartanburg Housing Authority was awarded a $14.6 million HOPE VI revitalization grant for Tobe Hartwell Courts and Tobe Hartwell Extension in October 1996 and has completed all of the planned public housing and homeownership units, a community center, and nearly half of the planned tax credit units (see fig. 28). Tobe Hartwell Courts and Tobe Hartwell Extension—constructed in 1941 and 1952, respectively—contained 266 units in concrete and masonry buildings (see fig. 28). High density, narrow streets, limited rehabilitation options, and general disrepair characterized the development. In 1996, the incidence of crime at this development was 19 percent higher than in Spartanburg public housing in general, and nearly 40 percent of the residents did not have a high-school diploma. The housing authority was awarded a $400,000 HOPE VI planning grant for Tobe Hartwell Courts and Extension in May 1995. The total projected budget for the project is $30 million, which includes tax credit equity and private funds. The revitalization plans for Tobe Hartwell Courts and Tobe Hartwell Extension, renamed the Tobias Booker Hartwell Campus of Learners, call for 268 new units to be developed in the following four phases: Phase one: 118 public housing replacement units and a community center on the original site. Phase two: 50 single-family homes on two off-site locations. Phase three: 50-unit, off-site apartment complex (40 low-income housing tax credit units and 10 public housing units). 
Phase four: another 50 low-income housing tax credit off-site units. Of the $14.6 million in HOPE VI funds, approximately $803,000 was set aside for community and supportive services. The community and supportive services plan, approved in May 1998, stated that case managers would administer the program and monitor residents’ progress. The community center would be the hub of the supportive services component and would include a day-care facility, a computer center, a clinic, meeting rooms, staff offices, and a combined gymnasium and multipurpose community room. The 118 replacement public housing units were completed in February 2001 and are now fully occupied (see fig. 28). All 50 homes are complete, 36 have been sold, and contracts are in place for 7. Of the 50 tax credit units planned for phase three, all have been constructed and accepted. Site infrastructure work is complete for phase four, and the housing authority is awaiting the 2003 low-income housing tax credit cycle to apply for building funds for this phase. A needs assessment of the residents was updated in January 2000, and provision of supportive services began in December 2000. The community center is complete, and the day-care and health-care components are fully operational. Classes are also under way in the computer lab, and case managers are on-site. Spartanburg Housing Authority officials believe that they have been successful for several reasons. First, receipt of a planning grant enabled the housing authority to thoroughly plan the revitalization. As a result of this early planning, the housing authority made few changes to its plans after the revitalization grant was awarded. Also, housing authority officials emphasized that they involved their residents early and often, enabling them to avoid the delays and difficulties that many other housing authorities have experienced. 
Moreover, housing authority officials emphasized that their previous executive director provided strong leadership and was the driving force behind the planning and implementation of their revitalization grant. The financing of this grant was relatively simple compared with the financing that other housing authorities must arrange to construct mixed-income developments. For example, the housing authority put all public housing units back on-site. In addition, in South Carolina, the state housing finance agency sets aside low-income housing tax credits for HOPE VI sites. This made it easier for the housing authority to obtain tax credits for its off-site components. In addition to those named above, Catherine Hurley, Kevin Jackson, Barbara Johnson, Alison Martin, John McGrail, Sara Moessbauer, Marc Molino, Lisa Moore, Barbara Roesmann, Paige Smith, Ginger Tierney, and Carrie Watkins made key contributions to this report. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. 
Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to GAO Mailing Lists” under the “Order GAO Products” heading.
Congress established the HOPE VI program to revitalize severely distressed public housing. In fiscal years 1993 to 2001, the Department of Housing and Urban Development (HUD) awarded approximately $4.5 billion in HOPE VI revitalization grants. The Ranking Minority Member, Subcommittee on Housing and Transportation, Senate Committee on Banking, Housing, and Urban Affairs, asked GAO to examine HUD's process for assessing grant applications, the status of work at sites for which grants have been awarded, and HUD's oversight of HOPE VI grants. HUD has generally used the same core rating factors to assess HOPE VI grant applications--need, capacity, quality, and leveraging. However, HUD has, over time, increased the requirements that housing authorities must meet for each of these factors in order to make better selection decisions. Although authorities' historical program performance had been considered under various rating factors, it was not until fiscal year 2002 that past performance became a threshold requirement that an applicant must meet to be eligible for a grant. The status of work at HOPE VI sites varies greatly, with construction complete at 15 of the 165 sites. As of December 31, 2002, grantees had completed 27 percent of the total planned units and spent approximately $2.1 billion of the $4.5 billion in HOPE VI revitalization funds awarded. However, the majority of grantees have not met their grant agreement deadlines. For example, the time allowed for construction has expired for 42 grants, yet grantees completed construction within the deadline on only 3 grants. Several factors affect the status of work at HOPE VI sites, including the development approach used and changes made to revitalization plans. HUD's oversight of HOPE VI grants has been inconsistent, due partly to staffing limitations and confusion about the role of field offices. Both headquarters and field office staff are responsible for overseeing HOPE VI grants. 
However, HUD field offices have not systematically performed required annual reviews. Additionally, despite grantees' inability to meet key deadlines, HUD has no formal enforcement policies. Instead, the agency determines if action should be taken against a grantee on a case-by-case basis. Although HUD has declared 9 grants to be in default and issued warnings regarding 3 grants, it has not done so for other grants in a similar situation.
Although its effect on communities can be devastating, wildland fire is a natural and necessary process that provides many benefits to ecosystems, such as maintaining habitat diversity, recycling soil nutrients, limiting the spread of insects and disease, and promoting new growth by causing the seeds of fire-dependent species to germinate. Wildland fire also periodically removes brush, small trees, and other vegetation that can otherwise accumulate and increase the size, intensity, and duration of subsequent fires. Wildland fire occurs in various combinations of frequency and severity, from low-severity events that return every few decades to high-severity fires that occur once every 200 years or more. Over the past century, however, various management practices—including fire suppression, grazing, and timber harvest—have reduced the normal frequency of fires in many forest and rangeland ecosystems and contributed to abnormally dense, continuous accumulations of vegetation, which can fuel uncharacteristically large or severe wildland fires. The impacts of these fires have intensified as more and more communities develop in areas that are adjacent to fire-prone lands—the wildland-urban interface. Federal researchers have estimated that unnaturally dense fuel accumulations on 90 million to 200 million acres of federal lands in the contiguous United States place these lands at an elevated risk of severe wildland fire. The rapid urbanization of forested land in Colorado and Utah has raised concerns about the unhealthy condition of forests in those states and the potential for resulting wildland fires. These forests also have undergone insect and disease attacks of epidemic proportions, further weakening them and contributing to the abundance of fuels for wildland fires. 
For example, the mountain pine beetle epidemic now affecting the southern Rocky Mountains and other western areas has produced vast areas of dead and dying lodgepole pine forests in Colorado and Wyoming. In recent years, wildland fires in Colorado and Utah have increasingly threatened communities in the interface as well as watersheds (i.e., areas that are drained by rivers or other waterways) that provide water to populated areas in or near forests. The U.S. Forest Service and BLM are the primary federal agencies responsible for wildland fire management—together, they oversee about 450 million acres of forest and rangeland. These agencies take various steps to reduce hazardous fuels (fuel reduction) on wildlands, including mechanical treatments that use equipment to cut vegetation back to desired levels (thinning), planned low-level fires that burn small trees and underbrush (prescribed fire), herbicides that kill unwanted vegetation, animal grazing, or combined treatments that use two or more of these methods. Through these efforts, the agencies attempt to restore forest and rangeland ecosystems to their historical conditions and reduce the risk of severe wildland fires. Like their federal counterparts, some state forest services also have an important role in community fire prevention. Such agencies maintain crews that suppress wildland fires, conduct forest thinning and prescribed burns, advise local landowners on ways to build fire-resistant structures, and direct homeowners to local contractors who provide fuel reduction services. They also assist in the development of community wildland fire protection plans that set priorities for fuel reduction treatments and recommend specific strategies to reduce fire risk on public and private land. In addition to efforts to reduce the risk that wildland fires will occur, federal and state agencies take other steps to mitigate the impact of wildland fires. 
These steps include projects to stabilize damaged areas and rehabilitate them more quickly than would occur under natural conditions. Such projects involve activities such as planting native grasses, shrubs, and trees; protecting waterways from erosion that could introduce sediment into municipal water supplies; and restoring habitat for local fish and wildlife populations. Attempts at widespread fuel reduction and postfire rehabilitation in the wildland-urban interface can be frustrated by the diverse mixture of property ownership typically found in this region. A single forest area may contain tracts of land that are publicly owned, such as national forests and state parklands, as well as tracts that are controlled by a multitude of private owners. This mixed-ownership setting creates the potential for individual pockets of untreated land to exist within a project area if some property owners do not want to join the effort. For example, U.S. Forest Service efforts to treat national forest land may be impeded if access to these areas is dependent upon consent from private property owners. Access to national forest land may also be limited if the project site falls within an area where road construction is restricted. In such instances, areas left untreated can diminish the effectiveness of the overall project. Even if the U.S. Forest Service wanted to join the project, a separate contract with the vendor—containing separate requirements for contract performance—would typically be necessary. Beginning in 1998, the U.S. Forest Service and CSFS began exploring ways to manage land across ownership boundaries, particularly in wildland-urban interface areas. The two forest services agreed that management activities such as fuel reduction should be undertaken only where community interest and support exists, and, thus, these activities would be driven largely by state, local, and private projects. 
To facilitate this work, they determined that it would be useful for Colorado state foresters to serve as agents of the U.S. Forest Service for the purpose of conducting projects on federal lands immediately adjacent to state, local, or private lands where similar work was under way. Colorado’s foresters would be authorized to mark boundaries, designate trees for removal, and administer other project activities—including sales of designated trees in the project area—to reduce fuel risk on federal lands as a complement to similar activities on adjacent lands. Because of the collaborative nature of these projects, the proposed program became known as “Good Neighbor.” In the Department of the Interior and Related Agencies Appropriations Act, 2001, Congress established the program, authorizing the U.S. Forest Service to allow its state counterpart in Colorado to perform forest, rangeland, and watershed restoration services, such as fuel reduction or treatment of insect-infected trees, on national forest lands. The services provided by the state, either directly or through contracts with private vendors utilizing state contracting procedures, were permitted when similar and complementary activities were being performed on adjacent state or private lands. According to the subsequent agreement signed by representatives of the two forest services, the following benefits were anticipated from Good Neighbor authority: national forest, state, and private lands would be at less risk from wildland fire; fuel treatments would provide defensible space for firefighters to occupy while combating fires moving from forests to developed areas, or vice versa; an impediment to cross-boundary watershed restoration activities would be removed, resulting in greater protective and restorative accomplishments; and CSFS and the U.S. Forest Service would demonstrate cooperation as encouraged in the National Fire Plan, the federal government’s wildland fire management strategy. 
Congressional reauthorization of Good Neighbor authority in 2004 added BLM areas in Colorado to the authority’s scope. In addition, the 2004 legislation authorized the U.S. Forest Service to work with Utah’s forest service to perform similar watershed restoration and protection projects in Utah. Unlike the legislation for Colorado, however, the authorizing legislation for Utah contained no provision requiring Good Neighbor projects to correspond to similar and complementary activities under way on adjacent state or private lands. The U.S. Forest Service manages 11 national forests in Colorado, within the agency’s Rocky Mountain Region, and manages 7 national forests in Utah, within the Intermountain Region. Each national forest is divided into ranger districts that conduct or oversee “on-the-ground” activities. BLM lands in Colorado are managed by the Colorado State Office, which in turn oversees BLM field offices across the state. CSFS administers 17 districts throughout the state, each led by a district forester. UDFFSL, a unit of the state’s Department of Natural Resources, is divided into six areas, each administered by an area manager.

Under Good Neighbor authority, 53 projects have been conducted in Colorado and Utah as of the end of fiscal year 2008 at a cost to the federal government of about $1.4 million. Colorado Good Neighbor projects focused on fuel reduction activities, such as tree thinning, mostly in the Colorado wildland-urban interface. In Utah, Good Neighbor projects focused on the repair of fire-damaged trails and watershed protection and restoration. In Colorado, 38 projects were conducted under Good Neighbor authority from fiscal year 2002, after the authority was granted, through fiscal year 2008. These projects primarily focused on fuel reduction. CSFS planned these projects in conjunction with the U.S. 
Forest Service or BLM, as well as private owners, and then contracted with private vendors or state crews to perform the work on U.S. Forest Service or BLM land. Of these 38 projects, 29 were on U.S. Forest Service land. These 29 projects included fuel reduction treatment on about 2,400 acres in 5 of the 11 national forests in the state—the Arapaho, Pike, Roosevelt, San Isabel, and San Juan National Forests—with 25 of the projects conducted in the Pike and San Isabel National Forests. The remaining 9 Good Neighbor projects occurred on forested BLM land covering about 100 acres in Boulder County. The number of acres treated under individual Good Neighbor projects ranged from 1 acre to about 300 acres on U.S. Forest Service land in Colorado and from 2 acres to 21 acres on BLM land. Figure 1 depicts the number of Good Neighbor projects in each of the Colorado national forests and BLM areas. Costs to the U.S. Forest Service for the 29 projects conducted on its land in Colorado have totaled about $679,000 through fiscal year 2008, while costs to BLM for its 9 projects in Boulder County have totaled $74,000 through the same time period. Individual project costs in Colorado varied, ranging from a low of $7,000 to a high of $233,000, depending on the number of acres treated and the type of work and equipment required. For example, one U.S. Forest Service district ranger stated that in a typical tree-thinning project, the contractor would pile and burn the cut branches and other thinned material (known as slash), a relatively inexpensive approach; when the work is done close to homes, however, more expensive treatments and means of disposal, such as mechanical grinding or chipping, are usually required. In Colorado, Good Neighbor projects have been initiated as part of larger fuel reduction efforts being planned or conducted by the state on state, local, and private land in the state’s wildland-urban interface. 
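For rough context, the cost and project totals above imply average per-project costs. This is a back-of-the-envelope calculation using only the figures reported in this section, not an official agency statistic:

```latex
\underbrace{\frac{\$679{,}000}{29\ \text{projects}}}_{\text{U.S. Forest Service land}} \approx \$23{,}400\ \text{per project},
\qquad
\underbrace{\frac{\$74{,}000}{9\ \text{projects}}}_{\text{BLM land}} \approx \$8{,}200\ \text{per project}
```

Given the reported $7,000 to $233,000 range of individual project costs, these averages mask considerable variation from project to project.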
The Good Neighbor project portion is usually smaller—in acres and cost—than the overall fuel reduction effort in a given area. For example, in the upper South Platte region, which includes portions of the Pike National Forest, CSFS has reduced fuels on thousands of acres in highly fire-prone areas on Denver Water land and other privately owned land after a severe fire in 1996 caused extensive sediment runoff into a primary Denver water source. However, a portion of these lands was adjacent to or intermingled with Pike National Forest land, making it difficult to effectively treat the entire area without conducting work on federal land. According to CSFS officials, the state, as a result of the Good Neighbor authority, was able to contract with individual vendors to perform the work required on several hundred acres of the Pike National Forest as well as on private lands, thereby ensuring a seamless fuel reduction effort across Denver Water, private, and U.S. Forest Service lands. Figure 2 shows a slash pile on a fuel reduction project site in the Pike National Forest. In most Good Neighbor projects, the state either performs the services or contracts with vendors under a service contract; however, several projects in Colorado on U.S. Forest Service land were conducted under timber sale contracts, in which a fuel reduction project aimed at thinning the forest is structured as a timber sale. Acting through Good Neighbor authority, state foresters sold the timber to professional loggers or, in some cases, to residents of adjacent subdivisions who used it for firewood. Instead of having to pay fuel reduction contractors to remove the timber, the U.S. Forest Service received a small amount of sale revenue from the state and paid only for the state forester’s administration of the sale. Of the 29 Good Neighbor projects the CSFS has conducted in Colorado on U.S. 
Forest Service land, 15 were conducted in the San Isabel National Forest and 1 was conducted in the Pike National Forest using timber sale contracts. Through these timber contracts, about 345,000 cubic feet of timber has been harvested and sold as of September 30, 2008, for a total of about $19,000. According to CSFS officials, the amount received for the timber is relatively small because the ponderosa pine, lodgepole pine, and mixed conifer timber primarily found in the Pike and San Isabel National Forests is small and of low value, as is timber in much of the rest of Colorado, in part because of limited markets for timber. In addition, in 3 of the 13 Good Neighbor projects that involved service contracts on U.S. Forest Service land, timber sales were included as part of the service contract, rather than in a separate timber sale contract. As an incentive to attract bidders for these projects, timber harvested during fuel reduction was permitted to be removed from the forest and sold to local mills, rather than cut and piled on-site. Because prospective bidders factored the value of this timber into their bids, the cost of the resulting service contract was likely lower than it would have been without the incentive. For 2 of these projects, 1 located in the San Juan National Forest and the other located in the Arapaho National Forest, the total volume and value of included timber was 278 CCF for $1,378 and 1,312 CCF for $5,472, respectively. In Utah, 15 projects have been conducted under Good Neighbor authority from fiscal year 2005, when the authority was enacted, through fiscal year 2008. All of the projects in Utah have been conducted in one national forest—the Dixie National Forest—which is in the southern part of the state. According to a U.S. Forest Service Intermountain Region official, U.S. 
Forest Service ranger district officials in the Dixie National Forest and UDFFSL Southwestern Area officials have historically had a good relationship with each other and thought Good Neighbor projects could be beneficial to both. As a result, U.S. Forest Service officials in this district decided to use Good Neighbor authority to conduct several projects that they had originally planned to undertake themselves. Figure 3 depicts the national forests located in Utah and the number of projects undertaken in the one forest that has used the authority. The types of Good Neighbor projects in Utah are more diverse than those in Colorado. Unlike Colorado, where the projects are generally driven by overall state fuel reduction initiatives, in Utah, the U.S. Forest Service initiates projects and then obtains the assistance of UDFFSL to perform work on national forest land. According to Utah state officials, of the 15 Good Neighbor projects conducted in Utah, only 2 projects were fuel reduction-related; in these projects, state crews burned piles of brush and slash on over 300 acres near adjacent private lands to assist the U.S. Forest Service in a larger fuel reduction project in the forest. Of the remaining 13 projects, 8 involved using state crews or contractors to rehabilitate burned areas following wildland fires, including activities such as repairing and constructing fences, cleaning impoundments used by cattle and wildlife, and reconstructing forest trails. In the 5 remaining Good Neighbor projects, the state used crews or contractors to protect the watershed from erosion and sediment runoff by, for example, rehabilitating trails used by all-terrain vehicles and transporting and placing large barrier rocks on either side of a roadway near public campsites to prevent vehicles from traveling off-road and damaging forest resources (see fig. 4). Costs to the U.S. Forest Service for these 15 projects have totaled about $674,000 through fiscal year 2008. 
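As an arithmetic cross-check (using only totals reported in this section), the three state-level cost figures are consistent with the roughly $1.4 million overall federal cost cited earlier:

```latex
\underbrace{\$679{,}000}_{\text{USFS, Colorado}} + \underbrace{\$74{,}000}_{\text{BLM, Colorado}} + \underbrace{\$674{,}000}_{\text{USFS, Utah}} = \$1{,}427{,}000 \approx \$1.4\ \text{million}
```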
As in Colorado, costs varied depending on the type of work and equipment provided. For example, project costs ranged from $1,500 for a pile burning on a few acres to $174,000 for replacement of existing culverts—large pipes that allow natural waterways to flow under road crossings—with new structures that will improve the forest watershed by facilitating the passage of trout and other fish species.

State procedures are used for projects that involve service contracts, which include most Good Neighbor projects to date, while projects that include timber sales incorporate both state and federal requirements. We examined both states’ contracting requirements concerning three fundamental principles of government contracting—transparency, competition, and oversight—and found that state requirements generally address each of these areas. The U.S. Forest Service and CSFS are currently updating their Good Neighbor timber sale procedures, however, to make certain that timber sales conducted under the authority include all protections that federal officials believe are necessary when dealing with federal timber. Neither BLM in Colorado nor the U.S. Forest Service in Utah has developed written procedures for conducting Good Neighbor timber sales, primarily because neither agency has sold timber under the authority. However, such procedures could help ensure accountability for federal timber if these agencies conduct such sales in the future. State procedures generally govern Good Neighbor projects that involve service contracts, which include 37 of the 53 Good Neighbor projects to date. Good Neighbor projects are initiated under the authority of Good Neighbor agreements between each state and federal agency, which describe at a high level the authority and responsibilities of each agency in conducting projects, including the project’s planning, design, preparation, contracting, and administration. 
For those projects involving service contracts, Good Neighbor agreements allow the states to use their own procedures to enter into contracts with vendors that provide services, such as fuel reduction, in conducting forest restoration projects, or the states may use their own crews to carry out the work. Under the agreements, however, the U.S. Forest Service and BLM retain certain authorities when Good Neighbor projects are conducted. For example, for projects carried out on their respective lands, the U.S. Forest Service and BLM remain responsible for ensuring that the requirements of NEPA are satisfied. Once NEPA requirements are satisfied and project planning is completed, the state and federal agencies develop a task order for each project, detailing its objectives and cost. The state can then proceed with procuring the needed services using its own procurement and contracting process. In Colorado, Colorado State University (CSU) administers procurement and contracting for all CSFS service contracts, including those for Good Neighbor projects. In Utah, procurement and contracting for service contracts are administered and approved by either UDFFSL or the Utah Division of Purchasing, depending on the size of the procurement. We examined CSU’s and the Utah agencies’ contracting requirements concerning three fundamental principles of government contracting—transparency, competition, and oversight. 
Specifically, we examined each state agency’s procurement rules concerning the following practices: soliciting contracts through public notice, with reasonable time allowed for potential vendors to develop and offer their bids; ensuring competition, except in cases where there are legitimate extenuating circumstances, such as projects for which there is only one responsive bidder; using simplified acquisition procedures for contracts whose dollar value is below a specified amount; awarding contracts to the lowest-priced vendor when evaluating competing offers, and requiring justification when any additional criteria, such as past performance, are used; giving preference to small businesses when awarding contracts; avoiding the awarding of contracts to private vendors for the performance of inherently governmental functions, such as budgeting and hiring; including worker protection provisions in state contracts; conducting orientation conferences with vendors at project sites; and providing for ongoing quality control, and requiring the government to conduct quality assurance inspections to determine whether the vendor is fulfilling the contract. In our analysis, we found that the state agencies’ contracting and procurement requirements generally address each of these areas. We discuss five of these areas in the following text: We found that agencies in both states provide a reasonable amount of time to advertise and receive bid proposals as well as provide competition among vendors. In CSU procurements, for example, contracts for services that will cost between $25,000 and $150,000 are generally advertised on Colorado’s Internet bidding system for not less than 3 days—to allow vendors time to develop and offer their bids. 
CSU provides additional requirements for procurements relating to CSFS forest-related work, allowing a minimum of 14 days for vendors to submit a bid regardless of the type of procurement, because vendors that may be interested are often in the field conducting forest-related work and may not see the advertisement for several days. Services that will cost less than $25,000 are left to the discretion of the purchasing agent, who may advertise the bid or solicit vendors via telephone to determine whether they are interested. According to CSU procurement officials in Colorado, competition is generally promoted, except in two circumstances: (1) when only one vendor is available and the contract has to be awarded to that vendor, or (2) when the service being obtained will cost less than $25,000, in which case the purchasing agent may obtain services through other state agencies, such as the Colorado Corrections Industry, without written justification, if a fair market price is obtained. Competition is similarly promoted in Utah, according to state contracting officials, but contracting officers in the state may use informal procedures to acquire services if the services will cost less than $50,000. For example, to award a $30,000 service contract, the state’s centralized Division of Purchasing may solicit telephone bids from three known vendors, then select one of the three vendors. This would not be an acceptable amount of competition for acquisitions exceeding $50,000, which would require the invitation to bid to be disseminated via the state’s Internet bidding system for a minimum of 10 days’ bidding time. 
Agencies in both states are generally required to award a contract to the lowest-priced bidder who meets the requirements set forth in the solicitation for bids, except in certain circumstances, such as when contracts are sizable enough to require a request for proposal—in which the state requires bidders to address additional criteria in their bids, such as technical requirements—or when strong justifications for not choosing the lowest bidder can be documented by the contracting officer. Additional requirements are imposed by the state agencies to ensure that contracts are awarded to reputable contractors. For example, contract terms and conditions in both states require contractors to certify that they have not been debarred, suspended, or proposed for debarment by any governmental department or agency. In addition, for all proposed contracts that are federally funded, including Good Neighbor contracts, CSU purchasing agents search for prospective vendors’ names on the General Services Administration’s Excluded Parties List System, which is a database for obtaining information on parties that are excluded from receiving federal contracts, certain subcontracts, and certain federal financial and nonfinancial assistance and benefits. If a prospective vendor is on the list, the CSU purchasing agent will not consider this vendor’s bid, even if it is the lowest priced. According to state agency officials, the procurement policies of agencies in both states encourage contracting for services with small or disadvantaged businesses, although there are no specific set asides for small or disadvantaged businesses in either state. A CSU official stated that CSU promotes such businesses through a small business program, and that about 90 percent of CSFS contracts, including contracts for Good Neighbor projects, go to small businesses. 
However, attaining this percentage is not a requirement, according to this official, but simply results from the fact that the types of work required in forest restoration projects, such as fuel reduction, are typically performed by small businesses. A Utah Division of Purchasing official stated that, although set asides are not required in Utah, the state will incorporate them into any procurement if the federal government requires set asides as the condition of a particular grant or contract under which the procurement is conducted. The official added that the federal government, rather than the state, is ultimately responsible for determining whether contracts are awarded pursuant to federal requirements for small businesses. According to state agency officials in both states, contractors are generally required to have liability insurance. In addition, the state agencies incorporate federal worker protection provisions into state contracts as requested by federal agencies. For example, the Migrant and Seasonal Agricultural Workers Protection Act is a federal law that applies to migrant and seasonal agricultural workers, including at least some forestry workers. While none of the agencies we reviewed specifically requires that the act’s provisions be included in state contracts, procurement officials in both states said that they include such federal provisions if they are conditions of grants or are otherwise stipulated in federal-state agreements. The responsibility for monitoring contract performance—through activities such as project site visits to ensure satisfactory work and a quality assurance inspection at the job’s completion—is largely left to the state forest services’ project managers in the field. However, both states use contract mechanisms to ensure that a vendor’s performance meets government standards, including performance bonding and requirements that contractors operate within an agency-approved scope of work. 
In reviewing state requirements concerning transparency, competition, and oversight, we compared selected state procurement and contracting requirements with those in the Federal Acquisition Regulation (FAR), which governs federal procurement activities, as well as specific U.S. Forest Service and BLM procurement guidance, and found them to be generally comparable. For example, we reviewed FAR provisions on (1) publicizing contract actions, which can include the establishment of a minimum bidding period that gives potential vendors a reasonable opportunity to respond; (2) competition requirements, which include a requirement to provide for full and open competition through the use of competitive procedures but allow for an exception to these procedures in limited circumstances, for example, if there is only one suitable vendor; and (3) quality assurance, which details several mechanisms—including inspection requirements and contract clauses—for maintaining project oversight and ensuring that the government receives quality work. Although we did not analyze all portions of the FAR, our broad comparison suggests that state and federal procurement requirements are generally similar in the areas we examined. While most Good Neighbor projects are carried out through service contracts, certain CSFS districts in Colorado, such as the Salida District, also use timber sale contracts to conduct fuel reduction projects when the project is expected to involve the harvest of merchantable timber. In such cases, CSFS Good Neighbor timber operating procedures incorporate certain federal requirements beyond those used for ordinary state timber sales, to ensure proper oversight of, and accountability for, state removal of federal timber. 
For example, the following additional project requirements are included in Good Neighbor timber sale operating procedures: state foresters should determine the timber sale volume, using standard federal tree sampling methods; state foresters should work with the local U.S. Forest Service ranger district to develop a sale appraisal and determine a minimum bid price; and project sites with total timber sale volume greater than 25 CCF or values greater than $5,000 should be marked with U.S. Forest Service tracer paint to identify trees to be cut and boundaries around the area in which cutting is to take place. CSFS officials, in conjunction with U.S. Forest Service regional officials, developed these CSFS timber operating procedures in 2007, in response to confusion over the requirements governing timber sales. When Good Neighbor authority was first being used, general operating procedures were contained in the master agreements, but no specific operating procedures existed, and some CSFS district officials were unsure about, or unaware of, certain requirements that needed to be followed as part of conducting a timber sale on federal land. We reviewed the provisions in the initial timber sale contracts that CSFS administered under Good Neighbor authority and found that they were not as extensive as standard U.S. Forest Service timber sale contracts. For example, the state Good Neighbor timber sale contracts did not specifically require the contractor to consider additional activities associated with the project, such as road maintenance, and did not include information about whether threatened and endangered species were in the project area. In addition, these contracts did not include detailed descriptions of the type and amount of timber sold. This type of information was included in standard U.S. Forest Service timber sale contracts. CSFS’s recent operating procedures address these issues, and a U.S. 
Forest Service Rocky Mountain Region official told us that CSFS has begun to include some of this information in more recent timber sale contracts—for example, CSFS included a clause addressing threatened and endangered species in a recent timber sale contract. Some U.S. Forest Service officials told us, however, that they remain concerned about certain aspects of timber sales conducted under Good Neighbor authority. Accordingly, the U.S. Forest Service and CSFS are drafting additional timber sale procedures to cover issues not addressed in the initial 2007 procedures. These revisions add or modify procedures that identify federal and state roles in Good Neighbor timber sales from the initial NEPA documentation through the sale and subsequent harvesting of national forest timber. For example, the agencies are considering adding procedures for better accountability of timber sales by outlining the necessary information that needs to be included in the U.S. Forest Service’s Timber Information Management System, a system that tracks all information connected with each federal timber sale from its inception to completion. These provisions are currently in draft form, but when finalized will be considered joint operating procedures for both agencies. In addition, the U.S. Forest Service has already begun making some changes to the timber sale contract requirements in its latest Good Neighbor projects. According to a U.S. Forest Service official, one important change is that national forest timber will be considered “sold” first to the state, which in turn will sell it to the private contractor. According to a CSFS official, project task orders for timber sale contracts will clearly specify any special U.S. Forest Service contract requirements that are the responsibility of the state, which in turn will hold the contractor accountable for meeting those requirements. 
With this change, the state will know more clearly what special additions it must make to its Good Neighbor timber sale contract for a particular project. State officials believe this change will improve the state’s administration of projects and its accountability for enforcing certain U.S. Forest Service requirements. For example, an October 2008 Good Neighbor timber sale contract between the state and the buyer includes a U.S. Forest Service stipulation resulting from the project’s federal NEPA analysis, specifically prohibiting logging work from December 1 through April 15, to avoid interfering with the winter range of big game animals, such as deer and elk. Under the new contracting provisions, the state is now responsible for enforcing this provision. We did not compare Colorado’s timber sale requirements with those of BLM, or Utah’s with those of the U.S. Forest Service, because neither BLM in Colorado nor the U.S. Forest Service in Utah has conducted timber sales under Good Neighbor authority to date, and neither has developed written procedures for doing so. According to a CSFS official, detailed operating procedures for BLM Good Neighbor projects have not been developed because CSFS’s experience with the agency—consisting of nine projects in Boulder County at a total cost of $74,000—has been too limited to justify spending time and resources in developing such procedures. In addition, a BLM official stated that if the agency decided to have CSFS conduct timber sales as part of its Good Neighbor projects, it would likely require CSFS to utilize a BLM timber sales contract on the basis of agency timber sale requirements, or work with CSFS to ensure that the necessary federal requirements were accounted for in a state timber sales contract. 
As for Utah, a senior UDFFSL official told us that there is no official UDFFSL timber sale contract or process, because neither UDFFSL nor its parent Department of Natural Resources is involved in the sale of state timber. Instead, this is the role of a separate state agency that administers real estate trusts granted to Utah at statehood and is not involved in Good Neighbor projects. UDFFSL has developed a sale contract template for private landowners to use when selling timber from their land to commercial loggers. If timber were sold as part of a Good Neighbor project in Utah, a senior UDFFSL official speculated that agency managers in the field might use this contract template in the absence of other guidance. Although the value of timber removed through Good Neighbor projects has been minimal, the agencies’ experiences in using the authority to sell timber have demonstrated the importance of having detailed Good Neighbor timber sale operating procedures. Such procedures can help ensure that officials in both the federal and state agencies understand each agency’s roles and responsibilities and can help provide the guidance necessary to ensure proper accountability for federal timber. Should BLM in Colorado or the U.S. Forest Service in Utah decide to undertake timber sales through Good Neighbor authority in the future, or should the authority be expanded to include other states where such timber sales might occur, both federal and state agencies would benefit from written procedures detailing each party’s responsibilities in conducting Good Neighbor project timber sales. Federal and state officials who have participated in Good Neighbor projects cited project efficiencies as the authority’s primary benefit, including the ability to begin work more quickly and to reduce hazardous fuels across multiple ownerships with a single contract. 
The authority also provides a forum for federal-state cooperation that can aid other collaborative efforts, such as emergency wildland fire suppression. Challenges encountered by the agencies include federal and state officials’ incomplete understanding of how projects should be administered under the authority and concern about the adequacy of state contract procedures. Future use of Good Neighbor authority may benefit from documentation of agency experiences in using the authority to date, particularly since stakeholders told us that the authority’s chances for success in other states hinge on several factors, including the structure, staffing levels, and workloads of other state forest services, as well as the characteristics of those states’ federal lands. Federal and state officials who have used Good Neighbor authority cited project efficiencies as its primary benefit. The efficiencies cited include an ability to begin work more quickly, in part because the Colorado and Utah state forest services have established relationships with local communities and in part because state contracting procedures are considered to be simpler than federal procedures. In Colorado, for example, CSFS’s mission includes a mandate to assist local property owners with forest management on their lands. The state agencies’ resulting familiarity with local communities extends to knowledge of local vendors that offer services such as fuel reduction, which—combined with the states’ simplified contracting procedures—can shorten the time required to identify a suitable contractor, secure a contract, and begin work. According to one U.S. Forest Service district ranger in Colorado, this type of local-level coordination with private landowners and local contractors is not a specialty of most ranger districts’ staff. 
The state foresters’ familiarity with local landowners also speeds implementation when access is required across private land to reach a project site on federal land—for example, when a project site is far from existing forest roadways but is near a network of private roads within a subdivision. In one such instance, the U.S. Forest Service needed to gain access through a private subdivision to treat a densely forested area in an adjoining national forest. As part of a Good Neighbor agreement, CSFS negotiated with the subdivision’s owners to gain access to the site so that a private contractor could begin work. Figure 5 shows a map of the project area. According to state officials, securing this access is often less time-consuming for state foresters because, as a result of their state agencies’ emphasis on local outreach, they are often better known in the community than their federal counterparts. Moreover, several state officials noted that the U.S. Forest Service sometimes attempts to secure a permanent easement across private land in this scenario, rather than temporary access for the duration of a specific project—an approach that is less likely to result in a landowner’s cooperation. The ability to begin work more quickly can be important when Good Neighbor projects use funding that is only available for the remainder of the current fiscal year. (Good Neighbor projects do not receive dedicated funding; instead, projects are funded from a variety of accounts, including grant funds.) In certain cases, the U.S. Forest Service decides to fund a project for one field unit near the end of the fiscal year—for example, by shifting funds from another field unit that has no further fuel reduction activities to fund that year. Partnering with the state forest service through a Good Neighbor agreement that expedites contracting can allow the project to be started prior to the end of the current fiscal year. 
In some cases, according to federal and state officials, using state foresters to administer Good Neighbor projects increased the efficiency of federal activities because the state was willing to assume responsibility for project administration. For example, state foresters in Utah performed project management duties, such as locating responsible vendors, negotiating contracts, and processing payments to vendors as work progressed. According to Utah officials, the state forest service was willing to undertake these Good Neighbor project duties because the projects benefited shared watersheds, accomplished important work for communities, and had a positive impact on local economies. Similarly, Colorado’s state forest service administered Good Neighbor fuel reduction projects in the state’s South Platte district under an arrangement funded by Denver Water, which benefited from the resulting watershed protection. In other cases, because the state structured fuel reduction projects as timber sales, fees related to the state forester’s administration of these sales were the only costs to the federal government—there were no service costs. The use of Good Neighbor authority also increased the effectiveness of fuel reduction treatments in areas that include federal, state, and private ownership and helped to maximize the degree of wildland fire risk reduction per dollar spent on the project, according to agency officials. Arranging for a single vendor to perform the work across ownership boundaries increased the likelihood that forest treatment was conducted in a uniform way and avoided leaving untreated land parcels in the project area. According to one U.S. Forest Service official, the ability to treat land parcels under multiple ownerships is critical because fire “doesn’t know the boundary” between federal, state, and private forest land. 
Given the advantages of partnering with the state—including the ability to negotiate access agreements, find suitable vendors, utilize more nimble contracting procedures, and share some project management duties—the use of Good Neighbor authority allowed the agencies to accomplish more than they would have accomplished in the absence of the authority, according to officials with whom we spoke. As U.S. Forest Service officials in Utah noted, the ability to leverage state employees to work on national forest lands increases the number of initiatives that a manager can undertake. For example, the Cedar City ranger district in the Dixie National Forest enlisted the state to reconstruct an all-terrain vehicle trail running through the forest, adding a layer of gravel to prevent trail erosion from continuing to spread sediment into nearby wetlands. According to the project’s federal manager, U.S. Forest Service crews’ already heavy workload was one reason for giving the project to the state. A second reason was UDFFSL’s ability to employ a county road crew on the project that would do similar work on private portions of the trail that access nearby communities. In other cases, projects benefiting local communities may “fly under the Forest Service’s radar,” as one state official said. That is, due to their inaccessible location or relatively small size, the national forest portions of some fuel reduction projects may not have been part of the U.S. Forest Service’s annual work plan until the state proposed including the parcels in landscapewide projects being planned for state and private property. Using state crews and private companies to do this work has additional advantages. 
For example, state and federal officials in Utah said that employing seasonal state fire response personnel on Good Neighbor projects brings revenue to the state that allows it to maintain these personnel for a longer period, keeping them available for emergency fire response outside of the state’s peak summer fire season. On the other hand, having the state contract with private companies allows the skills needed for necessary work such as fuel reduction to develop within a community, increasing the number of potential vendors that are qualified to work with federal agencies in the future. In addition to creating project and agency efficiencies, the use of Good Neighbor authority provided a forum for collaboration between federal and state agencies that officials told us can increase the effectiveness of other cooperative efforts. For example, emergency suppression of wildland fires demands that agency officials be able to coordinate under tight time and resource constraints with representatives of many different governmental entities. According to federal and state officials, this coordination is made easier by past working relationships on collaborative projects, such as those conducted under Good Neighbor authority that develop familiarity and instill mutual trust. This collaboration is useful outside of emergency scenarios as well. Officials identified stewardship contracting—where agencies use other special contracting authorities, such as the exchange of timber for fuel reduction services, to meet community land management needs—as another initiative that can benefit from a shared history of cooperation on Good Neighbor projects. Federal and state agencies have also encountered challenges in using Good Neighbor authority, including a lack of understanding of the authority that has complicated partnerships between federal and state officials. In Colorado, several state foresters said that their initial attempts to interest their U.S. 
Forest Service counterparts in potential projects were hampered by the federal officials’ lack of familiarity with the authority. In some of these areas, projects were eventually undertaken, but confusion about roles and responsibilities made project implementation more difficult—especially for projects involving timber sales. In Utah, projects have been conducted by only one national forest, in partnership with one of the state’s six districts. State officials in two of Utah’s remaining districts reported that they have encountered a lack of awareness of the authority from their prospective federal counterparts, similar to the early years of Colorado’s Good Neighbor experience. Likewise, concern over the adequacy of state contracting procedures hampered the use of the authority. Some U.S. Forest Service officials in Colorado considered state timber sale procedures to be insufficient to protect federal interests and imposed additional contracting requirements on the state before agreeing to Good Neighbor projects. For example, in one Colorado district, a state forester’s agreement with the local U.S. Forest Service ranger district staff about how to proceed on an early Good Neighbor project was overruled by the ranger district’s regional management, which placed additional requirements on the project. The regional office did not approve of some of the state’s processes—such as the state’s appraisal of timber value, and the way the state’s timber contract was written—and asked that additional requirements be included to ensure that the state could account for any federal timber removed. This request resulted in two separate contracts with the project’s single vendor—one for work being done on U.S. Forest Service land, and a second for work being done on private land. 
In another state district, a state forester who coordinated Good Neighbor projects for two ranger districts on the same national forest found that project requirements in one ranger district were more rigorous than those in the other. In the less rigorous district, the ranger allowed the state forester to administer projects involving timber sales using state contracting processes; in the more rigorous district, however, the ranger required that the U.S. Forest Service have more involvement in administering that district’s sale, believing that the U.S. Forest Service’s timber sale procedures did a better job of holding contractors accountable for their project performance than did state contracting procedures. Some state and federal officials found the overlay of federal requirements burdensome, making them less likely to participate in Good Neighbor projects. In one state district where U.S. Forest Service regional management imposed additional federal requirements on early projects because of doubts about the sufficiency of state procedures, state officials expressed a reluctance to pursue future projects until differences between the federal and state approaches are resolved. A former U.S. Forest Service official involved in this district’s projects said that the region’s additional requirements were counter to Good Neighbor’s core philosophy of landscape-level management requiring one appraisal, one vendor, and one contract. A second federal official added that the timber involved in these projects was of such little value that the attempt to add additional time-consuming accountability procedures was not cost-effective. State officials in the district and their federal counterparts have not pursued additional Good Neighbor projects to date, but state officials noted that timber sale procedures have been streamlined in the years since they experienced their early difficulties. 
In addition, the CSFS official in charge of coordinating Good Neighbor projects for the state said that the cumbersome administrative process imposed by both CSU and the U.S. Forest Service has effectively eliminated the use of Good Neighbor authority for small-scale projects in Colorado, frustrating an important original intent of the program. This process often makes such activities—for example, allowing an individual landowner to expand fuel reduction treatments onto U.S. Forest Service land to remove insect-infested trees or to establish an adequate defensible space for improved wildland fire protection—too burdensome and time-consuming to pursue. According to the CSFS official, both CSFS and U.S. Forest Service timber staff have recognized the need to streamline the task order approval process to address this problem if Good Neighbor authority is extended. According to several federal and state officials, a lack of detailed guidance in the early years of using Good Neighbor authority created confusion over the respective duties of federal and state project participants. In Colorado, federal and state officials issued general project guidelines as part of their Good Neighbor master agreement that addressed general operating procedures, but did not provide specific project-level direction—particularly concerning the use of timber sales in fuel reduction projects. As we have previously mentioned, more detailed guidance specifically addressing timber sales was issued in 2007, as a result of lessons learned from projects involving such sales and in recognition of the fact that sale procedures being used in some ranger districts differed from those used in others. These procedures are now being revised by the U.S. Forest Service and CSFS to address unresolved issues, such as how Good Neighbor timber sales should be reported in the U.S. Forest Service’s performance and financial tracking systems. 
The revised guidance—now in draft form— also includes additional timber accountability procedures. In Utah, U.S. Forest Service and state officials agreed on general project guidelines, but they have not issued more detailed guidance for project implementation, including instructions regarding timber sales. Although such instructions have not been needed to date because no timber sales have occurred under Good Neighbor authority in Utah, the area manager for the one state district where Good Neighbor projects have been conducted said that future projects may include timber sales. In addition, an area manager in another Utah district said that he had approached his U.S. Forest Service counterpart with a fuel reduction project proposal involving a timber sale. There is no official guidance that encompasses BLM’s Good Neighbor project responsibilities on BLM land in Colorado, in part because there have been few projects on BLM land. The nature of Good Neighbor authorization and funding posed a challenge in some districts. Federal and state officials in Utah said that because Good Neighbor projects do not receive dedicated funding, money to conduct the projects instead comes from supplemental accounts, such as funding associated with the National Fire Plan. In the past, such funding has arrived several months or more after the beginning of the federal fiscal year. This shortens the project window for fuel reduction work, which can be especially problematic for projects involving pile burnings or prescribed fire because such projects must be completed outside of fire season, which can stretch from mid-May to mid-October in the Dixie National Forest. Other state officials agreed that the annual federal appropriations cycle—which included, for example, reauthorization of Good Neighbor authority in Utah for a period of just over 9 months in fiscal year 2008—makes long-term project planning more difficult, resulting in less Good Neighbor activity. 
Officials in federal and state districts where Good Neighbor projects have not been undertaken had various reasons for not using the authority. Some foresters said they had not seen opportunities for projects that fit Good Neighbor’s criteria, while others lacked staff or other resources. One national forest supervisor in Utah saw several advantages to using the authority, but he wanted to ensure that his own staff was fully utilized before giving work to the state. Conversely, a senior official in Utah said that some state foresters see little benefit in adding projects that benefit the U.S. Forest Service to their workload, unless they are compensated by the U.S. Forest Service for their associated project administration duties. Experiences with Good Neighbor authority in Colorado and Utah may provide insights for its potential expansion in those and other states. Specifically, federal, state, and other stakeholders identified several factors that affect Good Neighbor authority’s chances for success, including the structure, staffing levels, and workloads of state forest services and state purchasing staff, as well as the characteristics of those states’ federal lands. These stakeholders noted that while it is important to understand the successes and challenges of Good Neighbor authority’s use in Colorado and Utah when considering its expansion to other states, it is equally important to account for differences among states as well. One key difference is the structure and mission of state forest services: whereas these agencies in Colorado and Utah emphasize community forestry assistance, other states may have different priorities reflecting differences in their history, geography, or institutional framework. 
For example, the Idaho Department of Public Lands manages its state’s forest resources to maximize the revenue from these resources and other state lands through activities such as timber harvesting, livestock grazing, and commercial building, according to a senior department official; the revenue generated from most of these trust lands supports the state’s public schools. Though this official could see advantages to having Good Neighbor authority in Idaho, such as the ability to conduct uniform land management practices across broader areas, she said she would be wary of any activities that would divert her agency from its primary mission of managing the state’s trust lands. A representative of an environmental group in Idaho told us that state forest management practices—such as the focus on timber harvesting—could lead to competing priorities if the state manages Good Neighbor projects on behalf of the federal government. To avoid this, he suggested that roles and responsibilities should be clearly defined at the outset for both federal and state participants if Good Neighbor authority is extended to Idaho. Moreover, a U.S. Forest Service official in Colorado who had previously worked for the relatively small forest service in a nearby state said that Good Neighbor’s effectiveness in other states would depend on their capacity to implement the agreements and monitor the projects within their staffing resources and workload, and that he did not think state forest services with limited resources would be able to handle a Good Neighbor project workload comparable to Colorado’s. There may also be differences in the federal and state forest services’ relationship, the strength of which is a major determinant of Good Neighbor project success, according to numerous federal and state officials with whom we spoke in Colorado and Utah. Another major difference among states is the value of timber on their lands. 
While fuel reduction projects undertaken thus far under Good Neighbor authority have generally harvested low-value trees in a depressed timber market, project sites in other areas of Colorado and Utah, or in other states, may contain more valuable timber. Fuel reduction projects carried out under Good Neighbor authority in those areas, especially those involving timber sales, would likely attract more federal timber sale oversight, and might likewise attract additional scrutiny from environmental stakeholders concerned that projects were being undertaken for their timber value, rather than for ecological necessity. For example, representatives of one environmental group in Colorado told us they did not have concerns about Good Neighbor projects conducted to date, but stated that this is in part due to the low timber value in the state—saying “there’s little worry here because there’s so little [timber value] at stake.” These representatives noted that their level of scrutiny would likely be much higher if Good Neighbor projects were conducted in timber-rich areas. Differences in the authorizing legislation for Colorado and Utah have led to differences in the types of projects conducted under Good Neighbor authority, which could lead to divergent outcomes if the authority is extended to other states. According to their Good Neighbor authorizing legislation, the U.S. Forest Service and BLM in Colorado may permit CSFS to perform watershed restoration activities on federal lands when the agency is carrying out similar and complementary activities on adjacent state or private lands. This has generally resulted in fuel reduction projects that take place near state or private boundaries, where nonfederal fuel reduction efforts had already occurred or were under way. In Utah, however, the authorization requires neither that the projects be part of a broader effort nor that they be adjacent to nonfederal lands. 
In practice, this less restrictive standard has led to a wider array of projects in Utah, such as the culvert replacement, barrier rock installation, and trail reconstruction undertaken in the Dixie National Forest. Moreover, to ensure the support of the public and environmental groups in Colorado, Utah, or other states, several stakeholders suggested that projects be undertaken only in the wildland-urban interface, where the potential public benefit is the greatest, rather than in more remote reaches of U.S. Forest Service and BLM lands. Also, according to federal and state officials as well as representatives of environmental groups, environmental stakeholders should be kept informed during Good Neighbor project design, and should be encouraged to participate during the NEPA process. Officials in Colorado and Utah told us they have done so on Good Neighbor projects to date, and they believe that this practice has been responsible for the general lack of opposition to Good Neighbor projects from members of the environmental community. Differences among states in the structure of their forest services, the value of their timber, and the potential content of their authorities, as well as the successes and challenges encountered in using Good Neighbor authority in Colorado and Utah, would be worth considering for agency officials contemplating future use of the authority—whether in Colorado, Utah, or other states. Although CSFS has prepared periodic summaries of Good Neighbor operations in Colorado, future users of the authority would benefit from a more systematic and comprehensive documentation of agencies’ experiences in conducting projects under Good Neighbor authority. 
While the agencies are not required to develop such documentation as part of their use of Good Neighbor authority, doing so could benefit future users—by, for example, providing them with an analysis of cost savings or other efficiencies and benefits that have been achieved through Good Neighbor’s use, and discussing the types of projects in which the authority has been most successful. In addition, as they have done for stewardship contracting, the agencies could disseminate this information through agency Web sites and handbooks and incorporate it into existing training to ensure that future users have access to the information. Without such information, agency officials will need to independently assess which projects would best be conducted using the authority, and the extent to which individual projects might reduce costs or lead to other efficiencies and benefits. With the aid of this information, federal and state officials in Colorado and Utah, and potentially in other states, could consider adopting those procedures that have worked well and avoid the early pitfalls experienced in applications of Good Neighbor authority. Given the state of our nation’s forests, and in light of our nation’s long-term fiscal constraints, land management agencies are seeking to enhance their effectiveness in improving forest conditions and helping prevent severe wildland fires. Good Neighbor authority can help this effort by allowing federal and state agencies to work more closely together to treat lands across ownership boundaries. The agencies have differed on how best to apply the authority, however, as evidenced by the variation in its use to date. In Colorado, Good Neighbor authority has been used by federal and state partners to work across multiple ownerships to increase the effectiveness of fuel reduction efforts, while projects in Utah have focused on watershed health and rehabilitation of burned areas on U.S. Forest Service land. 
These variations arise in part because of differences in the laws authorizing these states’ activities, and in part because of differences in how state and federal agencies collaborate on Good Neighbor projects—highlighting an important issue as projects proceed in Colorado and Utah, and as Congress and the agencies consider expanding the use of Good Neighbor authority. That is, the type of projects conducted under the authority, and the extent to which those projects enhance the effectiveness of agency land management efforts, depend on many state-specific factors, including the scope of the Good Neighbor authority under which the state operates, the laws governing the state’s contracting activities, and the characteristics of federal land targeted for treatment, particularly the value of any timber. Without procedures that ensure timber accountability in all states, however, and without understanding and benefiting from the lessons learned from past use of the authority, including the state-specific factors that influence the success of Good Neighbor projects, the agencies may fail to capitalize fully on the potential of Good Neighbor authority. We are making two recommendations to enhance the agencies’ use of Good Neighbor authority in Colorado and Utah as well as in states in which Good Neighbor projects may be authorized in the future. First, if U.S. Forest Service officials in Utah or BLM officials in Colorado decide to conduct timber sales under Good Neighbor authority, or if timber sales are pursued under expanded Good Neighbor authority in additional states, we recommend that the Secretaries of Agriculture and the Interior direct the agencies to first develop written procedures for Good Neighbor timber sales in collaboration with each state to better ensure accountability for federal timber. In doing so, the agencies may want to consult the U.S. Forest Service’s Good Neighbor timber sale procedures for Colorado. 
Second, we recommend that the Secretaries of Agriculture and the Interior direct the U.S. Forest Service and BLM, in collaboration with their state Good Neighbor partners, to document how prior experiences with Good Neighbor projects offer ways to enhance the use of the authority in the future and make such information available to current and prospective users of the authority. Specifically, the U.S. Forest Service should collaborate with Colorado and Utah, and BLM should collaborate with Colorado, to document information such as (1) the types of projects that have proven to be successful uses of the authority; (2) how differences in the authority’s scope within each state have affected project selection; (3) how project planning and implementation responsibilities have been divided among federal and state project partners; and (4) the costs and benefits associated with using Good Neighbor authority to conduct projects, including any project efficiencies and cost savings that have resulted from the authority’s use. In addition, to ensure that this information is available to current and future users of the authority, the agencies should develop a strategic approach for disseminating it—for example, through agency Web sites, handbooks, training, or other means. We provided the U.S. Department of Agriculture’s Forest Service, the Department of the Interior, CSFS, and UDFFSL with a draft of this report for review and comment. All four agencies generally agreed with the findings and recommendations in the report. The U.S. Forest Service noted, however, that it will address our recommendation about documenting experiences with Good Neighbor projects by providing our report to current and prospective users of the authority. While we are pleased that the U.S. Forest Service believes our report accurately documents lessons learned to date, we believe the agency will need to provide additional details if future users are to fully benefit from this information. 
For example, while our report includes a general description of the primary reasons for choosing Good Neighbor authority to conduct certain projects, it does not include a detailed discussion of the potential costs and benefits associated with this decision, which may prove beneficial to managers as they assess the applicability of the authority to future projects. The U.S. Forest Service’s written comments, along with our response, are presented in appendix II; Interior’s written comments are presented in appendix III; CSFS’s written comments are presented in appendix IV; and UDFFSL’s written comments are presented in appendix V. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Secretary of Agriculture, the Secretary of the Interior, the Chief of the Forest Service, the Director of the Bureau of Land Management, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Our objectives were to determine (1) the activities conducted under Good Neighbor authority, including the number, type, and scope of projects undertaken; (2) the federal and state guidance, procedures, and controls being used to conduct Good Neighbor projects, including contracting requirements and timber sale procedures; and (3) the successes, challenges, or lessons learned, if any, that have resulted from the use of Good Neighbor authority. 
Our review of Good Neighbor authority included obtaining documentation and holding meetings and discussions with the U.S. Department of Agriculture’s Forest Service and the Bureau of Land Management (BLM), the two agencies that have implemented Good Neighbor authority; the Colorado State Forest Service (CSFS) and the Utah Division of Forestry, Fire and State Lands (UDFFSL), the two state agencies that have conducted Good Neighbor projects; and Colorado State University (CSU) and the Utah Division of Purchasing, the agencies in each of these states generally responsible for administering service contracts for each state’s forest service. To determine the activities conducted under Good Neighbor authority in Colorado and Utah, we interviewed U.S. Forest Service, BLM, CSFS, and UDFFSL officials on their overall management of Good Neighbor projects, including how projects are chosen, the coordination involved between federal and state agencies, and the type and scope of projects that are undertaken. We also reviewed and analyzed specific data on Good Neighbor projects conducted through fiscal year 2008 that were provided by these officials, including the specific project objectives, location, start and completion dates, acreage involved, the federal cost share of the project, and the type of contract used in conducting the project—service or timber sale—as well as the amount and value of any timber removed from federal land. We also visited several completed or ongoing Good Neighbor project sites located on both U.S. Forest Service land and BLM land, including six sites in Colorado and four in Utah, to obtain an understanding of the type of work performed and type of equipment required to conduct projects. During our site visits, we also reviewed selected Good Neighbor projects’ contracting and financial files to obtain information on the planning, contracting, and monitoring processes each agency uses on Good Neighbor projects. 
We also obtained, through telephone interviews and e-mail, additional project information from U.S. Forest Service and state districts that we did not visit. We assessed the reliability of the project data we obtained by comparing a random sample of data provided to us by agency and state officials with similar information we had obtained directly from project files. We further assessed the reliability of timber sales data we obtained from the U.S. Forest Service’s timber sale accounting system by conducting telephone interviews with a U.S. Forest Service official responsible for entering data into the system, maintaining these data, and preparing reports using system data. In addition, GAO has previously assessed the reliability of data maintained in this system. To ensure that GAO’s previous assessment was still accurate, we confirmed with the U.S. Forest Service that the information previously obtained on the reliability of the system remained relevant. As a result, we believe that the data we obtained from this system were sufficiently reliable for our purposes in conducting this review. To determine the federal and state guidance, procedures, and controls used to conduct projects under Good Neighbor authority, including state contracting requirements and timber sale procedures, we obtained documentation on Colorado’s and Utah’s procurement and contracting processes for acquiring services from vendors, including the requirements of each state concerning three fundamental principles of government contracting—transparency, competition, and oversight. Specifically, we chose several of the states’ procurement rules related to these three areas to examine, and we interviewed procurement and contracting officials with CSU and the Utah Division of Purchasing to obtain additional information on how the states put these rules into practice when conducting Good Neighbor projects. 
We also compared these selected state procurement and contracting requirements with those in the Federal Acquisition Regulation, and with U.S. Forest Service and BLM procurement guidance, to identify similarities and differences. To identify the timber sale procedures being used in Good Neighbor projects, we interviewed U.S. Forest Service, BLM, CSFS, and UDFFSL officials to determine whether Good Neighbor timber sale operating procedures had been established and, if so, what those procedures comprised. We reviewed joint guidance prepared by the U.S. Forest Service and CSFS on conducting Good Neighbor projects to determine the type and extent of requirements incorporated. We also compared federal and state timber sale contracts and interviewed timber sale officials with the U.S. Forest Service in Colorado to obtain their opinions on differences between the two types of contracts, as well as any resulting effects on federal timber sale accountability. Finally, to identify successes, challenges, and lessons learned that the federal and state agencies experienced using Good Neighbor authority, we interviewed U.S. Forest Service, BLM, CSFS, and UDFFSL officials who had participated in Good Neighbor projects to obtain their views on the successes and challenges associated with the authority, including the factors they believe contributed to these successes and challenges and the measures they believe could be taken in the future to overcome these challenges. For example, we interviewed officials from five U.S. Forest Service ranger districts as well as officials from five CSFS district offices who had participated in Good Neighbor projects. We also obtained opinions about Good Neighbor authority from several state officials who had not participated in Good Neighbor projects, as well as their reasons for not participating. 
To obtain information on the potential uses of Good Neighbor authority in other states, we asked federal and state officials familiar with Good Neighbor authority, as well as representatives from the National Association of State Foresters, to identify states they believed would be the best candidates for us to interview regarding potential use of the authority. From the states that they recommended, we selected Idaho, Oregon, and Wyoming. We then interviewed officials in those states to discuss their opinions on whether Good Neighbor authority would be successful in their states, the factors for success, and any concerns that they believed would need to be addressed. We also spoke with other interested parties, including representatives of six environmental groups—based in Colorado, Utah, and other western states—and two industry groups—one based in Washington, D.C., and the other based in South Dakota—to get their opinions on how well Good Neighbor authority was being implemented in Colorado and Utah, and the factors that would be important for success if Congress were to expand the authority to other states. We conducted this performance audit from June 2008 through February 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Following is GAO’s comment on the U.S. Department of Agriculture’s Forest Service letter dated February 13, 2009. 1. The U.S. Forest Service noted in its comments that it will address our recommendation about documenting experiences with Good Neighbor projects by providing our report to current and prospective users of the authority. While we are pleased that the U.S. 
Forest Service believes our report accurately documents lessons learned to date, we believe the agency will need to provide additional details if future users are to fully benefit from this information. For example, while our report includes a general description of the primary reasons for choosing Good Neighbor authority to conduct certain projects, it does not include a detailed discussion of the potential costs and benefits associated with this decision, which may prove beneficial to managers as they assess the applicability of the authority to future projects. As a result, we continue to believe it will be important for the U.S. Forest Service to systematically collect and document information on its experiences using Good Neighbor authority, and that this information should go beyond that contained in our report. In addition to the individual named above, Steve Gaty, Assistant Director; David Brown; and Greg Carroll made key contributions to this report. Cindy Gilbert, Rich Johnson, Alison O’Neill, Jena Sinkfield, and Bill Woods also made important contributions to this report.
In 2000, Congress authorized the U.S. Department of Agriculture's Forest Service to allow the Colorado State Forest Service to conduct certain activities, such as reducing hazardous vegetation, on U.S. Forest Service land when performing similar activities on adjacent state or private land. The Department of the Interior's Bureau of Land Management (BLM) received similar "Good Neighbor" authority in 2004, as did the U.S. Forest Service in Utah. Congress has also considered the authority's expansion to other states. GAO was asked to determine (1) the activities conducted under the authority; (2) the federal and state guidance, procedures, and controls used to conduct Good Neighbor projects; and (3) successes, challenges, and lessons learned resulting from the authority's use. To do so, GAO reviewed Good Neighbor project documentation and interviewed federal and state officials. Fifty-three projects were conducted under Good Neighbor authority through fiscal year 2008, including 38 in Colorado and 15 in Utah, with most of the projects (44 of 53) conducted on U.S. Forest Service land. These projects included hazardous fuel reduction on about 2,700 acres of national forest and about 100 acres of BLM land, mostly in Colorado, and the repair of fire-damaged trails and watershed protection and restoration in Utah. Together, the two agencies spent about $1.4 million on these projects, split almost evenly between the two states. Although most projects involved contracting for services such as fuel reduction, some projects involved timber sales in which contractors purchased timber resulting from their fuel reduction activities. These timber sales occurred only in Colorado and totaled about $26,000. State procedures are used in conducting Good Neighbor projects that involve service contracts, while projects that include timber sales incorporate both state and federal requirements. 
Both Colorado and Utah have contracting requirements that generally address three fundamental principles of government contracting--transparency, competition, and oversight. For example, both states solicit competition among bidders and generally require service contracts to be awarded to the lowest-priced bidder meeting the contract criteria. State requirements were generally comparable to federal procurement requirements. When Good Neighbor projects involve timber sales, state procedures incorporate certain requirements that help the U.S. Forest Service account for state removal of federal timber. The U.S. Forest Service and Colorado are currently supplementing their joint Good Neighbor procedures to ensure that additional accountability provisions are included in future timber sale contracts. Neither BLM in Colorado nor the U.S. Forest Service in Utah has developed written procedures for conducting Good Neighbor timber sales, primarily because they have not sold timber under the authority. Such procedures could help ensure accountability for federal timber if future projects include such sales. Federal and state officials who have used Good Neighbor authority cited project efficiencies and enhanced federal-state cooperation as its key benefits. For example, the agencies cited their ability to improve the effectiveness of fuel reduction treatments in areas that include federal, state, and private ownership. Federal and state agencies have also encountered challenges such as a lack of understanding of the authority and complicated processes for approving Good Neighbor agreements. Agency officials and others also noted several factors to consider when conducting future Good Neighbor projects, whether in Colorado, Utah, or other states that may be granted the authority--including the type of projects to be conducted and the type of land to be treated. 
While the agencies are not required to document their experiences in using the authority, officials contemplating future use of the authority could benefit from such documentation--including information on successes, challenges, and lessons learned to date.
The U.S. financial regulatory structure is a complex system of multiple federal and state regulators as well as self-regulatory organizations (SROs) that operate largely along functional lines. That is, financial products or activities generally are regulated according to their function, no matter who offers the product or participates in the activity. The functional regulator approach is intended to provide consistency in regulation, focus regulatory restrictions on the relevant functional areas, and avoid the potential need for regulatory agencies to develop expertise in all aspects of financial regulation. In the banking industry, the specific regulatory configuration depends on the type of charter the banking institution chooses. Charter types for depository institutions include commercial banks, thrifts, and credit unions. These charters may be obtained at the state or federal level. The federal prudential banking regulators—all of which generally may issue regulations and take enforcement actions against industry participants within their jurisdiction—are identified in table 1. In addition, the Dodd-Frank Act created CFPB as an independent bureau within the Federal Reserve System that is responsible for regulating the offering and provision of consumer financial products and services under the federal consumer financial laws. Under the Dodd-Frank Act, at the designated transfer date, certain authority vested in the prudential regulators transferred to CFPB. The securities and futures industries are regulated under a combination of self-regulation (subject to oversight by the appropriate federal regulator) and direct oversight by SEC and CFTC, respectively. SEC oversees the securities industry SROs, and the securities industry as a whole, and is responsible for administering federal securities laws and developing regulations for the industry. 
SEC’s overall mission includes protecting investors; maintaining fair, orderly, and efficient markets; and facilitating capital formation. CFTC oversees the futures industry and its SROs. Under the Dodd-Frank Act, CFTC also has extensive responsibilities for the regulation of swaps and certain entities involved in the swaps markets. CFTC has responsibility for administering federal legislation and developing comprehensive regulations to protect the public from fraud and manipulation, to ensure the financial integrity of transactions, and to reduce systemic risk in the marketplace. In addition, the Dodd-Frank Act created FSOC. FSOC’s three primary purposes are to identify risks to the financial stability of the United States, promote market discipline, and respond to emerging threats to the stability of the U.S. financial system. FSOC consists of 10 voting members and 5 nonvoting members and is chaired by the Secretary of the Treasury. In consultation with the other FSOC members, the Secretary is responsible for regular consultation with the financial regulatory entities and other appropriate organizations of foreign governments or international organizations. The federal government uses regulation to implement public policy. Section 553 of APA contains requirements for the most common type of federal rulemaking—informal rulemaking or “notice and comment” rulemaking. While there are inter- and intra-agency variations in the informal rulemaking process, federal financial regulators generally share three basic rulemaking steps or phases: Initiation of rulemaking action. During initiation, agencies gather information that would allow them to determine whether rulemaking is needed and identify potential regulatory options. To gather information on the need for rulemaking and potential regulatory options, agencies may hold meetings with interested parties or issue an advance notice of proposed rulemaking. 
At this time, the agencies also will identify the resources needed for the rulemaking and may draft concept documents for agency management that summarize the issues, present the regulatory options, and identify needed resources. Development of proposed rule. During this phase of the rulemaking process, an agency will draft the notice of proposed rulemaking, including the preamble (which is the portion of the rule that informs the public of the supporting reasons and purpose of the rule) and the rule language. The agency will begin to address analytical and procedural requirements in this phase. The agency provides “interested persons” with an opportunity to comment on the proposed rule, generally for a period of at least 30 days. Development of final rule. In the third phase, the agency repeats, as needed, the steps used during development of the proposed rule. Once the comment period closes for the proposed rule, the agency either would modify the proposed rule to incorporate comments or address the comments in the final rule release. This phase also includes opportunities for internal and external review. As published in the Federal Register, the final rule includes the date on which it becomes effective. APA’s notice and comment procedures exclude certain categories of rules, including interpretative rules; general statements of policy; rules that deal with agency organization, procedure, or practice; or rules for which the agency finds (for good cause) that notice and public comment procedures are impracticable, unnecessary, or contrary to the public interest. Under the Dodd-Frank Act, federal financial regulatory agencies are directed or have the authority to issue hundreds of regulations to implement the act’s provisions. In some cases, the act gives the agencies little or no discretion in deciding how to implement the provisions. 
For instance, the Dodd-Frank Act made permanent a temporary increase in the FDIC deposit insurance coverage amount ($100,000 to $250,000); therefore, FDIC revised its implementing regulation to conform to the change. However, other rulemaking provisions in the act appear to be discretionary in nature, stating that (1) certain agencies may issue rules to implement particular provisions or that the agencies may issue regulations that they decide are “necessary and appropriate”; or (2) agencies must issue regulations to implement particular provisions but have some level of discretion over the substance of the regulations. As a result, for these rulemaking provisions, the agencies may decide to promulgate rules for some or all of the provisions, and may have broad discretion to decide what these rules will contain and what exemptions, if any, will apply. In many instances, exemptions to Dodd-Frank Act provisions are encompassed in definitions of certain terms that are broadly established in statute and require clarification through regulation. Persons or entities that meet the regulatory definitions are subject to the provision, and those that do not meet the definitions are not. For example, CFTC and SEC promulgated a regulation that defined the terms “swap dealer,” “security-based swap dealer,” “major swap participant,” “major security-based swap participant,” and “eligible contract participant.” Persons that do not meet the definitions of these terms may not be subject to the Dodd-Frank Act provisions concerning swaps and security-based swaps, including registration, margin, capital, business conduct, and other requirements. Similarly, FSOC promulgated a regulation and interpretive guidance regarding the specific criteria and analytic framework FSOC would apply in determining whether a nonbank financial company could pose a threat to the financial stability of the United States. 
Financial firms that are not designated by FSOC, acting pursuant to the statutory standards, would not be subject to enhanced prudential supervision by the Federal Reserve. Federal agencies conducted the regulatory analyses required by various federal statutes for all 54 Dodd-Frank Act regulations that we reviewed. As part of their analyses, the agencies generally considered, but typically did not quantify or monetize, the benefits and costs of these regulations. As independent regulatory agencies, the federal financial regulators are not subject to executive orders that require comprehensive benefit-cost analysis in accordance with guidance issued by OMB. While most financial regulators said that they attempt to follow OMB’s guidance in principle or spirit, we found that they did not consistently follow key elements of the guidance in their regulatory analyses. We previously recommended that regulators should more fully incorporate the OMB guidance into their rulemaking policies. As part of their rulemakings, federal agencies generally must conduct regulatory analysis pursuant to the Paperwork Reduction Act (PRA) and the Regulatory Flexibility Act (RFA), among other statutes. PRA and RFA require federal agencies to assess various impacts and costs of their rules, but do not require the agencies to formally assess the benefits and costs of alternative regulatory approaches or the reason for selecting one alternative over another. In addition to these requirements, authorizing or other statutes require certain federal financial regulators to consider specific benefits, costs, and impacts of their rulemakings, as the following describes. CFTC, under section 15(a) of the Commodity Exchange Act, is required to consider the benefits and costs of its action before promulgating a regulation under the Commodity Exchange Act or issuing certain orders. 
Section 15(a) further specifies that the benefits and costs shall be evaluated in light of the following five broad areas of market and public concern: (1) protection of market participants and the public; (2) efficiency, competitiveness, and financial integrity of futures markets; (3) price discovery; (4) sound risk-management practices; and (5) other public interest considerations. Under the Consumer Financial Protection Act (Title X of the Dodd-Frank Act), CFPB must consider the potential benefits and costs of its rules for consumers and entities that offer or provide consumer financial products and services. These include potential reductions in consumer access to products or services, the impacts on depository institutions with $10 billion or less in assets, as directed by 12 U.S.C. § 5516, and the impacts on consumers in rural areas. In its RFA analysis, CFPB also must describe any projected increase in the cost of credit for small entities and any significant alternatives that would minimize such increases for small entities. In addition to the protection of investors, SEC must consider whether a rule will promote efficiency, competition, and capital formation whenever it is engaged in rulemaking and is required to consider or determine whether an action is necessary or appropriate in the public interest. SEC also must consider the impact that any rule promulgated under the Securities Exchange Act would have on competition. This provision states that a rule should not be adopted if it would impose a burden on competition that is not necessary or appropriate to the act’s purposes. The Electronic Funds Transfer Act (EFTA), as amended by the Dodd-Frank Act, requires the Federal Reserve to prepare an analysis of the economic impact of a specific regulation that considers the costs and benefits to financial institutions, consumers, and other users of electronic fund transfers. 
The analysis must address the extent to which additional paperwork would be required, the effect upon competition in the provision of electronic banking services among large and small financial institutions, and the availability of such services to different classes of consumers, particularly low-income consumers. However, like PRA and RFA, none of these authorizing statutes prescribe formal, comprehensive benefit and cost analyses that require the identification and assessment of alternatives. In contrast, Executive Order 12,866 (E.O. 12,866), supplemented by Executive Order 13,563 (E.O. 13,563), requires covered federal agencies, to the extent permitted by law and where applicable, to (1) assess benefits and costs of available regulatory alternatives and (2) include both quantifiable and qualitative measures of benefits and costs in their analysis, recognizing that some benefits and costs are difficult to quantify. According to OMB, such analysis can enable an agency to learn if the benefits of a rule are likely to justify the costs and discover which of the possible alternatives would yield the greatest net benefit or be the most cost-effective. In 2003, OMB issued Circular A-4 to provide guidance to federal executive agencies on the development of regulatory analysis as required by E.O. 12,866. The guidance defines good regulatory analysis as including a statement of the need for the proposed regulation, an assessment of alternatives, and an evaluation of the benefits and costs of the proposed regulation and the alternatives. It also standardizes the way benefits and costs of federal regulatory actions should be measured and reported. Of the federal agencies included in our review, only FSOC and Treasury are subject to E.O. 12,866. As independent regulatory agencies, the federal financial regulators—CFPB, CFTC, FDIC, the Federal Reserve, OCC, the National Credit Union Administration (NCUA), and SEC—are not subject to E.O. 12,866 and OMB’s Circular A-4. 
Of the 66 Dodd-Frank Act rules within our scope, 54 regulations were substantive—generally subject to public notice and comment under APA—and required the agencies to conduct regulatory analysis. These rules were issued individually or jointly by CFTC, FDIC, the Federal Reserve, FSOC, NCUA, OCC, SEC, or Treasury. (See app. II for a list of the regulations within the scope of our review.) In examining the regulatory analyses conducted for these 54 regulations, we found the following. Agencies conducted the required regulatory analyses. The agencies conducted regulatory analysis pursuant to PRA and RFA for all 54 regulations. Agencies also conducted the analyses required under their authorizing statutes. Specifically, CFTC and SEC individually or jointly issued 39 regulations and considered their potential impact, including their benefits and costs in light of each agency’s respective public interest considerations. Agencies issued 19 major rules. Of the 54 regulations that were issued and became effective between July 21, 2011, and July 23, 2012, the agencies identified 19 as being major rules—that is, resulting in or likely to result in a $100 million annual impact on the economy. Specifically, CFTC issued 10 major rules; SEC issued 5 major rules; CFTC and SEC jointly issued 2 major rules; the Federal Reserve issued 1 major rule; and Treasury issued 1 major rule. One of the 19 major rules was subject to E.O. 12,866 and its benefit-cost analysis requirement. Of the agencies that issued major rules, only Treasury is subject to E.O. 12,866, which requires a formal assessment of the benefits and costs of an economically significant rule. Thus, as required, Treasury analyzed the benefits and costs of its proposed major rule. Agencies considered the benefits and/or costs in the majority of their rules, but did not generally quantify them. 
As part of their regulatory analyses or in response to public comments received on their proposed rules, the agencies frequently discussed the potential benefits and costs of their rules. For instance, CFTC and SEC asked for public comments and data on the benefits and costs in all of their proposed rules, and the other regulators generally asked for public comments on the costs and, in many cases, benefits of their proposed rules. For the 54 substantive Dodd-Frank Act regulations that we reviewed, 49 regulations included discussions of potential benefits or costs. The cost discussions primarily were qualitative except for the PRA analysis, which typically included quantitative data (such as hours or dollars spent to comply with paperwork-related requirements). Other potential costs, however, were less frequently quantified. In comparison, the benefit discussions largely were qualitative and framed in terms of the objectives of the rules. Although independent federal financial regulators are not required to follow OMB’s Circular A-4 when developing regulations, they told us that they try to follow this guidance in principle or spirit. As discussed in more detail below, we previously found that the policies and procedures of these agencies did not fully reflect OMB guidance and recommended that they incorporate the guidance more fully in their rulemaking policies and procedures. To assess the extent to which the regulators follow Circular A-4, we examined four major rules (see table 2). Specifically, we examined whether the regulators (1) identified the problem to be addressed by the regulation and the significance of the problem; (2) considered alternatives reflecting the range of statutory discretion; and (3) assessed the benefits and costs of the regulation. 
While the regulators identified the problem to be addressed in their rule proposals, CFTC, the Federal Reserve, and SEC did not present benefit-cost information in ways consistent with certain key elements of OMB’s Circular A-4. For example, CFTC and SEC did not evaluate the benefits and costs of regulatory alternatives they considered for key provisions compared to their chosen approach. Also, because of the lack of data and for other reasons, the agencies generally did not quantitatively analyze the benefits and, to a lesser degree, costs in their rules. Agencies’ approaches for calculating a baseline against which to compare benefits and costs of regulatory alternatives in their analysis varied, and agency staffs told us that the lack of data complicated such efforts. Two of the major rules we reviewed did not evaluate alternative approaches for key provisions in their rule proposals, but the final rule releases did evaluate alternatives considered by the agencies. In implementing the Dodd-Frank provisions, the agencies exercised discretion in designing the various requirements that composed their rules, such as defining key terms and determining who will be subject to the regulations and how. In their rule proposals, CFTC and SEC identified alternative approaches for key provisions of their rule proposals. For example, CFTC identified the consolidated tape approach—which is used in the U.S. securities markets to publicly report data on securities—as an alternative method for distributing swap transaction data in real time. SEC considered requiring potential whistleblowers to use in-house complaint and reporting procedures before they make a whistleblower submission to SEC. However, CFTC and SEC generally did not evaluate the benefits and costs of their proposed rules’ requirements compared to such alternative requirements. 
Instead, their rule proposals only presented the proposed set of requirements composing their rules and discussed the potential benefits and costs of their overall regulatory approaches. As part of their proposed rules, CFTC and SEC asked the public for comments on a number of questions, including about possible alternatives to proposed requirements. In their final rules, CFTC and SEC noted that they considered alternatives provided by commenters on the proposed rules and revised their rules so as to reduce regulatory burden or improve the effectiveness of the rules. This approach generally is consistent with each agency’s guidance on regulatory analysis. However, OMB guidance notes that good regulatory analysis is designed to inform the public and other parts of the government of the effects of alternative actions. Without information about the agency’s evaluation of the benefits and costs of alternatives for key provisions, interested parties may not have a clear understanding of the assumptions underlying the rule’s requirements, which could hinder their ability to comment on proposed rules. One of the rules we reviewed identified the alternative approaches but did not describe the reasons for choosing one alternative over another in its rule proposal. The Federal Reserve identified several alternative approaches for key provisions in the rule proposal for implementing the interchange fee rule and some of their potential benefits and costs. However, it did not determine which of the alternatives would produce greater net benefits or be more cost-effective. Instead, the Federal Reserve asked the public to comment on which alternatives might be preferable to the others based on several factors, including benefits and costs. Federal Reserve staff told us that they took this approach because it was difficult to predict how market participants would respond to the rule. 
They said that they had discussions with senior management about alternative approaches and analyzed the costs and benefits of the alternatives, including how alternatives could have different impacts on different market participants, but this information was not contained in the proposed rule. In the final rule, responding to public comments, the Federal Reserve selected one alternative over the other alternatives and provided reasons for the selection. Without information about the rationale for selecting one alternative over another in the proposed rule, interested parties may not know how to effectively gauge the magnitude of the potential effects, which could hinder their ability to comment on the proposed rule. Only one rule that we reviewed identified and evaluated alternative regulatory approaches. In its rule proposal, Treasury determined that the fee assessment rule was a significant regulatory action under E.O. 12,866 and, thus, conducted a regulatory impact assessment. In its proposal, Treasury identified, evaluated, and discussed several alternative regulatory approaches. Treasury evaluated the impact of alternative approaches on interested parties and selected the approach it viewed as equitable and cost-effective, consistent with the OMB guidance. The regulators generally did not quantitatively analyze the benefits and, to a lesser degree, costs of the rules we reviewed. CFTC, the Federal Reserve, and SEC did not quantitatively analyze the benefits of these rules. CFTC and SEC monetized and quantified paperwork-related costs under PRA, but did not quantify any other costs. Federal Reserve staff told us that they monetized some of the direct costs of the debit card interchange fee rule. Specifically, they conducted a survey to determine an average debit card interchange fee in 2009 and used that data to help establish the debit card interchange fee cap under the rule. 
However, while the debit card interchange fee cap information was included in the proposed rule, measures of revenue loss that could result from the rule were not included. In contrast to the other rules we reviewed, Treasury monetized and quantified some costs of the rule beyond paperwork-related costs. Specifically, Treasury provided a range of estimated assessment amounts that described the approximate size of the transfer from assessed companies to the government. As we have reported, the difficulty of reliably estimating the costs of regulations to the financial services industry and the nation has long been recognized, and the benefits of regulation generally are regarded as even more difficult to measure. Similarly, Circular A-4 recognizes that some important benefits and costs may be inherently too difficult to quantify or monetize given current data and methods and recommends a careful evaluation of qualitative benefits and costs. All of the rules we reviewed included qualitative descriptions of the potential benefits and costs associated with the rules. The agencies also generally included qualitative information on the nature, timing, likelihood, location, and distribution of the benefits and costs. For instance, in discussing the benefits of the reporting and public dissemination requirements, CFTC stated that it anticipates that the real-time reporting rule “will generate several overarching, if presently unquantifiable, benefits to swaps market participants and the public generally. These include: improvements in market quality; price discovery; improved risk management; economies of scale and greater efficiencies; and improved regulatory oversight.” CFTC then went on to describe the ways in which these benefits might accrue to market participants. However, some of the agencies did not discuss the strengths and limitations of the qualitative information and did not discuss key reasons why the benefits and costs could not be quantified. 
Also, the regulators did not consistently present analysis of any important uncertainties connected with their regulatory decisions. For instance, the Federal Reserve stated that the potential impacts of the debit card interchange fee rule depended in large part on the reaction of certain market actors to the rule. In contrast, we did not find a discussion of any important uncertainties associated with SEC’s whistleblower rules, but SEC staff told us that the inherent uncertainties in making predictions about human behavior were a key reason why it was not possible to engage in a quantitative analysis of the rule. However, we found that the agencies generally based their analyses on the best reasonably available, peer-reviewed economic information. Treasury described certain direct costs associated with complying with the fee assessment rule. Treasury used economic reasoning to identify some benefits or types of benefits associated with the rule, particularly in considering the choice of assessment methodology, which was the area of discretion left by Congress to the agency. We also found that the regulators’ approaches for calculating a baseline against which to compare benefits and costs of regulatory approaches varied. OMB’s Circular A-4 states that the baseline should be the best assessment of the way the world would look absent the proposed action. In cases where substantial portions of the rule may simply restate statutory requirements that would be self-implementing, Circular A-4 provides for use of a prestatute baseline—that is, the baseline should reflect the status quo before the statute was enacted. However, the guidance further states that if the agency is able to determine where it has discretion in implementing a statute, it can use a post-statute baseline to evaluate the discretionary elements of the action. 
Neither CFTC nor SEC established post-statute baselines; instead, both evaluated the benefits and costs of the discretionary elements of their rules in terms of statutory objectives. Specifically, CFTC evaluated each discretionary element of the real-time reporting rule based on whether it met the statutory objectives to reduce risk, increase transparency, and promote market integrity. Similarly, SEC evaluated each discretionary element of the whistleblower protection rule according to four broad objectives based on statutory goals and the nature of public comments. We found that the Federal Reserve generally took this approach in developing the debit card interchange fee rule. In contrast, Treasury, which is subject to E.O. 12,866, used a post-statute baseline to evaluate the discretionary elements of the fee assessment rule. SEC staff said they would have described the analysis somewhat differently under their new economic analysis guidance (discussed below), which directs staff to consider the overall economic impacts, including both those attributable to congressional mandates and those that result from an exercise of discretion. SEC’s guidance states that this approach often will allow for a more complete picture of a rule’s economic effects, particularly because there are many situations in which it is difficult to distinguish between the mandatory and discretionary components of a rule. Agency staffs told us that developing a baseline describing what would have happened in the absence of a regulation, against which to assess benefits and costs, was complicated by the lack of reliable data. For example, CFTC staff told us that they were challenged because little public data were available about the opaque swaps market. Moreover, because the rule created a new regulatory regime, CFTC did not have the data needed for the analysis. Instead, CFTC had to rely on market participants to voluntarily provide it with proprietary data. 
CFTC staff said that they did receive some proprietary data but that they were incomplete. Similarly, for the whistleblower protection rule, SEC staff said that they asked the public for data in their draft rule but did not receive any. In the absence of data, SEC cited related research in its rule release, but staff noted that they were reluctant to weigh this research too heavily because the programs covered in the research differed in important respects from SEC’s program. In addition, Federal Reserve staff said that quantifying the effects of the debit card interchange fee rule was a major challenge because of the lack of data. Although not subject to E.O. 12,866 and, in turn, OMB Circular A-4, most of the federal regulators told us that they try to follow Circular A-4 in principle or spirit. In our previous review, we found that the policies and procedures of these regulators did not fully reflect OMB guidance and recommended that they incorporate the guidance more fully in their rulemaking policies and procedures. For example, each federal regulator has issued guidance generally explaining how its staff should analyze the benefits and costs of the regulatory approach selected, but unlike the OMB guidance, such guidance generally does not encourage staff to identify and analyze the benefits and costs of available alternative approaches. Since we issued our report, OCC and SEC have revised their guidance, but the other agencies have not yet done so. CFTC last revised its guidance in May 2011, and in May 2012 it signed a Memorandum of Understanding with OMB that allows OMB staff to provide technical assistance to CFTC staff as they consider the benefits and costs of proposed and final rules. Issued in March 2012, SEC guidance on economic analysis for rulemakings closely follows E.O. 12,866 and Circular A-4. 
Specifically, SEC’s guidance defines the basic elements of good regulatory economic analysis in a manner that closely parallels the elements listed in Circular A-4: (1) a statement of the need for the proposed action; (2) the definition of a baseline against which to measure the likely economic consequences of the proposed regulation; (3) the identification of alternative regulatory approaches; and (4) an evaluation of the benefits and costs—both quantitative and qualitative—of the proposed action and the main alternatives. In addition, the guidance explains these elements and describes the ways rulemaking teams can satisfy each of the elements, borrowing directly from Circular A-4. OCC guidance on economic analysis defines the elements included in a full cost-benefit analysis in a similar fashion and includes citations to specific sections of Circular A-4 to guide staff through the application of each element. For the other federal financial regulators, regulatory guidance that continues to omit core elements of OMB Circular A-4 may cause staff to overlook or omit such best practices in their regulatory analyses. In turn, the analyses produced may lack information that interested parties (including consumers, investors, and other market participants) could use to make more informed comments on proposed rules. For example, in our review of four major rules, we found that most of the agencies did not consistently discuss how they selected one regulatory alternative over another or assess the potential benefits and costs of available alternatives. Without information about the benefits and costs of alternatives that agencies considered, interested parties may not know which alternatives were considered and the effects of such alternatives, which could hinder their ability to comment on proposed rules. 
More fully incorporating OMB’s guidance into their rulemaking guidance, as we previously recommended, could help agencies produce more robust and transparent rulemakings. Federal financial regulators have continued to coordinate on rulemakings informally, but coordination may not eliminate the potential for differences in related rules. Regulators have coordinated on 19 of the 54 substantive regulations that we reviewed, in some cases voluntarily coordinating their activities and also extending coordination internationally. According to agency staff, most interagency coordination during rulemaking largely was informal and conducted at the staff level. Differences in rules could remain after interagency coordination, because the rules reflected differences in factors such as regulatory jurisdiction or market or product type. While a few regulators have made progress on developing guidance for interagency coordination during rulemaking, most have not. Both the Dodd-Frank Act and the federal financial regulators whom we interviewed recognize the importance of interagency coordination during the rulemaking process. In general, coordination during the rulemaking process occurs when two or more regulators jointly engage in activities to reduce duplication and overlap in regulations. Effective coordination could help regulators minimize or eliminate staff and industry burden, administrative costs, conflicting regulations, unintended consequences, and uncertainty among consumers and markets. Recognizing the importance of coordination, the act imposes specific interagency coordination and consultation requirements and responsibilities on regulators for certain rules. For instance, section 171 (referred to as the Collins Amendment) requires that the appropriate federal banking agencies establish a risk-based capital floor on a consolidated basis. 
In addition, while section 619 (referred to as the Volcker Rule) does not require the federal banking agencies (FDIC, the Federal Reserve, and OCC) to issue a joint rule together with CFTC and SEC, it requires that they consult and coordinate with each other, in part to better ensure that their regulations are comparable. Further, the act broadly requires some regulators to coordinate when promulgating rules for a particular regulatory area. For example, under Title VII, SEC and CFTC must coordinate and consult with each other and prudential regulators before starting rulemaking or issuing an order on swaps or swap-related subjects—for the express purpose of assuring regulatory consistency and comparability across the rules or orders. The act also includes specific requirements for CFPB. Title X requires CFPB to consult with the appropriate prudential regulators or other federal agencies, both before proposing a rule and during the comment process, regarding consistency with prudential, market, or systemic objectives administered by such agencies. Federal financial regulators also have highlighted the importance of coordination during the rulemaking process. For example, in testifying about the need to coordinate agency rulemakings, FSOC’s chairperson commented on the importance of coordinating both domestically and internationally to prevent risks from migrating to regulatory gaps—as they did before the 2007-2009 financial crisis—and to reduce U.S. vulnerability to another financial crisis. At the same time, we noted in a recent report that the FSOC chairperson has recognized the challenges of coordinating on the Dodd-Frank Act rulemakings assigned to specific FSOC members. He noted that the coordination in the rulemaking process represented a challenge because the Dodd-Frank Act left in place a financial system with multiple, independent agencies with overlapping jurisdictions and different responsibilities. 
However, the chairperson also noted that certain agencies were working much more closely together than they did before the creation of FSOC. This observation has been repeated by other regulators, whose staffs have told us that interagency coordination in rulemaking has increased since the passage of the Dodd-Frank Act. We found documentation of coordination between the rulemaking agency and other domestic or international regulators for 19 of the 54 substantive regulations that were issued and became effective between July 21, 2011, and July 23, 2012. The act required coordination in 16 of the 19 rulemakings. Specifically, 6 of the 19 regulations were jointly issued by two or more regulators and, thus, inherently required interagency coordination (see table 3). The act stipulated coordination for 10 other regulations. In the Federal Register rule releases, we found evidence documenting the coordination required by the act as well as voluntary coordination with additional regulators. For example, FDIC’s regulation on “Certain Orderly Liquidation Authority Provisions” described voluntary coordination with the Federal Reserve. Similarly, CFTC was required to coordinate with SEC on six swaps regulations it issued, but the agency also coordinated with other regulators on two of those regulations. Further, CFTC coordinated with foreign regulators on all six swaps regulations. The act did not require coordination for the other three regulations for which we found documentation of coordination, indicating that the agencies voluntarily coordinated. For the remaining 35 regulations that we reviewed, which did not require interagency coordination, we did not find any documentation of coordination among the agencies. Of the 19 regulations that we identified as having interagency coordination, we selected three regulations to review in depth and sought to cover as many regulators as possible that were required to coordinate under the Dodd-Frank Act (see table 4). 
We examined when, how, and the extent to which federal financial regulators coordinated. We also examined efforts undertaken by the regulators to avoid conflicts in the rulemakings. The regulators held some formal interagency meetings early on in the rulemaking process; however, coordination was mostly informal and conducted through e-mail, telephone conversations, and one-on-one conversations between staff. For example, at the initiation stage of the risk-based capital rulemaking, FDIC, OCC, and the Federal Reserve held a principal-level meeting to discuss the major issues relating to the interpretation of the statutory requirement. After this meeting, staffs formed an interagency working group, composed of staff from each agency who, according to Federal Reserve staff, continually have worked together on numerous capital rules and therefore have a very close working relationship. Likewise, agency staffs said that after the initial formal meetings on the other rulemakings that we reviewed, coordination revolved around informal staff-level discussions. Coordination during the proposed rule drafting stage typically was characterized by staff-level conversations primarily through telephone calls or e-mails and some face-to-face meetings. Staffs would contact each other as issues arose to work out conflicts or differences in agency viewpoints. When issues could not be resolved at the staff level, they were escalated to senior management, but most issues were resolved and most coordination occurred at the staff level throughout the drafting of the proposed rules, according to agency staffs. For all three rulemakings reviewed, agency staffs coordinated at least weekly through the proposal stage with the frequency of coordination escalating as the proposed rule neared issuance. 
After receiving public comments and while preparing the final rule, agency staffs told us that they continued to coordinate with each other, but the need for and level of interagency coordination varied by rule. For instance, OCC, Federal Reserve, and FDIC staffs said that by the time they reached the stage of drafting the final risk-based capital rule, meetings were less frequent because the group already had worked out most of the details. Coordination between CFTC and SEC also decreased during this stage of the real-time reporting rulemaking. Conversely, CFTC and SEC staffs said that interagency coordination continued to be frequent while drafting the final swaps entities rule because after the proposed rule was issued some differences in underlying definitions remained, such as the definition for “highly leveraged.” The commissions used public comments to the proposed rule to help them interpret and come to consensus on the definitions. CFTC and SEC staffs met regularly in this period to refine drafts, resolve issues, and convene an industry roundtable. The extent to which agencies coordinated with international regulators varied in the three rulemakings that we reviewed. For example, CFTC and SEC coordinated with international regulators on swap rulemakings. For the real-time reporting rule, CFTC coordinated with foreign regulators, such as the Financial Services Authority and the European Commission, which provided ideas on data reporting. On the swap entities rule, CFTC and SEC staffs said that they participated in numerous conference calls and meetings with various international regulators. In contrast, the banking regulators did not meet with any international regulators on the risk-based capital rule. 
The agency staffs said that they were implementing a straightforward statutory provision that required little interpretation and little amendment to the existing rules; therefore, staffs said they did not need to seek input from international regulators as to how to implement U.S. law. Staffs said that for less narrowly scoped rules, where regulators have more discretion, they are more proactive in reaching out to international regulators. FDIC staff cited, as an example, the risk retention rule, for which they reached out to the European Union to understand their approach. Regulators who were responsible for the three rulemakings that we reviewed said that they tried to identify potential areas of duplication or conflict involving the rules. For the risk-based capital rule, the banking regulators held discussions on regulatory conflict and duplication and concluded that none would be created by this rule. For the swap entity rule and the real-time reporting rule, CFTC and SEC identified potential areas of conflict, which they were able to address through coordination. For example, when developing the real-time reporting rule, CFTC and SEC initially had different approaches about what type of entity would be in charge of disseminating swap transaction data. SEC proposed that only swap data repositories would be required to disseminate real-time data, and CFTC initially proposed to require several different entities to do so. In its final rule, CFTC harmonized with SEC’s proposal, deciding that only swap data repositories would be required to disseminate real-time swap data. Agency staffs said that this harmonization should help to minimize the compliance cost burden placed on market participants and allow for more efficient operation of systems for the public dissemination of swap and security-based swap market data. Swap data repositories are new entities created by the Dodd-Frank Act in order to provide a central facility for swap data reporting and recordkeeping. 
Under the act, all swaps, whether cleared or uncleared, are required to be reported to registered swap data repositories. Pub. L. No. 111-203, § 727, 124 Stat. 1696 (2010) (codified at 7 U.S.C. 2(a)(13)(G)). In some areas, differences in rules remained after interagency coordination, due to differences in regulatory jurisdiction. In particular, while CFTC and SEC reached consensus on the text for the jointly issued swap entities rule, the regulators outlined different approaches in certain parts of the rule as a result of their regulatory jurisdiction over different product sets. For example, some of the language of the definitions for “major swap participant” and “major security-based swap participant” differs because the agencies each have jurisdiction over different products and some of these products have different histories, markets, and market sizes, according to CFTC and SEC staff. Also, in the real-time reporting rule, CFTC, in its final rule, defined specific data fields to be reported, while SEC, in its proposed rule, outlined broad data categories and required swap data repositories to develop specific reporting protocols. Agency staffs stated that while the approaches were different, they were not inconsistent. The key factors the regulators considered were whether the rules achieved the policy objectives and whether the regulated entities could comply with both agencies’ rules given their differences. It was determined that swap data repositories could develop data reporting protocols that would comply with both agencies’ rules. To document and communicate preliminary staff views on certain issues to senior management, regulators use term sheets throughout the rulemaking process. Although term sheets are primarily internal documents, they were shared with staff at other regulators to communicate views and elicit comments. These term sheets serve as a formal mechanism to help initiate discussions of differences in the regulators’ positions. 
Term sheets generally are drafted internally by staff at each agency, shared between or among agency staff, and shared with agency principals or senior management. CFTC and SEC created term sheets for both the swap entities rule and the real-time reporting rule. Conversely, for the risk-based capital rule, the banking regulators did not create a term sheet because, according to OCC staff, the statutory requirements for this rule were explicit and therefore a term sheet was not required. However, staff noted that this was different from a standard rulemaking where they typically would draft and share a term sheet. While a few agencies have made progress on developing policies for interagency coordination for their rulemaking, most have not. In November 2011, we reported that most of the federal financial agencies lacked formal policies or procedures to guide their interagency coordination in the rulemaking process. Federal financial regulators informally coordinated on some of the final rules that we reviewed, but most of the agencies lacked written policies and procedures to guide their interagency coordination. Specifically, seven of nine agencies did not have written policies and procedures to facilitate coordination on rulemaking. The written policies and procedures that existed were limited in their scope or applicability. The remaining two regulators, FDIC and OCC, had rulemaking policies that include guidance on developing interagency rules. As we previously reported, documented policies can help ensure that adequate coordination takes place, help to improve interagency relationships, and prevent the duplication of efforts at a time when resources are extremely limited. Since our November 2011 report, we found that OCC and CFPB have further developed guidance on interagency coordination, but the other agencies have not. CFPB has developed guidance that outlines the agency’s approach to interagency consultation in rulemaking. 
The document generally describes two rounds of consultation when drafting the proposed rule and two rounds when addressing comments and drafting the final rule. The guidance highlights the points in a rulemaking at which staff should reach out to other regulators, the purpose of consultation, and the length of time to allow for responses from regulators. Similarly, OCC updated its rulemaking policy to include more detail on what steps should be taken in coordination and who should be involved. In our November 2011 report, we recommended that FSOC work with the federal financial regulators to establish formal coordination policies for rulemaking that clarify issues, such as when coordination should occur, the process that will be used to solicit and address comments, and what role FSOC should play in facilitating coordination. While FSOC has not implemented this recommendation, staff told us that they have developed coordination processes around specific areas of the Dodd-Frank Act. For example, FSOC staff said that they have coordinated closely with FDIC on all rulemakings under Title II. In addition, FSOC developed written guidance for coordination on rulemakings for enhanced prudential standards for bank holding companies with $50 billion or more in total consolidated assets and nonbank financial companies designated by FSOC for Federal Reserve supervision under sections 165 and 166 of the act. However, in a September 2012 report, we noted that a number of industry representatives questioned why FSOC could not play a greater role in coordinating member agencies’ rulemaking efforts. In that report, we further noted that the FSOC chairperson, in consultation with the other FSOC members, is responsible for regular consultation with the financial regulatory entities and other appropriate organizations of foreign governments or international organizations. 
We also reiterated our previous recommendation by stating that FSOC should establish formal collaboration and coordination policies for rulemaking. The full impact of the Dodd-Frank Act remains uncertain. Although federal agencies continue to implement the act through rulemakings, much work remains. For example, according to one estimate, regulators have finalized less than half of the total rules that may be needed to implement the act. Furthermore, sufficient time has not elapsed to measure the impact of those rules that are final and effective. As we previously noted, even when the act’s reforms are fully implemented, it will take time for the financial services industry to comply with the array of new regulations. The evolving nature of implementation makes isolating the effects of the Dodd-Frank Act on the U.S. financial marketplace difficult. This task is made more difficult by the many factors that can affect the financial marketplace, including factors that could have an even greater impact than the act. Recognizing these limitations and difficulties, we developed a multipronged approach to analyze current data and trends that might be indicative of some of the Dodd-Frank Act’s initial impacts, as institutions react to issued and expected rules. First, the act contains provisions that serve to enhance the resilience of certain bank and nonbank financial companies and reduce the potential for financial distress in any one of these companies to affect the financial system and economy. Specifically, the Dodd-Frank Act requires the Federal Reserve to impose enhanced prudential standards and oversight on bank holding companies with $50 billion or more in total consolidated assets and nonbank financial companies designated by FSOC. We developed indicators to monitor changes in certain SIFI characteristics. Although the indicators may be suggestive of the act’s impact, our indicators do not identify causal links between their changes and the act. 
Further, many other factors can affect SIFIs and, thus, the indicators. As new data become available, we expect to update and, as warranted, revise our indicators and create additional ones to cover other provisions. Second, we used difference-in-difference analysis to infer the act’s impact on the provision of credit by and the safety and soundness of bank SIFIs. The analysis is subject to limitations, in part because factors other than the act could be affecting these entities. Third, we analyzed the impact of several major rules that were issued pursuant to the Dodd-Frank Act and have been final for around a year or more. The 2007-2009 financial crisis demonstrated that some financial institutions, including some nonbank financial companies (e.g., AIG), had grown so large, interconnected, complex, and leveraged, that their failure could threaten the stability of the U.S. financial system and the global economy. Financial institutions, markets, and infrastructure that make up the U.S. financial system provide services to the U.S. and global economies, such as helping to allocate funds, allowing households and businesses to manage their risks, and facilitating financial transactions that support economic activity. The sudden collapses and near-collapses of major financial institutions, including major nonbank financial institutions, were among the most destabilizing events of the 2007-2009 financial crisis. In addition, large, complex financial institutions that are perceived to be “too big to fail” can increase uncertainty in periods of market turmoil and reinforce destabilizing reactions within the financial system. According to its legislative history, the Dodd-Frank Act contains provisions intended to reduce the risk of failure of a large, complex financial institution and the damage that such a failure could do to the economy. Such provisions include (1) establishing FSOC to identify and respond to emerging threats to the stability of the U.S. 
financial system; (2) authorizing FSOC to designate a nonbank financial company for Federal Reserve supervision if FSOC determines it could pose a threat to the financial stability of the United States based on the company’s size, leverage, interconnectedness, or other factors; and (3) directing the Federal Reserve to impose enhanced prudential standards and oversight on bank holding companies with $50 billion or more in total consolidated assets (referred to as bank SIFIs in this report) and nonbank financial companies designated by FSOC (referred to as nonbank SIFIs in this report). The Dodd-Frank Act also is intended to reduce market expectations of future federal rescues of large, interconnected, and complex firms using taxpayer dollars. Under the act, bank holding companies with $50 billion or more in total consolidated assets and nonbank financial companies designated by FSOC for Federal Reserve supervision are required to develop plans for their rapid and orderly resolution. Additionally, FDIC is given new orderly liquidation authority to act as a receiver of a troubled financial firm whose failure could threaten financial stability so as to protect the U.S. financial system and the wider economy. Some Dodd-Frank Act provisions may result in adjustments to SIFIs’ size, interconnectedness, complexity, leverage, or liquidity over time. We developed indicators to monitor changes in some of these SIFI characteristics. The size and complexity indicators reflect the potential for a single company’s financial distress to affect the financial system and economy. The leverage and liquidity indicators reflect a SIFI’s resilience to shocks or its vulnerability to financial distress. FSOC has not yet designated any nonbank financial firms for Federal Reserve supervision. As a result, we focus our analysis on U.S. bank SIFIs. Our indicators have limitations. For example, the indicators do not identify causal links between changes in SIFI characteristics and the act. 
Rather, the indicators track or begin to track changes in the size, complexity, leverage, and liquidity of SIFIs over the period since the Dodd-Frank Act was passed to examine whether the changes are consistent with the act. However, other factors—including the economic downturn, international banking standards agreed upon by the Basel Committee on Banking Supervision (Basel Committee), European debt crisis, and monetary policy actions—also affect bank holding companies and, thus, the indicators. These factors may have a greater effect than the Dodd-Frank Act on SIFIs. In addition, some rules implementing SIFI-related provisions have not yet been proposed or finalized. Thus, trends in our indicators include the effects of these rules only insofar as SIFIs have changed their behavior in response to issued rules and in anticipation of expected rules. In this sense, our indicators provide a baseline against which to compare future trends. Table 5 summarizes the changes in our bank SIFI indicators. The size indicators do not provide a clear trend between the third quarter of 2010 and the second quarter of 2012. Additionally, we have only one data point in the complexity indicator, but our data suggest that the largest bank SIFIs generally were more complex organizationally than other bank SIFIs. Lastly, the indicators suggest that bank SIFIs, on average, have become less leveraged since the third quarter of 2010, and their liquidity also appears to have improved. Trends in our leverage and liquidity indicators appear to be consistent with an improvement in SIFIs’ resilience to shocks. Ben S. Bernanke, “Financial Reform to Address Systemic Risk,” (Speech to the Council on Foreign Relations, Washington, D.C., Mar. 10, 2009). 
Limits on the size of a financial institution may prevent the institution from growing so large that it is perceived by the market as too big to fail, but such limits also may prevent the institution from achieving economies of scale and benefiting from diversification. We developed three indicators of size. The first indicator tracks the number of bank SIFIs. The second indicator measures a SIFI's size based on the total assets on its balance sheet. The third indicator measures the extent to which industry assets are concentrated among the individual SIFIs, reflecting a SIFI's size relative to the size of the industry. A limitation of these indicators is that they do not include an institution's off-balance sheet activities and thus may understate the amount of financial services or intermediation an institution provides. Furthermore, asset size alone is not an accurate determinant of systemic risk, as an institution's systemic risk significance also depends on other factors, such as its complexity and interconnectedness. As shown in figure 1, seven U.S. bank SIFIs had more than $500 billion in total consolidated assets (referred to as large bank SIFIs in this report) in the third quarter of 2010 and in the second quarter of 2012. These large bank SIFIs were considerably larger than the other bank SIFIs. We provided a draft of this report to CFPB, CFTC, FDIC, the Federal Reserve Board, FSOC, NCUA, OCC, OFR, SEC, and Treasury for review and comment. SEC and Treasury provided written comments that we have reprinted in appendixes VII and VIII, respectively. All of the agencies also provided technical comments, which we have incorporated, as appropriate. In their comments, the agencies neither agreed nor disagreed with the report's findings. In its letter, Treasury noted that FSOC agrees that successful implementation of the Dodd-Frank Act rulemakings will require member agencies to work together, even if such coordination is not specifically required under the Dodd-Frank Act.
Treasury also noted that FSOC has served as a forum for discussion among members and member agencies, through various FSOC meetings, committee meetings, and subcommittee meetings. Finally, the letter describes FSOC's efforts to continue monitoring potential risks to financial stability and to implement other statutory requirements. In its letter, SEC noted that it revised its guidance on economic analysis in March 2012, in part in response to a recommendation in our 2011 report that federal financial regulators more fully incorporate OMB's regulatory analysis guidance into their rulemaking policies. SEC's letter stated that the revised guidance already has improved the quality of economic analysis in its rulemakings and internal rule-writing processes. SEC also noted that FSOC has fostered a healthy and positive sense of collaboration among the financial regulators. SEC remains amenable to working with FSOC on formal coordination policies, as GAO previously recommended, but noted that FSOC's efforts should fully respect the independence of the respective member agencies regarding the substance of the rules for which they are responsible and the mission of FSOC itself. We are sending copies of this report to CFPB, CFTC, FDIC, the Federal Reserve Board, FSOC, NCUA, OCC, OFR, SEC, Treasury, interested congressional committees, members, and others. This report will also be available at no charge on our website at http://www.gao.gov. Should you or your staff have questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IX.
Our objectives in this report were to examine (1) the regulatory analyses, including benefit-cost analyses, federal financial regulators have performed to assess the potential impact of selected final rules issued pursuant to the Dodd-Frank Act; (2) how federal financial regulators consulted with each other in implementing selected final rules issued pursuant to the Dodd-Frank Act to avoid duplication or conflicts; and (3) what is known about the impact of the final Dodd-Frank Act regulations on the financial markets. To address the first two objectives, we limited our analysis to the final rules issued pursuant to the Dodd-Frank Act that were effective between July 21, 2011, and July 23, 2012, a total of 66 rules (see app. II). To identify these rules, we used a website maintained by the Federal Reserve Bank of St. Louis that tracks Dodd-Frank Act regulations. We corroborated the data with information on Dodd-Frank Act rulemaking compiled by the law firm Davis Polk & Wardwell LLP. To address our first objective, we reviewed statutes, regulations, GAO studies, and other documentation to identify the benefit-cost or similar analyses federal financial regulators are required to conduct in conjunction with rulemaking. For each of the 66 rules within our scope, we prepared individual summaries using a data collection instrument (DCI). The criteria used in the DCI were generally developed based on the regulatory analyses required of federal financial regulators and Office of Management and Budget (OMB) Circular A-4, which is considered best practice for regulatory analysis. We used the completed summaries to develop a table showing the extent to which the federal financial regulators addressed the criteria for each of the Dodd-Frank Act regulations. We selected 4 of the 66 rules for in-depth review, comparing the benefit-cost or similar analyses to specific principles in OMB Circular A-4. 
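The tallying step that produces the summary table can be sketched as follows. The rule names and criteria below are invented placeholders for illustration, not entries from our actual DCI.

```python
# Sketch of tallying DCI results across rules: for each rule we record
# which regulatory-analysis criteria the issuing agency addressed, then
# summarize coverage. Rule names and criteria here are invented examples.

CRITERIA = ["identified benefits", "identified costs", "considered alternatives"]

dci_results = {
    "Example Rule 1": {"identified benefits": True,  "identified costs": True,  "considered alternatives": False},
    "Example Rule 2": {"identified benefits": True,  "identified costs": False, "considered alternatives": False},
    "Example Rule 3": {"identified benefits": False, "identified costs": True,  "considered alternatives": True},
}

# Count how many rules addressed each criterion.
coverage = {c: sum(r[c] for r in dci_results.values()) for c in CRITERIA}
print(coverage)  # {'identified benefits': 2, 'identified costs': 2, 'considered alternatives': 1}
```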
We selected the rules for in-depth review based on whether the rule was deemed a major rule (i.e., whether it is anticipated to have an annual effect on the economy of $100 million or more) by the responsible agency and OMB. We generally found that the financial regulators do not state in the Federal Register notice whether the rule is major. However, we learned that regulators are required to submit major rules to GAO under the Congressional Review Act (CRA) for the purpose of ensuring that the regulators followed certain requirements in conducting the rulemaking, and GAO maintains a database of major rules. Our search of the CRA database showed that federal financial regulators issued 19 major Dodd-Frank Act rules within our scope. To further narrow the list of rules for in-depth review, we decided to include at least one rule from each of the federal financial regulators. We identified major rules issued by only three financial regulators: the Commodity Futures Trading Commission (CFTC), the Board of Governors of the Federal Reserve (Federal Reserve), and the Securities and Exchange Commission (SEC). In addition, the Department of the Treasury (Treasury) issued a major rule during the scope period. The Federal Reserve and Treasury each issued only one major rule during our scope period—the Debit Card Interchange Fee rule and the Assessment of Fees on Large Bank Holding Companies to Cover the Expenses of the Financial Research Fund, respectively. SEC and CFTC issued multiple major rules during the period. To further narrow the list of rules for in-depth review, we decided to include only rules implementing a new regulatory authority rather than amending a preexisting regulatory authority. For SEC, only one rule met this criterion—the Securities Whistleblower Incentives and Protections rule.
CFTC issued several major rules that met this criterion, so to further narrow the list of rules for in-depth review, we consulted with a former CFTC economist and solicited his opinion on whether it would be appropriate for GAO to assess the Real-Time Public Reporting of Swap Transaction Data rule, and he agreed. To compare these rules to the principles in Circular A-4, we developed a DCI with the principles and applied the DCI to all four rules. In conducting each individual analysis, we reviewed the Federal Register notices prepared by the agencies during the course of the rulemaking. We also interviewed officials from CFTC, the Federal Reserve, SEC, and Treasury to determine the extent to which benefit-cost or similar analyses were conducted. To address our second objective, we reviewed the Dodd-Frank Act, regulations, and studies, including GAO reports, to identify the coordination and consultation requirements federal financial regulators must meet in conjunction with rulemaking. For each of the 66 rules in our scope, we reviewed the rule releases to determine the rules on which agencies coordinated with other federal financial regulators and international financial regulators. From our review of the rule releases, we developed a table that shows the rules that involved coordination, the agencies involved, the nature of the coordination, whether coordination was required or voluntary, and whether the agencies coordinated with international regulators. Rules that may have involved interagency coordination in the rulemaking but did not expressly mention such coordination in the rule release are not included in this table. Of the 19 rules that we determined involved interagency coordination, we selected 3 rules to review in depth to assess how and the extent to which federal financial regulators coordinated, focusing on actions they took to avoid conflict and duplication in rulemakings.
In selecting rules to review in depth, we sought to include at least one rule that was jointly issued and therefore implicitly required coordination and at least one rule that was issued by a single agency and involved coordination with another agency. We also sought broad coverage of agencies issuing substantive Dodd-Frank Act rules. We ultimately selected two joint rules and one rule issued by a single agency, including rules issued by FDIC, OCC, the Federal Reserve, CFTC, and SEC. In reviewing each rule, we reviewed the Federal Register notices for each rule, and we interviewed officials from each agency to determine how and the extent to which coordination took place to avoid duplication and conflict. We also interviewed officials at FSOC and CFPB to get an understanding of their role in interagency coordination for Dodd-Frank Act rulemakings. To address our third objective, we took a multipronged approach to analyze what is known about the impact of the Dodd-Frank Act on the financial marketplace. First, the act contains provisions that serve to enhance the resilience of certain bank and nonbank financial companies and reduce the potential for financial distress in any one of these companies to affect the financial system and economy. Specifically, the Dodd-Frank Act requires the Federal Reserve to impose enhanced prudential standards and oversight on bank holding companies with $50 billion or more in total consolidated assets and nonbank financial companies designated by FSOC. For purposes of this report, we refer to these bank and nonbank financial companies as bank systemically important financial institutions (bank SIFIs) and nonbank systemically important financial institutions (nonbank SIFIs), respectively, or collectively as SIFIs. We developed indicators to monitor changes in certain characteristics of SIFIs that may be suggestive of the impact of these reforms. FSOC has not yet designated any nonbank financial firms for Federal Reserve enhanced supervision.
As a result, we focus our analysis on U.S. bank SIFIs. To understand the rationale behind the act's focus on enhanced SIFI regulation and oversight, we reviewed the legislative history of the act, the act itself, related regulations, academic studies, GAO and agency reports, and other relevant documentation. To inform our choice of indicators, we analyzed the provisions and related rulemakings most relevant to bank SIFIs. Our analysis and indicators for this report focus on bank SIFIs' asset size, interconnectedness, complexity, leverage, and liquidity. We developed our indicators of bank SIFIs' size, leverage, and liquidity using quarterly data for bank holding companies from SNL Financial and quarterly data on the gross domestic product (GDP) deflator from the Bureau of Economic Analysis, both for the period from the first quarter of 2006 to the second quarter of 2012. We developed our indicators of bank SIFIs' complexity using data from the Federal Reserve Board's National Information Center as of October 2012. As new data become available, we expect to update and, as warranted, revise our indicators and create additional indicators to cover other provisions. Second, we used difference-in-difference regression analysis to infer the act's impact on the provision of credit by and the safety and soundness of U.S. bank SIFIs. The key element of our analysis is that the Dodd-Frank Act subjects some bank holding companies, but not others, to enhanced oversight and regulation. Specifically, the act requires the Federal Reserve to impose a number of enhanced prudential standards on bank holding companies with total consolidated assets of $50 billion or more (bank SIFIs), while bank holding companies with assets of less than $50 billion (non-SIFI banks) are not subject to such enhanced oversight and regulation.
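This treatment/control comparison can be illustrated with a small arithmetic sketch of the difference in differences. The group averages below are invented for illustration and are not estimates from this report.

```python
# Hypothetical average funding costs (percent); these numbers are
# invented for illustration, not figures from the report.
sifi_before, sifi_after = 1.80, 1.30          # bank SIFIs (treatment group)
non_sifi_before, non_sifi_after = 2.10, 1.55  # non-SIFI banks (control group)

# Change over time within each group.
sifi_change = sifi_after - sifi_before              # -0.50
non_sifi_change = non_sifi_after - non_sifi_before  # -0.55

# The "difference in the differences" nets out trends common to both
# groups, leaving the inferred effect on the treatment group.
did_estimate = sifi_change - non_sifi_change
print(round(did_estimate, 2))  # 0.05
```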
As a result, we were able to compare funding costs, capital adequacy, asset quality, earnings, and liquidity for bank SIFIs and non-SIFI banks before and after the Dodd-Frank Act. All else being equal, the difference in the differences is the inferred effect of the Dodd-Frank Act on bank SIFIs. For our analysis, we used quarterly data on bank holding companies from SNL Financial and quarterly data on commercial banks and savings banks from FDIC and the Federal Financial Institutions Examination Council, all for the period from the first quarter of 2006 to the second quarter of 2012 (see app. IV for details). Lastly, for all of our indicators, we obtained and addressed high-level comments and suggestions from FSOC staff and two other market experts. Third, we analyzed the impact of several major rules that were issued pursuant to the Dodd-Frank Act and have been final for around a year or more. There were 44 final rules as of July 21, 2011, 7 of which were major rules. We judgmentally selected 4 of those 7 rules for impact analyses, based largely on data availability. Our selected rules implement provisions that serve specific investor or consumer protection purposes. We first analyzed the Federal Reserve's Debit Interchange Fees and Routing Rule (Regulation II). As part of that work, we reviewed selected statutes and regulations, analyzed available data and documents from the Federal Reserve, GAO, and market participants and experts, and interviewed agency officials and market experts. Additionally, we analyzed two SEC rules on asset-backed securities (ABS): Issuer Review of Assets in Offerings of ABS and Disclosure for ABS, required by Sections 945 and 943 of the act, respectively. To do this, we reviewed selected statutes and regulations and analyzed data on ABS issuances obtained from the Securities Industry and Financial Markets Association (SIFMA). Lastly, we analyzed SEC's rule on Shareholder Approval of Executive Compensation and Golden Parachute Compensation.
As part of that analysis, we reviewed selected regulations and analyzed available data on shareholder votes on executive compensation that we obtained from Institutional Shareholder Services, Inc., a proxy advisory firm that advises institutional investors on how to vote proxies and provides consulting services to corporations seeking to improve their corporate governance. For all of the data described above, we assessed the reliability of the data and found it to be reliable for our purposes. We conducted this performance audit from December 2011 to December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following table lists the Dodd-Frank Act rules that we identified as final and effective during the scope period for this review—July 21, 2011, through July 23, 2012. The following table lists the Dodd-Frank Act rules that we identified as final and effective during the scope period for our first review—July 21, 2010, through July 21, 2011. The Dodd-Frank Act contains several provisions that apply to nonbank financial companies designated by the Financial Stability Oversight Council for Federal Reserve supervision and enhanced prudential standards (nonbank SIFIs) and bank holding companies with $50 billion or more in total consolidated assets (bank SIFIs). Table 12 summarizes those provisions and the rulemakings, including their status, to implement those provisions. We conducted an econometric analysis to assess the impact of the Dodd-Frank Act's new requirements for bank SIFIs on (1) the cost of credit they provide and (2) their safety and soundness.
Our multivariate econometric model used a difference-in-difference design that exploits the fact that the Dodd-Frank Act subjects bank holding companies with total consolidated assets of $50 billion or more to enhanced regulation by the Federal Reserve but not others, so we can view bank holding companies with total consolidated assets of $50 billion or more (bank SIFIs) as the treatment group and other bank holding companies as the control group. We compared the changes in the characteristics of U.S. bank SIFIs over time to changes in the characteristics of other U.S. bank holding companies over time. All else being equal, the difference in the differences is the impact of new requirements for bank SIFIs primarily tied to enhanced regulation and oversight under the Federal Reserve. Our general regression specification is the following: y_bq = α_b + β_q + γ_q SIFI_bq + X′_bq Θ + ε_bq, where b denotes the bank holding company, q denotes the quarter, y_bq is the dependent variable, α_b is a bank holding company-specific intercept, β_q is a quarter-specific intercept, SIFI_bq is an indicator variable that equals 1 if bank holding company b is a SIFI in quarter q and 0 otherwise, X_bq is a list of other independent variables, and ε_bq is an error term. We estimated the parameters of the model using quarterly data on top-tier bank holding companies for the period from the first quarter of 2006 to the second quarter of 2012. The parameters of interest are the γ_q, the coefficients on the SIFI indicators in the quarters starting with the treatment start date of the third quarter of 2010 through the second quarter of 2012. The Dodd-Frank Act was enacted in July 2010 (the third quarter of 2010), so the SIFI indicator is equal to zero for all bank holding companies for all quarters from the first quarter of 2006 to the second quarter of 2010.
The SIFI indicator is equal to 1 for all bank holding companies with assets of $50 billion or more for the third quarter of 2010 through the second quarter of 2012, and the SIFI indicator is equal to zero for all other bank holding companies for those quarters. Thus, for quarters from the third quarter of 2010 to the second quarter of 2012, the parameter γ measures the average difference in the dependent variable between bank SIFIs and other bank holding companies in those quarters relative to the base quarter. We use different dependent variables (y_bq) to estimate the impacts of the new requirements for SIFIs on the cost of credit provided by bank SIFIs and on various aspects of bank SIFIs' safety and soundness, including capital adequacy, asset quality, earnings, and liquidity. Funding cost. A bank holding company's funding cost is the cost of deposits or liabilities that it then uses to make loans or otherwise acquire assets. More specifically, a bank holding company's funding cost is the interest rate it pays when it borrows funds. All else being equal, the greater a bank holding company's funding cost, the greater the interest rate it charges when it makes loans. We measure funding cost as an institution's interest expense as a percent of interest-bearing liabilities. Capital adequacy. Capital absorbs losses, promotes public confidence, helps restrict excessive asset growth, and provides protection to creditors. We use two alternative measures of capital adequacy: tangible common equity as a percent of total assets and tangible common equity as a percent of risk-weighted assets. Asset quality. Asset quality reflects the quantity of existing and potential credit risk associated with the institution's loan and investment portfolios and other assets, as well as off-balance sheet transactions. Asset quality also reflects the ability of management to identify and manage credit risk.
We measure asset quality as performing assets as a percent of total assets, where performing assets are equal to total assets less assets 90 days or more past due and still accruing interest, assets in non-accrual status, and other real estate owned. Earnings. Earnings are the initial safeguard against the risks of engaging in the banking business and represent the first line of defense against capital depletion that can result from declining asset values. We measure earnings as net income as a percent of total assets. Liquidity. Liquidity represents the ability to fund assets and meet obligations as they become due, and liquidity risk is the risk of not being able to obtain funds at a reasonable price within a reasonable time period to meet obligations as they become due. We use two different variables to measure liquidity. The first variable is liquid assets as a percent of volatile liabilities. This variable is similar in spirit to the liquidity coverage ratio introduced by the Basel Committee on Banking Supervision and measures a bank holding company’s capacity to meet its liquidity needs under a significantly severe liquidity stress scenario. We measure liquid assets as the sum of cash and balances due from depository institutions, securities (less pledged securities), federal funds sold and reverse repurchases, and trading assets. We measure volatile liabilities as the sum of federal funds purchased and repurchase agreements, trading liabilities (less derivatives with negative fair value), other borrowed funds, deposits held in foreign offices, and large time deposits held in domestic offices. Large time deposits are defined as time deposits greater than $100,000 prior to March 2010 and as time deposits greater than $250,000 in and after March 2010. The second liquidity variable is stable liabilities as a percent of total liabilities. 
This variable measures the extent to which a bank holding company relies on stable funding sources to finance its assets and activities. This variable is related in spirit to the net stable funding ratio introduced by the Basel Committee on Banking Supervision, which measures the amount of stable funding based on the liquidity characteristics of an institution's assets and activities over a 1-year horizon. We measure stable funding as total liabilities minus volatile liabilities as described earlier. Finally, we include a limited number of independent variables (X_bq) to control for factors that may differentially affect SIFIs and non-SIFIs in the quarters since the Dodd-Frank Act was enacted. We include these variables to reduce the likelihood that our estimates of the impact of new requirements for SIFIs are reflecting something other than the impact of the Dodd-Frank Act's new requirements for SIFIs. Nontraditional income. Nontraditional income generally captures income from capital market activities. Bank holding companies with more nontraditional income are likely to have different business models than those with more income from traditional banking activities. Changes in capital markets in the period since the Dodd-Frank Act was enacted may have had a greater effect on bank holding companies with more nontraditional income. If bank SIFIs typically have more nontraditional income than other bank holding companies, then changes in capital markets in the time since the Dodd-Frank Act was enacted may have differentially affected the two groups. We measure nontraditional income as the sum of trading revenue; investment banking, advisory, brokerage, and underwriting fees and commissions; venture capital revenue; insurance commissions and fees; and interest income from trading assets less associated interest expense, and we express nontraditional income as a percent of operating revenue. Securitization income.
Bank holding companies with more income from securitization are likely to have different business models than those with more income from traditional banking associated with an originate-to-hold strategy for loans. Changes in the market for securitized products in the period since the Dodd-Frank Act was enacted may thus have had a greater effect on bank holding companies with more securitization income. If bank SIFIs typically have more securitization income than other bank holding companies, then changes in the market for securitized products in the time since the Dodd-Frank Act was enacted may have differentially affected the two groups. We measure securitization income as the sum of net servicing fees, net securitization income, and interest and dividend income on mortgage-backed securities minus associated interest expense, and we express securitization income as a percent of operating revenue. Operating revenue is the sum of interest income and noninterest income less interest expense and loan loss provisions. Foreign exposure. Changes in other countries, such as the sovereign debt crisis in Europe, may have a larger effect on bank holding companies with more foreign exposure. If bank SIFIs typically have more foreign exposure than other bank holding companies, then changes in foreign markets may have differentially affected the two groups. We measure foreign exposure as the sum of foreign debt securities (held-to-maturity and available-for-sale), foreign bank loans, commercial and industrial loans to non-U.S. addressees, and foreign government loans. We express foreign exposure as a percent of total assets. Size. We include size because bank SIFIs tend to be larger than other bank holding companies, and market pressures or other forces not otherwise accounted for may have differentially affected large and small bank holding companies in the time since the Dodd-Frank Act was enacted.
We measure the size of a bank holding company as the natural logarithm of its total assets. TARP participation. We control for whether or not a bank holding company participated in the Troubled Asset Relief Program (TARP) to differentiate any impact that this program may have had from the impact of the Dodd-Frank Act. We also conducted several sets of robustness checks: We restricted our sample to the set of institutions with assets that are "close" to the $50 billion cutoff for enhanced prudential regulation for bank SIFIs. Specifically, we analyzed two restricted samples of bank holding companies: (1) bank holding companies with assets between $1 billion and $100 billion and (2) bank holding companies with assets between $25 billion and $75 billion. We examined different treatment start dates. Specifically, we allowed the Dodd-Frank Act's new requirements for SIFIs to have an impact in the third quarter of 2009, 1 year prior to the passage of the act. We did so to allow for the possibility that institutions began to react to the act's requirements in anticipation of the act being passed. We analyzed alternative measures of capital adequacy, including equity capital as a percent of total assets and Tier 1 capital as a percent of risk-weighted assets. We analyzed commercial banks and savings banks (banks). In this case, we identified a bank as a SIFI if it is a subsidiary of a SIFI bank holding company. We conducted our analysis using quarterly data on bank holding companies from the Federal Reserve Board and SNL Financial for the period from the first quarter of 2006 to the second quarter of 2012. We also used quarterly data on commercial banks and savings banks from the Federal Deposit Insurance Corporation (FDIC), Federal Financial Institutions Examination Council (FFIEC), and SNL Financial for the same time period for one of our robustness checks.
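The two-way fixed-effects specification described earlier can be sketched on synthetic data. The bank counts, quarters, and the "true" treatment effect of 0.5 below are invented for illustration; this is not GAO's estimation code.

```python
import numpy as np

# Sketch of the regression y_bq = a_b + b_q + g_q * SIFI_bq + e_bq,
# estimated by ordinary least squares with bank and quarter dummies.
# All numbers here are made up for illustration.
rng = np.random.default_rng(0)
n_banks, n_quarters = 20, 8
treat_start = 4                   # quarters >= 4 play the "post-enactment" role
is_sifi = np.arange(n_banks) < 5  # first 5 banks play the role of bank SIFIs

rows, y = [], []
for b in range(n_banks):
    for q in range(n_quarters):
        sifi_bq = 1.0 if (is_sifi[b] and q >= treat_start) else 0.0
        # bank effect + quarter effect + true treatment effect of 0.5 + noise
        y.append(0.1 * b + 0.05 * q + 0.5 * sifi_bq + rng.normal(0, 0.01))
        rows.append((b, q, sifi_bq))

# Design matrix: intercept, bank dummies and quarter dummies (one of each
# dropped to avoid collinearity), and one SIFI indicator per post quarter.
X = []
for b, q, sifi_bq in rows:
    bank_d = [1.0 if b == j else 0.0 for j in range(1, n_banks)]
    qtr_d = [1.0 if q == j else 0.0 for j in range(1, n_quarters)]
    gamma_d = [sifi_bq if q == j else 0.0 for j in range(treat_start, n_quarters)]
    X.append([1.0] + bank_d + qtr_d + gamma_d)

coef, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)
gammas = coef[-(n_quarters - treat_start):]  # the estimated quarter-by-quarter g_q
print(np.round(gammas, 2))                   # each close to the true effect of 0.5
```

Each estimated coefficient on the post-quarter SIFI indicators recovers the simulated treatment effect, net of the bank-specific and quarter-specific intercepts.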
The Dodd-Frank Act appears to be associated with an increase in bank SIFIs’ funding costs in the second quarter of 2012, but not in other quarters (see table 13). Over the period from the third quarter of 2010 to the second quarter of 2012, bank SIFIs’ funding costs ranged from about 0.02 percentage points lower to about 0.05 percentage points higher than they otherwise would have been in the absence of the Dodd-Frank Act. As a group, the estimates are jointly significant. However, the individual estimates are not significantly different from zero for quarters other than the second quarter of 2012. These estimates suggest that the Dodd-Frank Act’s new requirements for SIFIs have had little effect on bank SIFIs’ funding costs. To the extent that borrowing costs are a function of funding costs, the new requirements for SIFIs likely have had little effect on the cost of credit thus far. Our results suggest that the Dodd-Frank Act is associated with improvements in most aspects of bank SIFIs’ safety and soundness. Bank SIFIs appear to be holding more capital than they otherwise would have held since the Dodd-Frank Act was enacted. The quality of assets on the balance sheets of bank SIFIs also seems to have improved since enactment. The act is associated with higher earnings for bank SIFIs in the first four quarters after enactment. It is also associated with improved liquidity as measured by the extent to which a bank holding company is using stable sources of funding. Only liquidity measured by the capacity of a bank holding company’s liquid assets to cover its volatile liabilities has not clearly improved since the enactment of the act. Thus, the Dodd-Frank Act appears to be broadly associated with improvements in most indicators of safety and soundness for bank SIFIs. Our approach allows us to partially differentiate changes in funding costs, capital adequacy, asset quality, earnings, and liquidity associated with the Dodd-Frank Act from changes due to other factors. 
However, several factors make isolating and measuring the impact of the Dodd-Frank Act’s new requirements for SIFIs challenging. The effects of the Dodd-Frank Act cannot be differentiated from simultaneous changes in economic conditions, such as the pace of the recovery from the recent recession, or regulations, such as those stemming from Basel III, that may differentially affect bank SIFIs and other bank holding companies. In addition, many of the new requirements for SIFIs have yet to be implemented. For example, the Federal Reserve has indicated that it will impose a capital surcharge and liquidity ratios on at least some SIFIs, but the exact form and scope of these requirements is not yet known. Nevertheless, our estimates are suggestive of the initial effects of the Dodd-Frank Act on bank SIFIs and provide a baseline against which to compare future trends. The results of our robustness checks are as follows: Our results are generally robust to restricting the set of bank holding companies we analyze to those with assets of $1 billion-$100 billion. Our results are not generally robust to restricting the set of bank holding companies we analyze to those with assets of $25 billion-$75 billion, but this is likely to be a result of the small number of bank holding companies (29) that meet this criterion. Our results are generally robust to starting the “treatment” in the third quarter of 2009, 1 year prior to the passage of the Dodd-Frank Act. In addition, our estimates suggest that the impact of new requirements for SIFIs of the Dodd-Frank Act may have preceded the enactment of the act itself. This finding is consistent with the theory that bank holding companies began to change their behavior in anticipation of the act’s requirements, perhaps as information about the content of the act became available and the likelihood of its passage increased. 
However, there may be other explanations, including anticipation of Basel III requirements, reactions to stress tests, and market pressures to improve capital adequacy and liquidity. Our results for the impact on capital adequacy are generally similar for alternative measures of capital adequacy. Our results for banks’ funding costs, asset quality, earnings, and liquidity as measured by liquid assets as a percent of volatile liabilities were generally similar to our baseline results for bank holding companies, but our results for capital adequacy and liquidity as measured by stable liabilities as a percent of total liabilities were not. The differences may reflect the impact of nonbank subsidiaries on bank holding companies or a number of other factors. The Federal Reserve’s adoption of Regulation II (Debit Card Interchange Fees and Routing), which implements section 1075 of the Dodd-Frank Act, generally has reduced debit card interchange fees. However, debit card issuers, payment card networks, and merchants are continuing to adjust strategically to the rule; thus, the rule’s impact has not yet been fully realized. Typically, consumers use debit cards as a cashless form of payment that electronically accesses funds from a cardholder’s bank account. A consumer using a debit card authenticates and completes a transaction by entering a personal identification number (PIN) or providing a signature. The parties involved in a debit card transaction are (1) the customer or debit cardholder; (2) the bank that issued the debit card to the customer (issuer bank); (3) the merchant; (4) the merchant’s bank (called the acquirer bank); and (5) the payment card network that processes the transaction between the merchant acquirer bank and the issuer bank. In a debit transaction, the merchant receives the amount of the purchase minus a fee that it must pay to its acquirer bank. This fee includes the debit interchange fee that the acquirer bank pays to the issuer bank. 
Interchange fees generally combine an ad-valorem component, which depends on the amount of the transaction, and a fixed- fee component. Additionally, before Regulation II was implemented, fees varied more widely based on, among other things, the type of merchant. Although payment card networks do not receive the debit interchange fees, they set the fees. Debit cards represent a two-sided market that involves cardholders and merchants. Cardholders benefit if their cards are accepted by a wide range of merchants, and merchants benefit if their ability to accept cards results in higher sales. In theory, a card network sets its interchange fees to balance the demand on the two sides of the market. It sets interchange fees high enough to attract issuers to issue debit cards processed by the network but low enough for merchants to be willing to accept the debit cards. Before the enactment of section 1075 of the Dodd-Frank Act, debit interchange fees had been increasing, creating controversy in the industry about the appropriate level of debit interchange fees in the United States, which some have stated were among the highest in the world. For example, some merchants stated that network competition led to higher, not lower, interchange fees as networks strived to attract issuer banks (who ultimately receive interchange fee revenue). Section 1075 amends the Electronic Fund Transfer Act (EFTA) by adding a new section 920 regarding interchange transaction fees and rules for payment card transactions. As required by EFTA section 920, Regulation II establishes standards for assessing whether debit card interchange fees received by issuers are reasonable and proportional to the costs incurred by issuers for electronic debit transactions. The rule sets a cap on the maximum permissible interchange fee that an issuer may receive for an electronic debit transaction at $0.21 per transaction, plus 5 basis points multiplied by the transaction’s value. 
An issuer bank that complies with Regulation II’s fraud-prevention standards may receive no more than an additional 1 cent per transaction. The fee cap became effective on October 1, 2011. However, as required by EFTA section 920, the rule exempts from the fee cap issuers that have, together with their affiliates, less than $10 billion in assets, and transactions made using debit cards issued pursuant to government-administered payment programs or certain reloadable prepaid cards. In addition, Regulation II prohibits issuers and card networks from restricting the number of networks over which electronic debit transactions may be processed to less than two unaffiliated networks. This prohibition became effective on April 1, 2012. The rule further prohibits issuers and networks from inhibiting a merchant from directing the routing of an electronic debit transaction over any network allowed by the issuer. This provision became effective on October 1, 2011. Thus far, large banks that issue debit cards have experienced a decline in their debit interchange fees as a result of Regulation II, but small banks generally have not. As noted above, issuers that, together with their affiliates, have $10 billion or more in assets are subject to the debit card interchange fee cap. According to the Federal Reserve, 568 banks were subject to the fee cap in 2012 (covered issuers). Issuers below the $10 billion asset threshold are exempt from the fee cap (exempt issuers). According to the Federal Reserve, over 14,300 banks, credit unions, savings and loans, and savings banks were exempt from the fee cap in 2012. Initial data collected by the Federal Reserve indicate that covered issuers have experienced a significant decline in their debit interchange fees and fee income as a result of Regulation II. 
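The cap just described is a simple formula: 21 cents per transaction, plus 5 basis points of the transaction value, plus an optional 1-cent fraud-prevention adjustment for compliant issuers. A minimal sketch (the function name is ours):

```python
def max_interchange_fee(transaction_value, fraud_compliant=False):
    """Maximum permissible interchange fee under Regulation II, in dollars."""
    fee = 0.21 + 0.0005 * transaction_value   # 5 basis points = 0.05 percent
    if fraud_compliant:
        fee += 0.01                           # fraud-prevention adjustment
    return fee

# For a roughly $40 transaction with the fraud adjustment, the cap works
# out to $0.24, consistent with the post-rule average fee the Federal
# Reserve reported for covered issuers.
print(round(max_interchange_fee(40.00, fraud_compliant=True), 4))  # 0.24
```

Because the fixed 21-cent component dominates for small transactions, the cap binds very differently at the low end of the transaction-size distribution, a point the report returns to in its discussion of small-ticket merchants.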
Data published by the Federal Reserve show that 15 of 16 card networks provided a lower interchange fee, on average, to covered issuers after the rule took effect. Specifically, the data show that the average interchange fee received by covered issuers declined 52 percent, from $0.50 in the first three quarters of 2011 to $0.24 in the fourth quarter. During the same period, the interchange fee as a percentage of the average transaction value for covered issuers declined from 1.29 percent to 0.60 percent. Our own analysis also suggests that the fee cap is associated with reduced interchange fee income for covered banks. To further assess the impact of the fee cap on covered banks, we conducted an econometric analysis of debit and credit card interchange fee income earned by banks from the first quarter of 2008 through the second quarter of 2012. As discussed, Regulation II subjects covered issuers but not exempt issuers to the fee cap. This allows us to compare the incomes earned by covered and exempt banks before and after the fee cap’s effective date in the fourth quarter of 2011. All else being equal, the difference in post-cap income changes between the two groups can be inferred as the effect of the fee cap on interchange fee income earned by covered banks. Our estimates suggest that interchange fees collected by covered banks, as a percent of their assets, were about 0.007 to 0.008 percentage points lower than they otherwise would have been in the absence of the fee cap. For a bank with assets of $50 billion, this amounts to $3.5 million to $4 million in reduced interchange fee income. In comparison, Regulation II’s fee cap appears initially to have had a limited impact on exempt issuers. As we recently reported, initial data collected by the Federal Reserve indicate that card networks largely have adopted a two-tiered interchange fee structure after the implementation of Regulation II, to the benefit of exempt issuers. 
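Both headline numbers in the paragraph above are simple arithmetic, sketched below for illustration (the function names are ours): the 52 percent decline in the average fee, and the translation of a 0.007 to 0.008 percentage-point-of-assets estimate into dollars for a $50 billion bank.

```python
def pct_decline(before, after):
    # Percent decline from `before` to `after`
    return 100.0 * (before - after) / before

def pp_of_assets_to_dollars(percentage_points, total_assets):
    # x percentage points of assets = (x / 100) * total assets
    return (percentage_points / 100.0) * total_assets

# Average covered-issuer fee fell from $0.50 to $0.24:
print(round(pct_decline(0.50, 0.24)))               # 52 (percent)

# 0.007 to 0.008 percentage points of a $50 billion balance sheet:
print(round(pp_of_assets_to_dollars(0.007, 50e9)))  # 3500000, i.e., $3.5 million
print(round(pp_of_assets_to_dollars(0.008, 50e9)))  # 4000000, i.e., $4 million
```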
Data published by the Federal Reserve show that 15 of the 16 card networks provided a higher interchange fee, on average, to exempt issuers than to covered issuers after the rule took effect. The data further showed that the average interchange fee received by exempt issuers declined by $0.02, or around 5 percent, after the rule took effect—declining from $0.45 over the first three quarters of 2011 to $0.43 in the fourth quarter of 2011. Over the same period, the interchange fee as a percentage of the average transaction value for exempt issuers declined from 1.16 percent to 1.10 percent. Although the fee cap appeared to have a limited impact on exempt issuers, such issuers remain concerned about the potential for their interchange fee income to decline over the long term. For example, some have noted that (1) the prohibition on network exclusivity and routing restrictions may lead networks to lower their interchange fees, in part to encourage merchants to route debit card transactions through their networks; or (2) economic forces may cause networks not to maintain a two-tiered fee structure that provides a meaningful differential between fees for exempt and covered issuers. However, some merchants and others have noted that major card networks have adopted a two-tiered fee structure and have an incentive to maintain that structure to attract exempt issuers. Regulation II’s fee cap generally has reduced debit card interchange fees, which likely has resulted or will result in savings for merchants. According to the Federal Reserve and industry experts, the merchant acquirer market is competitive. Thus, the decrease in interchange fees likely has translated or will translate into lower merchant acquirer fees. Some noted that large merchants likely reaped immediate benefits from the fee cap, because their acquirer fees probably were reduced when interchange fees declined. 
In contrast, they noted that smaller merchants often opt for blended fee structures under which, for example, the merchants may be charged a flat fee per electronic payment transaction; such merchants may not immediately receive the benefit of decreases in interchange fees because they may still be locked into contracts with these fee structures. In either case, competition in the supply of acquirer services is expected to cause acquirer banks to adjust the fees they charge to merchants and pass on any savings to avoid losing merchant business. In its final rule, the Federal Reserve noted that merchants could be negatively affected if large issuers were able to persuade their customers to pay with credit cards rather than debit cards, since credit cards generally have higher interchange fees. While issuers can pursue this strategy, merchants also can provide incentives to consumers to encourage them to use debit cards instead of credit cards. The Dodd-Frank Act requires networks to allow merchants to offer discounts to consumers based on whether they pay by cash, check, debit card, or credit card. In addition, a recent report stated that an antitrust settlement between the Department of Justice and VISA and MasterCard requires the networks to loosen past restrictions on merchants’ ability to offer discounts to consumers based on the payment method, brand, and product. This allows merchants accepting cards from those networks to provide incentives to encourage customers to complete their debit transactions using their PIN rather than signature. Data on whether issuers or merchants are engaging in such strategies are not yet available. Some types of merchants may be adversely affected by Regulation II. As mentioned earlier, the fee cap generally led payment card networks to set their debit interchange fees at the level of the cap for covered issuers. 
However, the interchange fee for small-ticket transactions, or transactions that are generally under $15, was sometimes below the fee cap before Regulation II became effective. For example, according to the International Franchise Association and the National Council of Chain Restaurants, before Regulation II a $5 transaction could incur 11.75 cents in debit interchange fees. Under the current fee cap of 21 cents plus 0.05 percent of the transaction value, the interchange fee for a $5 covered transaction is 21.25 cents, about 80 percent higher. As a result, merchants that have a high volume of small-value transactions, such as quick-service restaurants, transit authorities, and self-service and vending operators, could be worse off after the adoption of Regulation II. It is not practical to measure the extent to which consumers in the many markets where debit transactions are possible have been affected by Regulation II. First, one probable outcome is that at least a fraction of the merchants have passed some of their cost savings on to consumers. As noted by the Federal Reserve, whether merchants reduce their prices as a result of lower interchange fees will depend on the competitiveness of the various retail markets. In a competitive market with low margins, merchants likely have to pass on at least part of their cost savings to consumers. On the other hand, the loss in debit interchange fee income by large banks may lead them to seek ways to recover that lost income. As mentioned by the Federal Reserve, banks may try to recoup lost interchange fee income by introducing new bank service and product fees, possibly making banking services too costly for at least some customers. Our analysis (discussed previously) suggests that covered banks have recovered some of their lost interchange fee revenue, such as through increased revenue from service charges on deposit accounts. 
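The small-ticket comparison cited above can be verified directly: applying the cap formula of 21 cents plus 0.05 percent of the transaction value to a $5 sale and comparing it with the 11.75-cent pre-rule fee quoted by the trade associations (variable names are ours):

```python
pre_rule_fee = 0.1175              # cited pre-Regulation II fee on a $5 sale, in dollars
capped_fee = 0.21 + 0.0005 * 5.00  # fee-cap formula applied to a $5 transaction
increase = 100.0 * (capped_fee - pre_rule_fee) / pre_rule_fee

print(round(capped_fee, 4))  # 0.2125
print(round(increase))       # 81, i.e., "about 80 percent higher"
```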
Historically, issuers have determined which and how many signature and PIN networks may process their debit card transactions. Before and after Regulation II, issuers generally use only one signature network (e.g., VISA or MasterCard) to process their debit card transactions that are completed using a signature. Additionally, as stated in the final rule, before Regulation II issuer banks, or in some cases, networks controlled the merchant routing of debit transactions. For example, an issuer bank could require a PIN transaction to be routed over a particular network, even if other PIN networks were available to route the transaction. The rule also states that, prior to Regulation II, issuer banks were able to limit the networks enabled on their cards through exclusive contracts with networks. For example, some issuers had agreed to restrict their cards’ signature debit functionality to a single signature debit network and their PIN debit functionality to the signature network’s affiliated PIN network. According to the Federal Reserve’s 2009 survey data of large issuers, most debit cards from large bank issuers carried only one PIN network, and the cards’ PIN and signature networks typically were affiliated with each other. Regulation II contains two provisions that serve to provide merchants with the option of selecting the network to process their debit card transactions and a greater number of network options. First, the rule prohibits all issuers and networks from inhibiting a merchant from directing the routing of a transaction over any network allowed by the issuer. This provision became effective on October 1, 2011. For example, if an issuer’s debit card has two or more PIN networks, the merchant rather than the issuer can choose which network processes a PIN transaction, such as the one charging the lowest interchange fee. 
Second, the rule prohibits all issuers and networks from restricting the number of networks over which debit transactions may be processed to fewer than two unaffiliated networks. This provision became effective on April 1, 2012. As a result, issuers no longer may allow only VISA’s or MasterCard’s signature and affiliated PIN networks to process their debit card transactions. Instead, such issuers would need to add an unaffiliated signature or PIN network if they do not already have an unaffiliated network. Regulation II’s prohibitions may have a limited impact on increasing competition and, in turn, lowering interchange fees, because issuers largely control which networks may process their debit card transactions. For example, issuers did not likely comply with Regulation II by adding a second unaffiliated signature network because, according to the final rule, networks and issuers stated it would be too costly to reconfigure cards and merchant equipment to enable the processing of two signature networks associated with one card. As a result, merchants likely have only one network option for transactions completed by signature. Additionally, issuers can comply by having an unaffiliated signature network and PIN network, which means that merchants may have only one network routing choice once a customer decides to use her signature or her PIN. Therefore, even though Regulation II provides merchants with the authority to choose the network over which to route debit card transactions, merchants may effectively have no choice of network for a given debit card transaction. Going forward, issuers may be able to act strategically to limit competition over debit card interchange fees through their control over which networks may process their debit card transactions. First, for covered transactions subject to the fee cap, both signature and PIN networks have an incentive to set their interchange fees at the fee cap. 
If a network lowered its fees below the cap, such as to attract merchant routing business, issuers using that network could replace it with a network that sets its fees at the cap. With networks charging similar interchange fees for covered transactions, merchants may not be able to use their network routing decisions to put downward pressure on such fees. Second, for exempt PIN transactions, merchants may be able to exert downward pressure on fees when issuers use two or more PIN networks to process their transactions. In this case, merchants can choose the network with the lowest fees and possibly induce the other networks to lower their fees. However, exempt issuers may be able to counter such pressure by dropping a network whose fees are too low or allowing only the PIN network (along with an unaffiliated signature network) with the highest fees to process their transactions. As discussed, merchants may be able to provide incentives to customers using cards issued by exempt banks to conduct a PIN rather than a signature transaction, so as to allow themselves more routing options. In response to Regulation II, VISA is undertaking strategies intended to attract merchant routing. First, VISA recently imposed a new monthly fixed acquirer fee that merchants must pay to accept VISA debit and credit cards. VISA also plans to reduce merchants’ variable fees so that merchants’ total fees associated with VISA transactions likely would be lower after the new fee structure’s implementation. Under its new fee structure, VISA could, for example, lower the interchange fees for VISA’s PIN network, Interlink, to attract merchant routing and make up at least some of its lost revenue by collecting the fixed fees. However, the extent to which VISA will be able to lower PIN debit interchange fees and gain transaction volume is limited. 
As with any network, if Interlink reduces its interchange fees too much, issuers could replace Interlink with another PIN network that offers higher fees. Second, according to VISA representatives, VISA’s signature network also is able to process PIN transactions, in essence automatically offering an additional PIN routing choice to merchants for cards that carry the VISA signature network. For example, in the past, a debit card that carried the VISA signature network and two other PIN networks usually would process a PIN transaction through one of the PIN networks. Now, the VISA check card signature network can continue to be the only option for routing signature debit transactions on that card but also become a third option for routing PIN debit transactions. For VISA to gain PIN transaction volume through VISA check cards, however, it must set the associated interchange fees at or below the fees set by the other available PIN networks. However, the extent to which VISA can do this is not yet clear. If issuers experienced declining interchange fee revenue from their use of VISA, they could switch signature networks, for example, to MasterCard. We conducted an econometric analysis to assess the impact of the Dodd-Frank Act’s debit interchange fee standard on covered banks. Our multivariate econometric model used a difference-in-difference design that exploits the fact that some banks are automatically covered by the debit interchange fee requirements but others are not, so we can view covered banks as the treatment group and exempt banks as the control group. We then compared changes in various types of income earned by covered banks over time to changes in those types of income earned by exempt banks over time. All else being equal, the difference in the differences is the impact of the new debit interchange fee requirements. 
Our regression specification is the following: y_bq = α_b + β_q + γ_q COVERED_bq + X′_bq Θ + ε_bq, where b denotes the bank, q denotes the quarter, y_bq is the dependent variable, α_b is an institution-specific intercept, β_q is a quarter-specific intercept, COVERED_bq is an indicator variable that equals 1 if bank b is covered by the debit interchange standard in quarter q and 0 otherwise, X_bq is a list of other independent variables, and ε_bq is an error term. We estimate the parameters of the model using quarterly data for banks for the period from the first quarter of 2008 to the second quarter of 2012. The parameters of interest are the γ_q, the coefficients on the covered bank indicators in the quarters after the treatment start date of the fourth quarter of 2011. The debit interchange standard was effective October 1, 2011 (the fourth quarter of 2011), so the covered bank indicator is equal to zero for all banks for all quarters from the first quarter of 2008 to the third quarter of 2011. For all quarters from the fourth quarter of 2011 to the second quarter of 2012, the covered bank indicator is equal to one for all covered banks and equal to zero for all exempt banks. Thus, for quarters from the fourth quarter of 2011 to the second quarter of 2012, all else being equal, the parameter γ_q measures the average difference in the dependent variable between covered and exempt banks in quarter q relative to the base quarter. We used lists of covered institutions provided by the Federal Reserve to identify which banks in our sample are required to comply with debit card interchange fee standards in each quarter and which are not. We assumed that any institution not explicitly identified as a covered institution was exempt. 
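The difference-in-difference specification above can be illustrated with a small simulation. The sketch below is ours: it uses invented data, a single pooled post-period treatment coefficient rather than the quarter-specific γ coefficients the report estimates, and ordinary least squares with bank and quarter dummies.

```python
import numpy as np

rng = np.random.default_rng(0)
n_banks, n_qtrs, treat_start = 60, 12, 8   # treatment begins at quarter index 8
covered = np.arange(n_banks) < 20          # hypothetical: first 20 banks "covered"
true_gamma = -0.0075                       # assumed true effect (percent of assets)

# Simulate y_bq = alpha_b + beta_q + gamma * COVERED_bq + noise
alpha = rng.normal(0.05, 0.01, n_banks)    # bank fixed effects
beta = rng.normal(0.0, 0.002, n_qtrs)      # quarter fixed effects
y, X = [], []
for b in range(n_banks):
    for q in range(n_qtrs):
        treated = int(covered[b] and q >= treat_start)
        y.append(alpha[b] + beta[q] + true_gamma * treated
                 + rng.normal(0, 0.001))
        # Design: bank dummies, quarter dummies (first quarter dropped),
        # and the treatment indicator in the last column
        row = np.zeros(n_banks + (n_qtrs - 1) + 1)
        row[b] = 1.0
        if q > 0:
            row[n_banks + q - 1] = 1.0
        row[-1] = treated
        X.append(row)

coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
gamma_hat = coef[-1]
print(round(gamma_hat, 4))   # close to the simulated -0.0075
```

With bank and quarter fixed effects absorbing level differences between institutions and common shocks across quarters, the coefficient on the treatment indicator recovers the simulated effect; the report's actual estimation instead interacts the covered indicator with each post-period quarter.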
We used different dependent variables (y_bq) in order to estimate the impacts of the debit interchange standard on various sources of income earned by covered banks, including bank card and credit card interchange fees, service charges on deposit accounts in domestic offices, total non-interest income, total interest income, and total income. Finally, we included size as an independent variable (X_bq) to control for factors correlated with size that may differentially affect exempt and covered banks in the quarters since the debit interchange standard went into effect. We measured the size of a bank as the natural logarithm of its total assets. We included this variable to reduce the likelihood that our estimates of the impact of the debit interchange standard are reflecting something else. To assess the impact of debit interchange fee regulation on covered institutions, we analyzed commercial banks and savings banks (banks) for the period from the first quarter of 2008 to the second quarter of 2012 using data from the Federal Reserve, the Federal Deposit Insurance Corporation (FDIC), and the Federal Financial Institutions Examination Council (FFIEC). We excluded savings associations and credit unions from our analysis, even though they are subject to the debit card interchange fee standards. For much of the period we analyzed, savings associations filed quarterly Thrift Financial Reports, but these filings did not include the information we required for our analysis, such as income earned from bank card and credit card interchange fees, for every quarter. Similarly, credit union filings also do not include the information we required for our analysis. Table 14 shows the estimated differences in fees and income as a percent of assets for covered banks relative to what they would have earned in the absence of the debit interchange fee standard, all else being equal. 
Our estimates suggest that the debit interchange fee standard is associated with: Lower bank card and credit card interchange fees collected by covered banks. After the effective date, interchange fees collected by covered banks, as a percent of assets, were about 0.007 to 0.008 percentage points lower than they otherwise would have been. For a bank with assets of $50 billion, this amounts to $3.5 million to $4 million in reduced bank card and credit card interchange fees. Higher service charges on deposit accounts in domestic offices for covered banks. After the effective date, service charges collected by covered banks, as a percent of assets, were about 0.004 to 0.007 percentage points higher than they otherwise would have been. For a bank with assets of $50 billion, this amounts to $2 million to $3.5 million in additional service charges. No significant change in overall non-interest income for covered banks. Non-interest income—of which both interchange fees and service charges are components—earned by covered banks was about 0.09 to 0.13 percentage points lower as a percent of assets than it would have been in the first two quarters after the effective date and about 0.03 percentage points higher in the third quarter after the effective date. However, these estimates are not statistically significant at the 5-percent level. Increased interest income in the first two quarters after the effective date but no significant increase since. Interest income earned by covered banks, as a percent of assets, was about 0.03 percentage points higher than it would have been in the first two quarters after the effective date. It was 0.02 percentage points higher in the third quarter after the effective date, but this estimate is not statistically significant at the 5-percent level. No significant change in total income. 
Total income—which is composed of interest and non-interest income—earned by covered banks after the effective date, as a percent of assets, ranged from 0.10 percentage points lower to 0.05 percentage points higher, but these estimates are not statistically significant at the 5-percent level. To assess the robustness of our estimates, we examined different treatment start dates. Specifically, we allowed the debit fee standard to have an impact starting in the fourth quarter of 2010—1 year prior to the rule’s effective date—on banks that were covered in the fourth quarter of 2011. We did so to allow for the possibility that institutions began to react to the debit fee standard in anticipation of the rule being passed. Our estimates suggest that changes in covered banks’ interchange fee income and service charge income generally did not occur until after the effective date and also that significant changes in non-interest income, interest income, and total income for covered banks generally did not precede the rule’s effective date. Our approach allows us to partially differentiate changes in various types of income earned by covered banks associated with the debit interchange fee cap from changes due to other factors. However, several factors make isolating and measuring the impact of the cap for covered banks challenging. In particular, the effects of the cap cannot be differentiated from simultaneous changes in economic conditions, regulations, or other changes that may differentially affect covered banks. Nevertheless, our estimates are suggestive of the initial effects of the cap on covered banks and provide a baseline against which to compare future trends. In addition to the contact named above, Richard Tsuhara (Assistant Director), Silvia Arbelaez-Ellis, Bethany Benitez, William R. 
Chatlos, Philip Curtin, Rachel DeMarcus, Timothy Guinane, Courtney LaFountain, Thomas McCool, Marc Molino, Patricia Moye, Susan Offutt, Robert Pollard, Christopher Ross, Jessica Sandler, and Joseph Weston made key contributions to this report.
The Dodd-Frank Act requires or authorizes various federal agencies to issue hundreds of rules to implement reforms intended to strengthen the financial services industry. GAO is required to annually study financial services regulations. This report examines (1) the regulatory analyses federal agencies performed for rules issued pursuant to the Dodd-Frank Act; (2) how the agencies consulted with each other in implementing the final rules to avoid duplication or conflicts; and (3) what is known about the impact of the Dodd-Frank Act rules. GAO identified 66 final Dodd-Frank Act rules in effect between July 21, 2011, and July 23, 2012. GAO examined the regulatory analyses for the 54 regulations that were substantive and thus required regulatory analyses; conducted case studies on the regulatory analyses for 4 of the 19 major rules; conducted case studies on interagency coordination for 3 other rules; and developed indicators to assess the impact of the act’s systemic risk provisions and regulations. Federal agencies conducted the regulatory analyses required by various federal statutes for all 54 regulations issued pursuant to the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) that GAO reviewed. As part of their analyses, the agencies generally considered, but typically did not quantify or monetize, the benefits and costs of these rules. Most of the federal financial regulators, as independent regulatory agencies, are not subject to executive orders that require comprehensive benefit-cost analysis in accordance with guidance issued by the Office of Management and Budget (OMB). Although most financial regulators are not required to follow OMB's guidance, they told GAO that they attempt to follow it in principle or spirit. GAO's review of selected rules found that regulators did not consistently follow key elements of the OMB guidance in their regulatory analyses. 
For example, while some regulators identified the benefits and costs of their chosen regulatory approach in proposed rules, they did not compare the benefits and costs of their chosen approach with those of alternative approaches. GAO previously recommended that regulators more fully incorporate the OMB guidance into their rulemaking policies, and the Office of the Comptroller of the Currency and the Securities and Exchange Commission have done so. By not more closely following OMB's guidance, other financial regulators continue to miss an opportunity to improve their analyses. Federal financial agencies continue to coordinate on rulemakings informally in order to reduce duplication and overlap in regulations and for other purposes, but interagency coordination does not necessarily eliminate the potential for differences in related rules. Agencies coordinated on 19 of the 54 substantive regulations that GAO reviewed. For most of the 19 regulations, the Dodd-Frank Act required the agencies to coordinate, but agencies also voluntarily coordinated with other U.S. and international regulators on some of their rulemakings. According to the regulators, most interagency coordination is informal and conducted at the staff level. GAO's review of selected rules shows that differences between related rules may remain even when coordination occurs. According to regulators, such differences may result from differences in their jurisdictions or the markets they oversee. Finally, the Financial Stability Oversight Council (FSOC) has not yet implemented GAO's previous recommendation to work with regulators to establish formal interagency coordination policies. Most Dodd-Frank Act regulations have not been finalized or have not been in place long enough for their full impacts to materialize. Recognizing these and other limitations, GAO took a multipronged approach to assess the impact of some of the act's provisions and rules, with an initial focus on the act's systemic risk goals.
First, GAO developed indicators to monitor changes in certain characteristics of U.S. bank holding companies subject to enhanced prudential regulation under the Dodd-Frank Act (U.S. bank SIFIs). Although the indicators do not identify causal links between their changes and the act—and many other factors can affect SIFIs—some indicators suggest that since 2010 U.S. bank SIFIs, on average, have decreased their leverage and enhanced their liquidity. Second, empirical results of GAO's regression analysis suggest that, to date, the act may have had little effect on U.S. bank SIFIs' funding costs but may have helped improve their safety and soundness. GAO plans to update its analyses in future reports, including adding indicators for other Dodd-Frank Act provisions and regulations. GAO is not making new recommendations in this report but reiterates its 2011 recommendations that the federal financial regulators more fully incorporate OMB’s guidance into their rulemaking policies and that FSOC work with federal financial regulators to establish formal interagency coordination policies for rulemaking. The agencies provided written and technical comments on a draft of this report, and neither agreed nor disagreed with the report’s findings.
According to senior SBA officials in headquarters and the field, several aspects of the current organizational alignment contribute to the challenges faced by SBA management. The problem areas include cumbersome communication links between headquarters and field units; complex, overlapping organizational relationships; confusion about the district offices’ primary customer; and a field structure not consistently matched with mission requirements. According to the agency scorecard report for SBA, while SBA recognizes the need to restructure, little progress has been made to date. In response to our findings and additional challenges identified by OMB and the SBA Inspector General, SBA drafted a 5-Year Workforce Transformation Plan. The 1990s realignment—in which the regions were downsized, but not eliminated, and the Office of Field Operations was created, but never fully staffed—resulted in the cumbersome communication links between headquarters and field units, according to senior SBA officials in headquarters and the field. The Office of Field Operations had fewer than 10 staff at the time of our review, and senior SBA officials told us that it would be impossible for such a small office to facilitate the flow of information between headquarters and district offices as well as was done by the 10 regional offices when each region had its own liaison staff. As a result, headquarters program offices sometimes communicate with the district offices directly, and sometimes they go through the Office of Field Operations. To further complicate communication, the regional offices are still responsible for monitoring goals and coordinating administrative priorities for the district offices. Officials described how these multiple lines of communication have led to district staff being on the receiving end of conflicting or redundant requests.
While some SBA officials felt that the regions had a positive effect on communication between headquarters and the districts, others felt that the regions were an unnecessary layer of management. The SBA Inspector General’s office found similar problems with communication within SBA when it conducted management challenge discussion groups with almost 50 senior officials from SBA headquarters, regional, and district offices. SBA has recognized that as it transforms itself, it needs to make the lines of communication between the districts, regions, and headquarters clearer to help bring about quick, effective decision-making. SBA plans to increase the responsibilities of the regional offices, perhaps by adding a career deputy regional administrator to assist the Regional Administrator in overseeing the district offices. Under SBA’s draft plan, the deputy would also work closely with the Office of Field Operations to coordinate program delivery in the field. We also found evidence of complex, overlapping organizational relationships, particularly among field and headquarters units. For example, district staff working on SBA loan programs report to their district management, while loan processing and servicing center staff report directly to the Office of Capital Access in headquarters. Yet, district office loan program staffs sometimes need to work with the loan processing and servicing centers to get information or to expedite loans for lenders in their district. Because loan processing and servicing centers report directly to the Office of Capital Access, requests that are directed to the centers sometimes must go from the district through the Office of Capital Access then back to the centers. District managers and staff said that sometimes they cannot get answers to questions when lenders call and that they have trouble expediting loans because they lack authority to direct the centers to take any action. 
Lender association representatives said that the lines of authority between headquarters and the field can be confusing and that practices vary from district to district. Figure 1 depicts the variety of organizational relationships we found between SBA headquarters and field units. SBA plans to eliminate the current complicated overlapping organizational relationships between field organizations and headquarters organizations by consolidating functions and establishing specific lines of authority. SBA’s draft transformation plan states that this effort will reduce management layers and provide a more efficient management structure. Specifically, SBA plans to further centralize loan processing, servicing, oversight, and liquidation functions; eliminate area offices for surety bonds and procurements by making regional or district offices responsible; and move oversight for entrepreneurial development programs to district offices. We found disagreement within SBA over the primary customer of the district offices. Headquarters executives said that the district offices primarily serve small businesses, while district office officials told us that their primary clients are lenders. The headquarters officials said that the role of the district office was in transition and that, because many lending activities had been centralized, the new role for the district offices was to work with small businesses. However, the district office managers said that their performance ratings were weighted heavily on aspects of loan activity. Moreover, there is only one program—8(a) business development—through which district offices typically work directly with small businesses, further reinforcing the perception of the district managers that lenders rather than small businesses are their primary customers. 
According to SBA’s transformation plan, the mission of its districts will become one of marketing SBA’s continuum of services, focusing on the customer, and providing entrepreneurial development assistance. SBA stated that over the next 5 years, it is fully committed to making fundamental changes at the district level, changes that have been discussed for years, but have never been fully implemented. To begin this change, SBA plans to test specific strategies for focusing district offices’ goals and efforts on outreach and marketing of SBA services to small businesses and on lender oversight in three offices during fiscal year 2002. SBA plans to implement the results in 10-20 districts in fiscal year 2003. As part of this change, SBA will need to carefully consider how the new mission of its district offices will affect the knowledge, skills, and abilities—competencies—district staff will need to be successful in their new roles. If competency gaps are identified, SBA will need to develop recruitment, training, development, and performance management programs to address those gaps. SBA managers said that, in some cases, the current field structure does not consistently match mission requirements. For example, the creation of loan processing and servicing centers moved some, but not all, loan-related workload out of the district offices. District offices retained responsibility for the more difficult loans and loans made by infrequent lenders. Similarly, the regional offices were downsized, but not eliminated during the 1990s. In addition, they said that some offices and centers are not located to best accomplish the agency’s mission. For example, Iowa has two district offices located less than 130 miles apart, and neither manages a very large share of SBA’s lending program or other workload. SBA also has a loan-related center located in New York City, a very high-cost area where it has trouble attracting and retaining staff.
Figure 2 shows the locations of SBA offices around the country. SBA officials also stressed that congressional direction has played a part in SBA’s current structure. SBA officials pointed out that Congress has created many new offices, programs, aspects of existing programs, and pilot projects and has prescribed the reporting relationship, grade, and/or type of appointment for several senior SBA officials. We found 78 offices, programs, or program changes that were created by laws since 1961, with most of the changes occurring in the 1980s and 1990s. Eleven SBA staff positions and specific reporting relationships were also required by law. In its transformation plan, SBA discusses its difficulty with matching its field structure with mission requirements and states that in order for the field structure to reflect the new mission and customer focus, consolidation of functions and the elimination or reduction of redundant offices may be necessary. The result of consolidations will be a streamlined organization with reduced management layers and an increased span of control for the field organizations that remain. For example, over the course of the 5-year plan, SBA plans to consolidate all loan processing, servicing, and liquidation into fewer centers, but give them an expanded role for handling all the functions currently carried out in the district offices. Integrating personnel, programs, processes, and resources to support the most efficient and effective delivery of services—organizational alignment—is key to maximizing an agency’s performance and ensuring its accountability. The often difficult choices that go into transforming an organization to support its strategic and programmatic goals have enormous implications for future decisions.
Our work has shown that the major elements that underpin a successful transformation—and that SBA should consider employing—include strategic planning; strategic human capital management; senior leadership and accountability; alignment of activities, processes, and resources to support mission achievement; and internal and external collaboration. Proactive organizations employ strategic planning to determine and reach agreement on the fundamental results the organization seeks to achieve, the goals and measures it will set to assess programs, and the resources and strategies it will need to achieve its goals. Strategic planning is used to drive programmatic decision-making and day-to-day actions and, thereby, help the organization be proactive, able to anticipate and address emerging threats, and take advantage of opportunities, rather than remain reactive to events and crises. Leading organizations, therefore, understand that strategic planning is not a static or occasional event, but a continuous, dynamic, and inclusive process. According to the agency scorecard report, SBA has not articulated a clear vision of what role it should fill in the marketplace. In our review of SBA’s fiscal year 2000 performance report and fiscal year 2002 performance plan, we reported that we had difficulty assessing SBA’s progress in achieving its goals because of weaknesses in its performance measures and data. We said that SBA should more clearly link strategies to measurable performance indicators, among other things. SBA said it has made adjustments to its managing for results process and now has identified specific performance parameters that must be met. Additionally, SBA recognizes the need for its workforce transformation plan and 5-Year Strategic Plan to complement each other.
People—or human capital—are an organization’s most important asset and define its character, affect its capacity to perform, and represent its knowledge base. We have recently released an exposure draft of a model of strategic human capital management that highlights the kinds of thinking that agencies should apply and steps they can take to manage their human capital more strategically. The model focuses on four cornerstones for effective human capital management—leadership; strategic human capital planning; acquiring, developing, and retaining talent; and results-oriented organizational cultures—and a set of associated critical success factors that SBA and other federal agencies may find useful in helping to guide their efforts. In its workforce transformation plan, SBA said that it recognizes that employees are its most valuable asset. It plans to emphasize the importance of human capital by clearly defining new agency functions and identifying and developing the skills and competencies required to carry out the new mission. SBA also plans, beginning in fiscal year 2002, to conduct a comprehensive skill and gap analysis for all employees. In addition, SBA will increase its emphasis on its two succession planning programs, the Senior Executive Service Candidate Development Program and the District Director Development Program, to recruit qualified individuals for future leadership roles. SBA also said that it plans to increase the number of professional development opportunities for employees to ensure that they can build missing competencies. Senior leadership and commitment to change are essential. Additionally, high-performing organizations have recognized that a key element of an effective performance management system is to create a “line of sight” that shows how individual responsibilities and day-to-day activities are intended to contribute to organizational goals.
In addition to creating “lines of sight,” a performance management system should encourage staff to focus on performing their duties in a manner that helps the organization achieve its objectives. The SBA Administrator has demonstrated his commitment to transforming SBA by tasking his Deputy Administrator and Chief Operating Officer with coordinating the implementation of SBA’s 5-year workforce transformation plan. He also said that the transformation plan will complement the agency’s 5-Year Strategic Plan and that SBA’s successes will be measured by the successes of its clients. These are important steps in aligning expectations within the agency toward agency goals. As SBA begins to implement its transformation plan, it will also be important to be certain that agency goals are reflected in the performance objectives and ratings of SBA’s senior executives and the performance appraisal systems for lower-level employees. Sustained senior management attention to implementation of the plan and support from key internal and external stakeholders will be important ingredients in the ultimate success or failure of SBA’s transformation. An organization’s activities, core processes, and resources must be aligned to support its mission and help it achieve its goals. Leading organizations start by assessing the extent to which their programs and activities contribute to fulfilling their mission and intended results. They often find, as our work suggested, that their organizational structures are obsolete and that levels of hierarchy or field-to-headquarters ratios must be changed. Similarly, as priorities change, resources must be moved and workforces redirected to meet changing demands. According to the President’s Management Agenda, while SBA recognizes the need to restructure, little progress has been made to date, and SBA has not translated the benefits of asset sales and technological improvements into human resource efficiencies.
In response, SBA drafted a 5-Year Workforce Transformation Plan intended to adjust its programs and delivery mechanisms to reflect new ways of doing business and the changing needs of its clients. SBA said that it plans to continue with asset sales, to enhance technology by using contractors, and to use technology to move work to people—more of whom will be deployed at smaller facilities in the future. There is also a growing understanding that all meaningful results that agencies hope to achieve are accomplished through networks of governmental and nongovernmental organizations working together toward a common purpose. Internally, leading organizations seek to provide managers, teams, and employees at all levels the authority they need to accomplish programmatic goals and work collaboratively to achieve organizational outcomes. Communication flows up and down the organization to ensure that line staffs have the ability to provide leadership with the perspective and information that the leaders need to make decisions. Likewise, senior leaders keep the line staff informed of key developments and issues so that the staff can best contribute to achieving organizational goals. SBA has long understood the need for collaboration. In the late 1980s, SBA shifted its core functions of direct loan making and entrepreneurial assistance to reliance on resource partners to deliver SBA programs directly. This shift allowed SBA to greatly increase its loan volume and the number of clients served. However, SBA has lost much of its direct connection with its small business owner clients. SBA has only recently begun to develop the appropriate oversight tools for its resource partners and the appropriate success measures for its programs and staff. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have at this time.
The Small Business Administration (SBA) has made organizational structure and service delivery changes during the past 10 years. However, ineffective lines of communication, confusion over the mission of district offices, complicated and overlapping organizational relationships, and a field structure not consistently matched with mission requirements all combine to impede SBA staff efforts to deliver services effectively. SBA's structural inefficiencies stem in part from realignment efforts during the mid-1990s that changed SBA's functions but left aspects of the previous structure intact, congressional influence over the location of field offices and centers, and legislative requirements such as specified reporting relationships. In response to GAO's findings and additional challenges identified by the Office of Management and Budget and the SBA Inspector General, SBA recently announced a draft 5-year workforce transformation plan that discusses many of GAO's findings regarding the difficulties posed by its current structure. Organizational alignment is crucial if an agency is to maximize its performance and accountability. As SBA executes its workforce transformation plan, it should employ strategies common to successful transformation efforts both here and abroad. Successful efforts begin with instilling senior-level leadership, responsibility, and accountability for organizational results and transformation efforts. Organizations that have successfully undertaken transformation efforts also typically use strategic planning and human capital management, alignment of activities, processes, and resources, and internal and external collaboration to underpin their efforts.
The PMAs—Alaska Power Administration (Alaska), Bonneville Power Administration (Bonneville), Southeastern Power Administration (Southeastern), Southwestern Power Administration (Southwestern), and Western Area Power Administration (Western)—were established from 1937 through 1977 to sell and transmit electricity generated mainly from federal hydropower facilities. With the exception of Alaska, the PMAs do not own or operate any of the power generation facilities. Most of these facilities were constructed and continue to be owned and operated by the Department of the Interior’s Bureau of Reclamation (Bureau) or the U.S. Army Corps of Engineers (Corps). The Bureau and the Corps constructed these facilities as part of a larger effort in developing multipurpose water projects that have other functions in addition to power generation, including flood control, irrigation, navigation, and recreation. The PMAs, with the exception of Southeastern, have constructed and continue to own and operate a combined total of nearly 33,000 miles of transmission lines to carry out the PMAs’ role in selling and transmitting electric power. Power sold by the five PMAs accounted for about 3 percent of all power generated in the United States in 1993. The PMAs vary widely in their operating characteristics and scope of activities. Alaska has 88 miles of transmission lines, and sales are limited to two areas in the State of Alaska. Southeastern owns no transmission facilities and relies on the transmission services of other utilities to transmit the power that it sells to customers in all or parts of 11 states. Southwestern is comparable to Southeastern in terms of sales volume but owns and operates about 1,400 miles of transmission lines in all or parts of six states. Western owns and operates over 16,000 miles of transmission lines serving customers in all or parts of 15 states. 
Bonneville owns and operates over 14,000 miles of transmission lines and sells to customers in all or parts of eight states. In 1994, Bonneville accounted for about 69 percent of total PMA revenues. Figure 1 presents the service area and fiscal year 1994 operating revenue for each PMA. In addition, table I.2 in appendix I shows some operating statistics, including the amount of generating capacity used to generate the power sold by the PMAs, and the number of power plants, miles of transmission lines, and employees for each PMA, as of September 30, 1994. Each PMA has an administrator who is appointed by the Secretary of Energy. Each administrator is authorized to make decisions regarding the operation of the PMA, although the authority and duties of the administrator are subject to the supervision and direction of the Secretary. The administrators testify before the Congress on the PMAs’ budgets, which are submitted as part of DOE’s annual federal budget. DOE establishes each PMA’s personnel limits as part of DOE’s total personnel ceiling. The administrator also has authority to propose rate adjustments to meet projected revenue needs. The Deputy Secretary of Energy is responsible for approving interim rate adjustments for all of the PMAs except Bonneville. The Federal Energy Regulatory Commission has final approval authority for all of the PMAs’ rates. In addition, the administrators work with numerous federal, state, and local agencies on issues such as flood control, fish and wildlife protection, and irrigation. For example, Bonneville is required to work with the Pacific Northwest Electric Power and Conservation Planning Council, which the Congress created in 1980 to coordinate power planning and fish and wildlife protection in the Pacific Northwest, among other things.
As required by law, all PMAs give preference in the sale of power to public power customers—customer-owned cooperatives, public utility and irrigation districts, and municipally owned utilities. Public power customers purchased about 63 percent of the power sold by the PMAs in fiscal year 1993. The remainder of the power is purchased by state and federal agencies and nonpreference customers, such as investor-owned utilities and industrial companies. Figure 2 shows the percentage of power sold by all the PMAs to each type of PMA customer during fiscal year 1993 in megawatt (MW) hours (MWh). (Table I.3 of app. I shows the quantity of power sold and associated revenues for all PMAs for each type of customer during fiscal year 1993.) As shown in figure 3, public power customers as a whole are not dependent on the PMAs as their sole source of power. For example, in fiscal year 1993, Bonneville’s public power customers obtained about 46 percent of their overall power needs from sources other than Bonneville, while Southeastern’s public power customers obtained about 95 percent of their total power needs from sources other than Southeastern. At the same time, however, some of the PMAs’ public power customers purchase a large percentage of their power from PMAs. For example, during fiscal year 1993, more than 80 percent of Bonneville’s public power customers obtained more than 75 percent of their total power needs from Bonneville. Table I.4 of appendix I shows the quantity of power purchased by public power customers from PMAs and the total quantity of power obtained by the same customers from all sources during fiscal year 1993. Table I.5 of appendix I shows the number of public power customers for each PMA and the percentage of the customers’ overall power needs that were purchased from the PMA.
The Congress appropriates money each year to the PMAs for power-related purposes and to the federal operating agencies for both power and nonpower purposes. The PMAs, other than Bonneville, generally receive appropriations annually to cover operations and maintenance expenses and capital investments in their transmission assets. In fiscal year 1994, the PMAs received about $328 million in appropriations. The operating agencies receive appropriations for all aspects of the multipurpose hydro projects, including operations, maintenance, and capital expenses related to power and also to other functions, such as irrigation and navigation. The operating agencies expended about $409 million on power-related operating and capital expenses and allocated these expenses to the PMAs for repayment during fiscal year 1994. The PMAs have no control over the amount of generation investment incurred by the operating agencies, which, by law, becomes repayable through rates charged by the PMAs. In 1974, the Congress stopped providing Bonneville with annual appropriations and instead provided it with a revolving fund maintained by the Treasury and permanent Treasury borrowing authority, now limited to $3.75 billion. However, Bonneville remains responsible for repaying its debt stemming from appropriations expended by Bonneville prior to 1974 and debt stemming from appropriations expended by the operating agencies on power-related expenses. Although most of Western’s projects are funded by appropriations, three projects—the Fort Peck Project, which is included in the Pick-Sloan Missouri Basin Program; the Colorado River Storage Project; and the Central Arizona Project—have revolving funds for operational, maintenance, and replacement costs. Western’s Boulder Canyon Project has permanent authority for the same types of costs as well as emergency expenditures.
Nonfederal financing has been obtained for the Parker Dam, the Hoover Power Plant upratings, and the Buffalo Bill Power Plant. Nonfederal financing has been obtained for transmission construction through participation agreements with regional utilities. Figure 4 shows appropriations received by the PMAs and capital investments in generation facilities during the same fiscal year. Table II.1 of appendix II shows this same information for fiscal years 1985-94. Legislation requires the PMAs to set their power rates at the lowest possible level consistent with sound business principles. The PMAs do not set their rates to earn a profit. Instead, they attempt to generate revenues sufficient to recover all costs incurred as a result of producing, marketing, and transmitting electric power, including repayment of the federal investment and other debt with interest. DOE requires each PMA to annually prepare a repayment study to test the adequacy of its rates and to show, among other things, estimated revenues and expenses, estimated payments on the federal investment, and the total amount of federal investment to be repaid. The gross repayable investment assigned to be repaid by power revenues totaled nearly $34 billion, as of September 30, 1994. This amount includes $2.4 billion stemming from costs related to irrigation that Bonneville and Western must repay. PMAs had repaid about $11 billion (32 percent) of the gross repayable amount, leaving more than $23 billion of outstanding debt, as of September 30, 1994. Figure 5 shows the gross repayable investment, the amount repaid, and the outstanding repayable investment (debt) for each PMA, as of September 30, 1994. Table II.2 in appendix II shows this information for fiscal years 1985-94. The federal dams from which the PMAs sell electricity also serve a variety of nonpower purposes, including flood control, irrigation, navigation, and recreation.
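The repayment figures in the paragraph above can be verified with simple arithmetic; a quick sketch, with the dollar amounts (in billions) taken from the text:

```python
# Repayment status of the power-related federal investment as of
# September 30, 1994; dollar figures (in billions) are from the text.
gross_repayable = 34.0   # "nearly $34 billion" assigned to power revenues
repaid = 11.0            # "about $11 billion" repaid to date

outstanding = gross_repayable - repaid
pct_repaid = 100 * repaid / gross_repayable

print(f"{pct_repaid:.0f} percent repaid; ${outstanding:.0f} billion outstanding")
```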
The PMAs seek to balance the concerns of the authorized competing uses of the projects in scheduling and delivering power to their customers. In addition to the $34 billion invested in generation and transmission facilities, another $9.5 billion in appropriations has been expended to date by the operating agencies for these nonpower purposes. Unlike the appropriations used for power generation and transmission, appropriations expended for nonpower purposes are not repaid through power-related revenues. Figure 6 shows the percentage of appropriations expended by the PMAs and the operating agencies for both power and nonpower purposes, as of September 30, 1994. Figure II.1 and table II.6 of appendix II show appropriations expended by the PMAs and the operating agencies. The PMAs generated about $3.2 billion in power-related revenues in fiscal year 1994. In accordance with legislation, the PMAs deposit their annual revenues in the Treasury. These receipts are generally applied to expenses in the following order: (1) operations and maintenance expenses, (2) purchased and exchanged power costs, (3) transmission service fees, (4) interest expense, and (5) any debt service on Treasury bonds (Bonneville only). Any remaining revenues are applied first to any balance due on unpaid or deferred annual expenses and then toward repayment of the federal investment. DOE requires the PMAs to pay their highest interest-bearing debt first whenever possible, consistent with applicable law. The financial characteristics of the PMAs, in many respects, are a reflection of the various statutes and DOE policies and procedures that govern their operations. For example, except for Bonneville, the PMAs, as described earlier, receive appropriations annually to cover their operating and maintenance expenses and to finance capital investments. These financing methods differ from those used by investor-owned utilities. 
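The application order described above amounts to a simple waterfall, with any remainder retiring federal-investment principal, highest interest rate first. A minimal sketch under that reading; the function, expense amounts, and debt tranches are hypothetical, not DOE's actual procedure:

```python
def apply_revenues(revenue, expenses, debts):
    """Apply annual revenues in the order the report describes:
    (1) O&M, (2) purchased/exchanged power, (3) transmission fees,
    (4) interest, (5) bond debt service; any remainder retires
    principal on the federal investment, highest rate first."""
    for name in ("o_and_m", "purchased_power", "transmission_fees",
                 "interest", "bond_debt_service"):
        paid = min(revenue, expenses.get(name, 0.0))
        revenue -= paid
    # Remaining revenue retires principal, highest interest rate first.
    for debt in sorted(debts, key=lambda d: d["rate"], reverse=True):
        payment = min(revenue, debt["balance"])
        debt["balance"] -= payment
        revenue -= payment
    return debts

# Hypothetical figures, $ millions:
debts = [{"name": "tranche A", "rate": 0.046, "balance": 500.0},
         {"name": "tranche B", "rate": 0.027, "balance": 800.0}]
expenses = {"o_and_m": 120.0, "purchased_power": 60.0,
            "transmission_fees": 10.0, "interest": 40.0}
apply_revenues(300.0, expenses, debts)
# Revenue left after expenses (300 - 230 = 70) retires the 4.6% tranche first.
```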
Such utilities generally pay for their operating expenses from operating revenue and finance capital investments by (1) issuing debt, (2) selling common or preferred stock, or (3) using cash generated from operations. In addition, the PMAs’ weighted average interest rates on their outstanding debt to the Treasury ranged from 2.7 to 4.6 percent in fiscal year 1994. This compares with an average interest rate of 8.1 percent on outstanding long-term debt for the nation’s 179 largest investor-owned utilities in 1993, according to a DOE report. These utilities accounted for more than 97 percent of all revenues earned by investor-owned utilities in 1993. For comparison with the average cost of the Treasury's own debt, the Treasury’s weighted average interest rate on the outstanding marketable interest-bearing public debt was 6.9 percent as of July 31, 1995. As shown in figures 7 and 8, the PMAs’ financing methods and terms of repayment have led to a high amount of outstanding debt in comparison to total investment. These figures present two financial ratios that highlight the amount of debt that the PMAs have outstanding. The first ratio—debt to gross property, plant, and equipment—shows the outstanding portion of the PMAs’ debt as a percentage of the total amount invested in these facilities. The second ratio—debt service to revenue—shows the amount of annual revenues used to pay principal and interest on outstanding debt (debt service) as a percentage of total revenues. Table II.5 of appendix II shows this information for each PMA during the period 1985-94. Because the PMAs’ debt is at low interest rates, four of the five PMAs have been able to carry high levels of debt without a corresponding increase in financial risk. However, as explained in the following discussion on competitive issues, high levels of debt currently pose problems for Bonneville and could pose problems for other PMAs in a more competitive environment. 
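The two ratios reduce to one-line computations. A sketch using hypothetical PMA figures (the appendix defines the ratios as outstanding repayable investment divided by gross property, plant, and equipment, and principal repayments plus net interest divided by operating revenues):

```python
def debt_to_gross_ppe(outstanding_debt, gross_ppe):
    # Outstanding repayable investment (debt) as a share of gross
    # property, plant, and equipment.
    return outstanding_debt / gross_ppe

def debt_service_to_revenue(principal, net_interest, operating_revenue):
    # Annual principal repayments plus net interest expense as a
    # share of operating revenues.
    return (principal + net_interest) / operating_revenue

# Hypothetical figures, $ millions:
print(f"{debt_to_gross_ppe(680.0, 1000.0):.0%}")            # 68%
print(f"{debt_service_to_revenue(30.0, 45.0, 250.0):.0%}")  # 30%
```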
PMAs have been and generally remain among the sellers of wholesale electric power at the lowest cost. Their ability to operate as low-cost sellers stems from several factors, including the inherent low cost of hydropower relative to other generating sources, federal financing at relatively low interest rates, flexibility in the repayment of principal on the Treasury portion of the PMAs’ debt, the PMAs’ tax exempt status, and operating budgets that seek to break even rather than earn a profit or a return on investment. Partly because of these factors, the average revenue earned per unit of wholesale power sold by the PMAs is low in comparison to the national average for wholesale power sold by all utilities. The average revenue per kilowatt hour (kWh) sold by each PMA ranged from 1.2 to 2.5 cents in 1993. This was less than the national average for wholesale power in 1993, which, according to DOE’s Energy Information Administration, ranged from 3.3 to 4.1 cents, depending on the type of electric utility. The overall average was 3.6 cents. Figure 9 shows the average revenue earned per kilowatt hour of wholesale power sold for each PMA compared with the national average for wholesale power during fiscal year 1993. Table III.1 of appendix III shows (1) the total kilowatt hours of wholesale power sold and the associated power revenues by each PMA and (2) the nationwide total of kilowatt hours of wholesale power sold and associated revenues earned during fiscal year 1993. According to PMA officials, as of June 1995, with the exception of Bonneville, each PMA had rates that remained the lowest in its service area. These PMAs have experienced no major problems in terms of customers’ switching to other suppliers or having to negotiate new rates because of competition from other suppliers. On the other hand, Bonneville has experienced financial difficulty attributable to many factors including investments in nuclear plants. 
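The per-unit comparison above is straightforward to reproduce: average revenue per kilowatt hour is total power revenues divided by kilowatt hours sold, expressed in cents. A sketch with hypothetical figures, not any actual PMA's data:

```python
def avg_revenue_cents_per_kwh(revenue_dollars, kwh_sold):
    # Total power revenues divided by kilowatt hours sold,
    # converted from dollars to cents.
    return revenue_dollars / kwh_sold * 100

# e.g., a hypothetical $600 million earned on 40 billion kWh sold:
print(f"{avg_revenue_cents_per_kwh(600e6, 40e9):.1f} cents per kWh")  # 1.5
```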
These difficulties coincide with other suppliers in Bonneville’s service area offering electric power service at rates at or below the rate at which Bonneville sells much of its power. Several of Bonneville’s customers have recently signed contracts with other suppliers, and other customers have indicated their willingness to negotiate with other suppliers. As noted in our 1994 report, Bonneville’s high debt and associated fixed costs and low financial reserves provide it with little flexibility to respond to any further operating losses, increasing the possibility that Bonneville would be unable to make its annual Treasury payment. The circumstances in which Bonneville finds itself are part of a larger trend in the wholesale segment of the electric power industry. This segment of the market has grown increasingly competitive in recent years, in part, because of industry changes stemming from the Energy Policy Act of 1992, which allows easier access to power generation markets and promotes greater use of electric transmission lines. As part of the trend toward competition, the Federal Energy Regulatory Commission expects nationwide wholesale rates to decline. Because the PMAs sell most of their power at wholesale, they could be directly affected by this trend. A PMA’s financial condition will play a role in determining whether it can compete with other suppliers. As mentioned earlier, a high debt-to-gross property, plant, and equipment ratio and a high debt service-to-revenue ratio could limit the flexibility of a PMA to match the rates of a competitor while still meeting financial obligations, including repayment of the federal investment. A PMA with rates below other suppliers’ rates in its service area has some flexibility to increase rates if necessary to meet its financial obligations. 
Conversely, a PMA with rates at or above the level offered by other suppliers in its service area, combined with a high level of debt, would have limited flexibility in reducing rates. We provided copies of a draft of this report to DOE for its review and comment. We received comments from DOE’s Bonneville Power Administration and DOE’s Power Marketing Liaison Office, which is responsible for the other four PMAs, and have included their comments and our response in this report as appendixes IV and V, respectively. Bonneville stated that our report was factually correct and fairly reflected its competitive situation. For the other four PMAs, DOE commented that our report implied that the PMAs were inefficient and used inappropriate operating techniques that could leave the PMAs in a precarious position in the future. We did not evaluate the efficiency of the PMAs’ operations and have drawn no such conclusion. DOE also commented that it is inappropriate to compare investor-owned utilities’ method of operating at a profit with that of the PMAs. Our report does not make this comparison. Rather, it compares the PMAs’ average interest rates and the PMAs’ method of financing capital investments with those of investor-owned utilities. DOE suggested that we include several additional facts in our report that it believed should help explain more fully how the PMAs operate. We have expanded certain descriptive data to include facts suggested by DOE. DOE also provided technical corrections and clarifications that we incorporated where appropriate. To develop the financial information presented in this report, we interviewed officials of each PMA and reviewed data from the five PMAs’ annual reports and financial statements for fiscal years 1985 through 1994. To develop certain financial indicators, we used applicable repayment studies and financial statements. 
As appropriate, we interviewed officials of each PMA and used the PMAs’ data to develop operating information on the PMAs and to discuss competitive issues. We did not independently verify the accuracy of the PMAs’ data. In developing operating information, we also used available data from sources such as the Congressional Research Service, DOE, the Federal Energy Regulatory Commission, and the National Academy of Public Administration. We used data from the Energy Information Administration to develop information on the extent to which the PMAs’ public power customers purchase PMA-provided power. We performed our review from April through September 1995, in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter. At that time, we will send copies to appropriate congressional committees, federal agencies, and other interested parties. We will also make copies available to others on request. This report was prepared under the direction of Victor S. Rezendes, Director of Energy and Science Issues in the Resources, Community, and Economic Development Division, who may be reached at (202) 512-3841, and Lisa Jacobson, Director of Civil Audits in the Accounting and Information Management Division, who may be reached at (202) 512-9508, if you have any questions. Major contributors to this report are listed in appendix VI. 
[Table: generator nameplate capacity (MW) and fiscal year of initial operation, by project.] APA = Alaska Power Administration; IBWC = International Boundary and Water Commission; MW = megawatt; PMA = power marketing administration; PWUA = Provo River Water Users’ Association; SRP = Salt River Project; WPPSS = Washington Public Power Supply System. Bonneville acquired all or part of the generating capability of three nuclear power plants owned by WPPSS. One plant is in commercial operation, and two have been terminated. We do not include Southeastern’s 300-kilowatt Stonewall Jackson Project, which was energized in 1994; no power bills were issued for this project in fiscal year 1994. Power plant was not yet in commercial operation.
[Table: generator nameplate capacity (MW), transmission lines (miles), and annual federal employment (full-time equivalents), by PMA.] APA = Alaska Power Administration; BPA = Bonneville Power Administration; kWh = kilowatt hour; MWh = megawatt hour; PMA = power marketing administration; SEPA = Southeastern Power Administration; SWPA = Southwestern Power Administration; WAPA = Western Area Power Administration. Interdepartmental sales are sales to the facilities that play an integral role in the operation of WAPA’s projects. Project use is mainly sales of electricity necessary to pump water at federal irrigation projects. By law, BPA first serves customers located in the Pacific Northwest (legislatively defined as Oregon, Washington, and portions of Montana, Nevada, Utah, and Wyoming). BPA sells electricity that is surplus to the needs of the Pacific Northwest to customers outside the region, mainly those located in California. 
In 1993, these customers included public- and investor-owned utilities and one federal agency. All of SEPA’s sales to federal agencies in 1993 were to the Tennessee Valley Authority.
[Table: public power customer purchases from each PMA (MWh) and total power obtained from all sources (MWh).] APA = Alaska Power Administration; BPA = Bonneville Power Administration; EIA = Energy Information Administration; PMA = power marketing administration; SEPA = Southeastern Power Administration; SWPA = Southwestern Power Administration; WAPA = Western Area Power Administration. Eleven public power customers received power from two PMAs; these customers are included in the total number of customers for each PMA from which they purchased power.
[Figure: appropriations expended by purpose, including Other, 0.1 percent; Recreation, 0.3 percent; Fish and Wildlife, 0.9 percent; and Municipal and Industrial, 1.9 percent.] APA = Alaska Power Administration; BPA = Bonneville Power Administration; DOE = Department of Energy; PMA = power marketing administration; SEPA = Southeastern Power Administration; SWPA = Southwestern Power Administration; WAPA = Western Area Power Administration. Operating agency appropriation amounts are estimates provided by WAPA. 
[Table: dollar amounts by fiscal year; numeric data not recoverable.] APA = Alaska Power Administration; BPA = Bonneville Power Administration; PMA = power marketing administration; SEPA = Southeastern Power Administration; SWPA = Southwestern Power Administration; WAPA = Western Area Power Administration. The total outstanding repayable investment amounts for WAPA do not include deferred expenses. Deferred expenses totaled $238 million, as of September 30, 1994.
[Table: financial data by fiscal year; numeric data not recoverable.] APA = Alaska Power Administration; BPA = Bonneville Power Administration; PMA = power marketing administration; SEPA = Southeastern Power Administration; SWPA = Southwestern Power Administration; WAPA = Western Area Power Administration. Accumulated net revenue (deficit) is as of September 30, 1994. Differences may occur in amounts as stated in the financial statements because of rounding. In fiscal year 1991, APA changed its method of computing depreciation on utility plants from the compound-interest to the straight-line method. The change was applied retroactively to utility plant additions of prior years. The cumulative effect of this change for years prior to 1991 was a decrease in the accumulated net revenue (deficit) of about $16.0 million. 
In fiscal year 1990, SEPA changed its method of computing depreciation on utility plants from the compound-interest to the straight-line method. The change was applied retroactively to utility plant additions of prior years. The cumulative effect of this change for years prior to 1990 was a decrease in accumulated net revenues (deficit) of $138.2 million. The 1989 financial data for SEPA were extracted from the 1989 financial statements in SEPA’s 1990 annual report; those financial statements were restated to reflect the change in the method of computing depreciation. SWPA’s financial data for fiscal years 1985, 1986, 1988, 1989, and 1990 were extracted from the restated financial statements in SWPA’s annual reports. In fiscal year 1990, SWPA changed its method for calculating depreciation on utility plants from the compound-interest to the straight-line method. The change was applied retroactively to utility plant additions of prior years. The cumulative effect of this change for years prior to 1990 was a decrease in accumulated net revenue (deficit) of about $114.4 million. Because of prior year adjustments or revenue transfers, the accumulated net revenues (deficit) for certain years may not equal the prior year’s balance in this account plus current year net revenue (deficit). WAPA’s financial data for fiscal year 1993 were extracted from the restated financial statements in WAPA’s 1994 annual report. In fiscal year 1993, WAPA changed its method of accounting for depreciation of utility plant assets from the compound-interest method to the straight-line method. The cumulative effect of this change for years prior to 1993 was a decrease in accumulated net revenues (deficit) of $1.054 billion. 
[Table: ratios of debt to gross PP&E and of debt service to revenue, by PMA.] APA = Alaska Power Administration; BPA = Bonneville Power Administration; PMA = power marketing administration; PP&E = property, plant, and equipment; SEPA = Southeastern Power Administration; SWPA = Southwestern Power Administration; WAPA = Western Area Power Administration. The ratio of debt to gross property, plant, and equipment was calculated by dividing outstanding repayable investment (debt) by gross property, plant, and equipment. The ratio of debt service to revenue was calculated by dividing principal debt repayments plus net interest expense by operating revenues.
[Table: average revenue, in cents per kilowatt hour sold, by PMA.] APA = Alaska Power Administration; BPA = Bonneville Power Administration; PMA = power marketing administration; SEPA = Southeastern Power Administration; SWPA = Southwestern Power Administration; WAPA = Western Area Power Administration.
The following are GAO’s comments on the Power Marketing Liaison Office’s letter dated September 15, 1995. 1. The Power Marketing Liaison Office stated that our report implies that the PMAs generally use inefficient and inappropriate operating techniques that could leave them in a precarious position in the future. We disagree. Our report notes that the PMAs’ financial characteristics reflect the various statutes and DOE policies and procedures that govern their operations. Our report also points out that, with the exception of Alaska, the PMAs do not own or operate the hydropower facilities from which they sell power, nor do they have control over the amount of investment incurred by the agencies that operate and maintain the facilities. We did not attempt to assess the efficiency or appropriateness of the current operating techniques used by the PMAs or the operating agencies. 2. The Liaison Office stated that it is inappropriate to use investor-owned utilities’ methodology of operating for a profit as the only standard by which to judge the PMAs’ operations. 
Our report did not compare the fact that investor-owned utilities use a profit-based methodology with the fact that the PMAs are not allowed to earn a profit. We compared investor-owned utilities with the PMAs in two cases, both of which we believe are appropriate. First, concerning the manner in which the hydropower facilities and transmission assets were financed, we compared the PMAs’ cost of borrowing from the Treasury with investor-owned utilities’ cost of borrowing from private markets. We believe that this comparison allows the reader to independently assess the relative borrowing costs and potential financial advantages of PMAs versus private sector operations. Second, we explain that most capital investments in federal hydropower and transmission facilities are made through appropriations, which are essentially debt because they must be repaid through power revenues. We compare the PMAs’ method of financing with that of investor-owned utilities, which can issue common or preferred stock in addition to debt. Because the PMAs cannot issue stock, it is reasonable to expect that they would have higher levels of debt than investor-owned utilities. We do not assess the levels of the PMAs’ debt in comparison to investor-owned utilities but rather in terms of competitive pressures and how the PMAs’ debt may affect their competitive situation. 3. The Liaison Office suggested several items that should be recognized in the report in order to avoid incorrect conclusions stemming from our comparison of PMAs with investor-owned utilities. 
The Liaison Office suggested that (1) the Congress never intended the PMAs to make a profit, (2) the PMAs have lower operating costs because their facilities were constructed at a time when construction costs were lower and the facilities have no fuel costs, (3) the PMAs’ high debt ratio results from the capital-intensive start-up costs associated with hydropower facilities and the longer service lives of these facilities and resultant longer repayment periods, and (4) the PMAs’ revenues can vary from year to year depending on water flow, and thus comparisons to nonhydro-based systems, such as those of investor-owned utilities, are misleading. First, our report acknowledges that the PMAs do not set their rates to earn a profit. Rather, they attempt to generate power revenues sufficient to cover all capital and operating costs. Second, although our report lists several reasons why the PMAs remain among the sellers of power at the lowest cost, our list was not intended to be exhaustive. Our intent was to inform the reader that, for many reasons, the PMAs have been and generally remain among the sellers of power at the lowest cost. In addition, our report notes the inherent low cost of hydropower relative to other generating sources. Third, we do not compare the PMAs’ high levels of debt with the debt of investor-owned utilities. Instead, we explain how the PMAs’ debt, which is a fixed cost, may constrain the PMAs from adjusting to the increasingly competitive wholesale power markets in which they operate. Fourth, we do not compare any particular year’s revenues or generation of any of the PMAs with a nonhydro-based system of an investor-owned utility. Instead, our report notes that the PMAs’ revenues can vary depending on conditions, such as water flow, which may affect the amount of power that a PMA can sell. 4. The Liaison Office commented that the PMAs operate efficiently within congressional guidelines. 
The Liaison Office supported this comment by suggesting that the PMAs (1) normally return more funds to the Treasury than the annual congressional appropriations provided for the operating costs of the PMAs and the power-related costs of the operating agencies and (2) seek to balance the concerns of authorized competing uses of the projects in scheduling and delivering power to their customers. First, while the PMAs may normally return more funds to the Treasury than they receive each year in annual appropriations, the repayment does not cover the Treasury’s interest expense associated with the PMAs’ debt. Second, our report notes that in addition to the $34 billion invested in power-related capital investments, more than $9.5 billion has been expended by the operating agencies for nonpower-related purposes, such as flood control, irrigation, and navigation. We revised our report to note that the PMAs must recognize and balance the concerns of these competing uses against the needs of their power customers. 5. We agree with DOE that because Bonneville accounts for the majority of the PMAs’ sales and revenues, its data tend to overshadow the other PMAs’ and may lead to inappropriate conclusions about all of the PMAs when the conclusions only apply to Bonneville. We have limited our presentation to factual material only. Our discussion of Bonneville’s competitive situation was not meant as a reflection on the other PMAs but instead was intended to show what can happen when a PMA with high fixed costs faces a competitive environment. Our report explains that as of the date of our report, the other PMAs were the low-cost sellers of power in their areas. 6. The Liaison Office commented that our report should reinforce the fact that the PMAs have no control over the amount of appropriations expended by the operating agencies for power generation equipment. We agree and have revised our report accordingly. 7. 
We agree with the Liaison Office that the two financial ratios we cite in our report (debt to gross property, plant, and equipment and debt service to revenue) should not be used alone to accurately assess the PMAs’ financial condition. We use these ratios only as indicators of the PMAs’ financial condition. However, for Bonneville, which now faces significant competition, the high debt service ratio is a critical indicator of its financial condition. Bonneville’s high debt and resultant fixed costs leave it with little flexibility to respond to competitive challenges. The substantial debt of the other PMAs is not currently a problem because they remain the sellers of power at the lowest cost in their service areas. However, competition is expected to result in a general decline in wholesale rates and, if they do not remain low-cost sellers, other PMAs could face a situation similar to Bonneville’s. We agree with the Liaison Office that the PMAs’ debt is at lower interest rates than those available today and that this has allowed PMAs to carry higher debt ratios without a corresponding increase in financial risk. However, as stated above, increased competition in wholesale power markets is a relatively new development and could pose serious challenges for each of the PMAs. 8. The scope of our review did not include an assessment of the quality of the power equipment employed by the PMAs. American Public Power Association. Selected Financial and Operating Ratios of Public Power Systems, 1992. Washington, D.C.: Mar. 1993. Audit of Bonneville Power Administration’s Management of Its Fish Recovery Projects. U.S. Department of Energy, Office of Inspector General. DOE/IG-0357. Washington, D.C.: Sept. 14, 1994. Bonneville Power Administration Business Plan 1995. U.S. Department of Energy, Bonneville Power Administration. DOE/BP-2664. Aug. 1995. The Bonneville Power Administration: To Sell or Not to Sell. U.S. Congressional Research Service. Washington, D.C.: Sept. 1986. 
BPA at a Crossroads. U.S. House of Representatives, BPA Task Force, Committee on Natural Resources. Washington, D.C.: May 1994. The Columbia River System: The Inside Story. U.S. Department of Energy, Bonneville Power Administration. DOE/BP-1689. Sept. 1991. Electric Trade in the United States, 1992. U.S. Department of Energy, Energy Information Administration. DOE/EIA-0531(92). Washington, D.C.: Sept. 12, 1994. Federal Energy Subsidies: Direct and Indirect Interventions in Energy Markets. U.S. Department of Energy, Energy Information Administration. SR/EMEU/92-02. Washington, D.C.: Nov. 1992. Financial Statistics of Major U.S. Investor-Owned Electric Utilities, 1993. U.S. Department of Energy, Energy Information Administration. DOE/EIA-0437(93)/1. Washington, D.C.: Jan. 1995. Financial Statistics of Major U.S. Publicly Owned Electric Utilities, 1993. U.S. Department of Energy, Energy Information Administration. DOE/EIA-0437(93)/2. Washington, D.C.: Feb. 1995. Fitch Research. “Fitch Competitive Indicator.” New York: Fitch Investors Service, L.P., Jan. 30, 1995. Garrison, K. and D. Marcus. “Changing the Current: Affordable Strategies for Salmon Restoration in the Columbia River Basin.” New York: Natural Resources Defense Council. Dec. 1994. Hearing: Review of the Proposed Sale of the Power Marketing Administrations (held on May 7, 1986). U.S. House of Representatives, Committee on Government Operations. Washington, D.C.: U.S. Government Printing Office, 1986. Hydroelectric Power Resources of the United States, Developed and Undeveloped, January 1, 1992. Federal Energy Regulatory Commission. Washington, D.C.: Jan. 1992. The Inspection of Power Purchase Contracts at the Western Area Power Administration. U.S. Department of Energy, Office of Inspector General. DOE/IG-0372. Washington, D.C.: May 9, 1995. Oversight Hearing: BPA Proposed Fiscal Year 1994 Budget (held in Washington, D.C., Apr. 28, 1993). Part I. U.S. 
House of Representatives, Task Force on Bonneville Power Administration, Committee on Natural Resources. Washington, D.C.: U.S. Government Printing Office, 1993. Oversight Hearing: BPA Electric Power Resources Acquisition (July 12, 1993). Part II. U.S. House of Representatives, Task Force on Bonneville Power Administration, Committee on Natural Resources. Oversight Hearing: BPA Columbia River Salmon Restoration (held in Boise, Idaho, Sept. 24, 1993). Part III. U.S. House of Representatives, Task Force on Bonneville Power Administration, Committee on Natural Resources. Washington, D.C.: U.S. Government Printing Office, 1994. Oversight Hearing: BPA Competitiveness (held in Eugene, Oregon, Sept. 25, 1993). Part IV. U.S. House of Representatives, Task Force on Bonneville Power Administration, Committee on Natural Resources. Washington, D.C.: U.S. Government Printing Office, 1994. Oversight Hearing: BPA Proposals (held in Washington, D.C., Oct. 28, 1993). Part V. U.S. House of Representatives, Task Force on Bonneville Power Administration, Committee on Natural Resources. Washington, D.C.: U.S. Government Printing Office, 1994. Power Marketing Administrations: A Time for Change? U.S. Congressional Research Service. Washington, D.C.: Mar. 7, 1995. President’s Private Sector Survey on Cost Control. Task Force of the President’s Private Sector Survey on Cost Control. Washington, D.C.: Aug. 31, 1983. Reinventing the Bonneville Power Administration. National Academy of Public Administration. Washington, D.C.: Dec. 1993. Scotto, D. and B. Chapman. “Electric Utilities Outlook: 1995 and Beyond.” New York: Bear Stearns, Jan. 1995. A Study of Power Marketing Administration Selected Financial Management Practices. U.S. Department of Energy. Washington, D.C.: Oct. 1988. A Study of Power Marketing Administration Selected Financial Management Practices, Appendices. U.S. Department of Energy. Washington, D.C.: Oct. 1988. 
Subsidies and Unfair Competitive Advantages Available to Publicly-Owned and Cooperative Utilities. Putnam, Hayes & Bartlett, Inc. Washington, D.C.: Sept. 1994. Bonneville Power Administration’s Power Sales and Exchanges (GAO/RCED-95-257R, Sept. 19, 1995). Bonneville Power Administration: Borrowing Practices and Financial Condition (GAO/AIMD-94-67BR, Apr. 19, 1994). Bonneville Power Administration: GAO Products Issued Since the Enactment of the 1980 Pacific Northwest Power Act (GAO/RCED-93-133R, Mar. 31, 1993). Federal Electric Power: Views on the Sale of the Alaska Power Administration Hydropower Assets (GAO/RCED-90-93, Feb. 22, 1990). Policies Governing Bonneville Power Administration’s Repayment of Federal Investment Still Need Revision (GAO/RCED-84-25, Oct. 26, 1983).
Pursuant to a congressional request, GAO provided information on the Department of Energy's five power marketing administrations (PMA), focusing on operating, financial, and competitive issues facing PMA. GAO found that: (1) five PMA were established between 1937 and 1977 to sell and transmit electricity generated mostly from federal hydropower facilities; (2) PMA power accounted for about 3 percent of the power generated nationally in 1993; (3) each PMA owns and operates about 16,000 miles of transmission lines, serves customers in up to 15 states, and is headed by an appointed administrator who is authorized to make PMA operation decisions; (4) all PMA are required to give preferences in power sales to public power customers, but these customers are not dependent on PMA as their sole source of power; (5) although PMA operations and maintenance expenses and capital investments are covered by congressional appropriations, PMA are required to repay their transmission asset appropriations; (6) PMA are required to set their power rates to generate only enough revenue to recover costs; (7) PMA generated about $3.2 billion in power-related revenues in fiscal year 1994, but gross repayable investments totaled $34 billion as of September 1994; (8) as of September 1994, $23 billion of PMA cumulative debt was outstanding; (9) PMA are required to repay their debt and interest using revenues generated from power sales; and (10) although most PMA have been able to carry high levels of debt without an increase in financial risk, high levels of PMA debt could pose problems for PMA in a more competitive marketplace.
Influenza is more severe than some other viral respiratory infections, such as the common cold. Most people who contract influenza recover completely in 1 to 2 weeks, but some develop serious and potentially life-threatening medical complications, such as pneumonia. People aged 65 and older, people of any age with chronic medical conditions, children younger than 2 years, and pregnant women are generally more likely than others to develop severe complications from influenza. Vaccination is the primary method for preventing influenza and its more severe complications. Produced in a complex process that involves growing viruses in millions of fertilized chicken eggs, influenza vaccine is administered annually to provide protection against particular influenza strains expected to be prevalent that year. Experience has shown that vaccine production generally takes 6 or more months after a virus strain has been identified; vaccines for certain influenza strains have been difficult to mass-produce. After vaccination, it takes about 2 weeks for the body to produce the antibodies that protect against infection. According to CDC recommendations, the optimal time for vaccination is October through November, because the annual influenza season typically does not peak until January or February. Thus, in most years vaccination in December or later can still be beneficial. At present, two vaccine types are recommended for protection against influenza in the United States: an inactivated virus vaccine injected into muscle and a live virus vaccine administered as a nasal spray. The injectable vaccine—which represents the large majority of influenza vaccine administered in this country—can be used to immunize healthy individuals and those at highest risk for complications, including those with chronic illness and those aged 65 and older, but the nasal spray vaccine is currently approved for use only among healthy individuals aged 5 to 49 years who are not pregnant. 
Vaccine manufacture and purchase take place largely within the private sector: for the 2004–05 influenza season, two companies (one producing the injectable vaccine and one producing the nasal spray) manufactured vaccine for the U.S. market. Although vaccination is the primary strategy for protecting individuals who are at greatest risk of serious complications and death from influenza, antiviral drugs can also contribute to the treatment and prevention of influenza. Four antiviral drugs have been approved for treatment. If taken within 2 days after symptoms begin, these drugs can reduce symptoms and make someone with influenza less contagious to others. Three of the four antiviral drugs are also approved for prevention; according to CDC, they are about 70 to 90 percent effective for preventing illness in healthy adults. HHS has primary responsibility for coordinating the nation’s response to public health emergencies. As part of its mission, the department has a role in the planning needed to prepare for and respond to an influenza pandemic. One action the department has taken is to develop a draft national pandemic influenza plan, titled Pandemic Influenza Preparedness and Response Plan, which was released in August 2004 for a 60-day comment period. Within HHS, CDC is the principal agency for protecting the nation’s health and safety. CDC’s activities include efforts to prevent and control diseases and to respond to public health emergencies. CDC and its Advisory Committee on Immunization Practices (ACIP) recommend which population groups should be targeted for vaccination each year and, when vaccine supply allows, recommend that any person who wishes to decrease his or her risk of influenza-like illness be vaccinated. FDA, another HHS agency, also plays a role in preparing for the annual influenza season and for a potential pandemic. FDA is responsible for ensuring that new vaccines and drugs are safe and effective. 
The agency also regulates and licenses vaccines and antiviral agents. HHS has limited authority to control vaccine production and distribution directly; influenza vaccine supply and marketing are largely in the hands of the private sector. Although the Public Health Service Act authorizes the Secretary of HHS to “take such action as may be appropriate” to respond to a public health emergency, as determined and declared by the Secretary, it is not clear whether or to what extent the Secretary could directly influence the manufacture or distribution of influenza vaccine to respond to an influenza pandemic. The appropriateness of the Secretary’s response would depend on the nature of the public health emergency, for example, on the available evidence relating to a pandemic. According to a senior HHS official involved in HHS emergency preparedness activities, manufacturers of vaccine for the U.S. market have agreed in principle to switch to production of pandemic influenza vaccine should the need arise and proper compensation and indemnification be provided; therefore, he said, it would probably be unnecessary for the federal government to nationalize vaccine production, although the federal government has the legal authority to do so if circumstances warrant it. For the 2004–05 influenza season, CDC estimated as late as September 2004 that about 100 million doses of vaccine would be available for the U.S. market. CDC and ACIP recommended vaccination for about 185 million people, including roughly 85 million people at high risk for complications. On October 5, 2004, however, one manufacturer announced that it could not provide its expected production of 46–48 million doses—roughly half of the expected U.S. vaccine supply. 
Because a large proportion of vaccine produced by the other major manufacturer of injectable vaccine had already been shipped before October 5, 2004, about 25 million doses of injectable vaccine for high-risk individuals and others, and about 1 million doses of the nasal spray vaccine for healthy people, were available after the announcement to be distributed to Americans who wanted an influenza vaccination. Preparing for and responding to an influenza pandemic differ in several respects from preparing for and responding to a typical influenza season. For example, past influenza pandemics have affected healthy young adults who are not typically at high risk for complications associated with influenza, and a pandemic could result in an overwhelming burden of ill persons requiring hospitalization or outpatient medical care. In addition, the demand for vaccine may be greater in a pandemic. Challenges remain in planning for purchase and distribution of vaccine and defining priority groups in the event of a pandemic. HHS has not finalized planning for an influenza pandemic, leaving unanswered questions about the nation’s ability to prepare for and respond to such an outbreak. For the past 5 years, we have been urging HHS to complete its pandemic influenza plan. The document remains in draft form, although federal officials said in June 2005 that an update of the plan is being completed and is expected to be available in summer 2005. Key questions about the federal role in purchasing and distributing vaccines during a pandemic remain, and clear guidance on potential groups that would likely have priority for vaccination is lacking in the current draft plan. One challenge is that the draft pandemic plan does not establish the actions the federal government would take to purchase or distribute vaccine during an influenza pandemic. 
Rather, it describes options for vaccine purchase and distribution, which include public-sector purchase of all pandemic influenza vaccine; a mixed public-private system where public-sector supply may be targeted to specific priority groups; and maintenance of the current largely private system. The draft plan does not specifically recommend any of these options. According to the draft plan, the federal government’s role may change over the course of a pandemic, with greater federal involvement early, when vaccine is in short supply. Noting that several uncertainties make planning vaccination strategies difficult, the draft plan states that national, state, and local planning needs to address possible contingencies, so that appropriate strategies are in place for whichever situation arises. If public-sector vaccine purchase is to be an option, the funding sources, authority, and processes needed to purchase vaccine quickly may have to be established in advance. During the 2004–05 shortage, some state health officials reported problems with states’ ability, with regard to both funding and the administrative process, to purchase influenza vaccine. For example, during the effort to redistribute vaccine to locations of greatest need, the state of Minnesota tried to sell its available vaccine to other states seeking additional vaccine for their high-risk populations. According to federal and state health officials, however, certain states lacked the funding or authority under state law to purchase the vaccine when Minnesota offered it. In response to problems encountered during the 2004–05 shortage, the Association of Immunization Managers proposed in 2005 that federal funds be set aside for emergency purchase of vaccine by public health agencies and that cost not be a barrier in acquiring vaccine to distribute to the public. 
Although an influenza pandemic may differ from an annual influenza season, experience during the 2004–05 shortage illustrates the importance of having a distribution plan in place ahead of time to prevent delays when timing is critical: Collaborating with stakeholders to create a workable distribution plan is time consuming. After the October 5, 2004, announcement of the sharp reduction in influenza vaccine supply, CDC began working with the sole remaining manufacturer of injectable vaccine on plans to distribute this manufacturer’s remaining supply to providers across the country. The plan had two phases and benefited from voluntary compliance by the manufacturer to share proprietary information to help identify geographic areas of greatest need for vaccine. The first phase, which began in October 2004, filled or partially filled orders from certain provider types, including state and local public health departments and long-term care facilities. The second phase, which began in November 2004, used a formula to apportion the remaining doses across the states according to each state’s estimated percentage of the national unmet need. States could then allocate doses from their apportionment to providers and facilities, which would purchase the vaccine through a participating distributor. The state ordering process under the second phase continued through mid-January. Health officials in several states commented on the late availability of this vaccine; officials in one state, for example, remarked that the phase two vaccine was “too much, too late.” Identifying priority groups in local populations also takes time. Federal, state, and local officials need to have information on the population of the priority groups and the locations where they can be vaccinated to know how, where, and to whom to distribute vaccine in the event of an influenza pandemic. 
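The phase-two apportionment described above (allocating the remaining doses in proportion to each state's estimated share of the national unmet need) amounts to a simple pro-rata calculation. A minimal sketch in Python; the state names and unmet-need figures are hypothetical, and the actual formula used may have included adjustments not modeled here.

```python
def apportion_doses(remaining_doses, unmet_need_by_state):
    """Allocate doses pro rata: each state's share of total unmet need
    determines its share of the remaining doses."""
    total_unmet = sum(unmet_need_by_state.values())
    return {
        state: round(remaining_doses * need / total_unmet)
        for state, need in unmet_need_by_state.items()
    }

# Hypothetical unmet-need estimates (unvaccinated people in priority groups)
unmet = {"State A": 400_000, "State B": 250_000, "State C": 350_000}
allocation = apportion_doses(100_000, unmet)
# State A holds 40 percent of the unmet need, so it receives 40,000
# of the 100,000 remaining doses; B receives 25,000 and C 35,000.
```

Note that simple rounding can leave a few doses unallocated (or over-allocated); a real allocation would also need a rule for distributing that remainder.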
During the 2004–05 influenza season, federal officials developed a distribution plan to allocate a limited amount of vaccine, but the states also had to determine how much vaccine was needed and where to distribute it within their own borders. For example, state health officials in Florida did not know exactly how many high-risk individuals needed vaccination, so they surveyed long-term care facilities and private providers to estimate the amount of vaccine needed to cover high-risk populations. It took nearly a month for state officials to compile the results of the surveys, to decide how many doses needed to be distributed to local areas, and to receive and ship vaccine to the counties. Distributing the vaccine to a state or locality is not the same as administering the vaccine to an individual. Once vaccine has been distributed to a state or local agency, individuals living in those areas still need to be vaccinated. Vaccinating a large number of people is challenging, particularly when demand exceeds available supply. For example, during the 2004–05 influenza season, many places giving vaccinations right after the shortage was announced were overwhelmed with individuals wanting to be vaccinated. Certain local public health departments in California, including the Santa Clara County Public Health Department, provided chairs and extra water for people waiting in long lines outdoors in warm weather. Fear of a more virulent pandemic influenza strain could exacerbate such scenarios. A number of states reported that they did not have the capacity to immunize large numbers of people and partnered with other organizations to increase their capacity. 
For example, in 2004–05, according to state health officials in Florida, county health departments, including those in Orange and Broward Counties, worked with a national home health organization to immunize high-risk individuals by holding mass immunization clinics and setting up clinics in providers’ offices to help administer available vaccine quickly. Other locations, including the local health department in Portland, Maine, held lotteries for available vaccine; according to local health officials, however, administrative time was required to arrange and publicize the lottery. HHS’s draft pandemic plan does not define priority groups for vaccination, although the plan states that HHS is developing an initial list of suggested priority groups and soliciting public comment on the list. The draft plan instructs the states to define priority groups for early vaccination and indicates that as information about virus severity becomes available, recommendations will be formulated at the national level. According to the plan, setting priorities will be iterative, tied to vaccine availability and the pandemic’s progression. Without agreed-upon identification of potential priority groups in advance, however, problems can arise. During the 2004–05 season, for example, CDC and ACIP acted quickly on October 5, 2004, to narrow the priority groups for available vaccine, giving the narrowed groups equal importance. In some places, however, there was not enough available vaccine to cover everyone in these narrowed priority groups, so states set their own priorities among these groups. Maine, for example, excluded health care workers from the state’s early priority groups because state officials estimated that there was not enough vaccine to cover everyone in CDC and ACIP’s priority groups. 
Another challenge in responding to a pandemic will be to clearly communicate information about the situation and the nation’s response plans to public health officials, providers, and the public. Experience during the 2004–05 vaccine shortage illustrates the critical role communication plays when information about vaccine supply is unclear. Communicating a consistent message and clearly explaining any apparent inconsistencies. In a pandemic, clear communication on who should be vaccinated will be important, particularly if the priority population differs from those targeted for annual influenza vaccination, or if the priority groups in one area of the country differ from those in others. During the 2004–05 influenza season, health officials in Minnesota reported that some confusion resulted when the state determined that vaccine was sufficient to meet demand among the state’s narrower priority groups and made vaccine available to other groups, such as healthy individuals aged 50–64 years, earlier than recommended by CDC. Health officials in California reported a similar situation. State health officials pointed out that in mid-December, local radio stations in California were running two public service announcements—one from CDC advising those 65 and older to be vaccinated and one from the California Department of Health Services advising those 50 and older to be vaccinated. State officials emphasized that these mixed messages created confusion. Communicating information from a primary source. Having a primary and timely source of information will be important in a pandemic. In the 2004–05 influenza season, individuals seeking vaccine could have found themselves in a communication loop that provided no answers. 
For example, CDC advised people seeking influenza vaccine to contact their local public health department; in some cases however, individuals calling the local public health department would be told to call their primary care provider, and when they called their primary care provider, they would be told to call their local public health department. This lack of a reliable source of information led to confusion and possibly to high-risk individuals’ giving up and not receiving the protection of an annual influenza vaccination. Recognizing that different communication mechanisms are important and require resources. Another challenge in communicating plans in the event of a pandemic will be to ensure that the communication mechanisms used reach all affected populations. During the 2004–05 influenza season, public health officials reported the importance of different methods of communication. For example, officials from the Seattle–King County Public Health Department in Washington State reported that it was important to have a hotline as well as information posted on a Web site, because some seniors calling Seattle–King County’s hotline reported that they did not have access to the Internet. According to state and local health officials, however, maintaining these communication mechanisms took time and strained personnel resources. In Minnesota, for example, to supplement state employees, the state health department asked public health nursing students to volunteer to staff the state’s influenza vaccine hotline. Educating health care providers and the public about all available vaccines. 
For the 2004–05 season, approximately 3 million doses of nasal spray vaccine were ultimately available for vaccinating healthy individuals aged 5–49 years who were not pregnant, including some individuals (such as health care workers in this age group and household contacts of children younger than 6 months) in the priority groups defined by CDC and ACIP, yet some of these individuals were reluctant to use this vaccine because they feared that the live virus in the nasal spray could be transmitted to others. State health officials in Maine, for example, reported that the state purchased about 1,500 doses of the nasal spray vaccine for its emergency medical service personnel and health care workers, yet administered only 500 doses. Challenges in ensuring an adequate and timely supply of influenza vaccine and antiviral drugs—which can help prevent or mitigate the number of influenza-related deaths until a pandemic influenza vaccine becomes available—may be exacerbated during an influenza pandemic. Particularly given the time needed to produce vaccines, influenza vaccine may be unavailable or in short supply during the initial stages of a pandemic. According to CDC, maintaining an abundant annual influenza vaccine supply is critically important for protecting the public’s health and improving our preparedness for an influenza pandemic. The shortages of influenza vaccine in 2004–05 and previous seasons have highlighted the fragility of the influenza vaccine market and the need for its expansion and stabilization. In its budget request for fiscal year 2006, CDC reports that it plans to take steps to ensure an expanded influenza vaccine supply. The agency’s fiscal year 2006 budget request includes $30 million for CDC to enter into guaranteed-purchase contracts with vaccine manufacturers to ensure the production of bulk monovalent influenza vaccine. 
If supplies fall short, this bulk product can be turned into a finished trivalent influenza vaccine product for annual distribution. If supplies are sufficient, the bulk vaccine can be held until the following year’s influenza season and developed into finished vaccines if the bulk products maintain their potency and the circulating strains remain the same. According to CDC, this guarantee will help expand the influenza market by providing an incentive to manufacturers to expand capacity and possibly encourage additional manufacturers to enter the market. In addition, CDC’s fiscal year 2006 budget request includes an increase of $20 million to support influenza vaccine purchase activities. In the event of a pandemic, before a vaccine is available or during a period of limited vaccine supply, use of antiviral drugs could have a significant effect. Antiviral drugs can be used against all strains of pandemic influenza and, because they can be manufactured and stored before they are needed, could be available both to prevent illness and, if administered within 48 hours after symptoms begin, to treat it. Like vaccine, antiviral drugs take several months to produce from raw materials, and according to one antiviral drug manufacturer, the lead time needed to scale up production capacity and build stockpiles may make it difficult to meet any large-scale, unanticipated demand immediately. HHS’s National Vaccine Program Office also reported that in a pandemic, the manufacturing capacity and supply of antiviral drugs are likely to be less than the global demand. For these reasons, the National Vaccine Program Office reported that analysis is under way to determine optimal strategies for antiviral drug use when supplies are suboptimal; the office also noted that antiviral drugs have been included in the national stockpile. HHS has purchased more than 7 million doses of antiviral drugs for the national stockpile. 
Nevertheless, this stockpile is limited, and it is unclear how much will be available in the event of a pandemic, given existing production capacity. Moreover, some influenza virus strains can become resistant to one or more of the four approved influenza antiviral drugs, and thus the drugs may not always work. For example, the avian influenza virus strain (H5N1) identified in human patients in Asia in 2004 and 2005 has been resistant to two of four existing antiviral drugs. The lack of sufficient hospital and workforce capacity is another challenge that may affect response efforts during an influenza pandemic. The lack of sufficient capacity could be more severe during an influenza pandemic compared with other natural disasters, such as a tornado or hurricane, or with an intentional release of a bioterrorist agent because it is likely that a pandemic would result in widespread and sustained effects. Public health officials we spoke with said that a large-scale outbreak, such as an influenza pandemic, could strain the available capacity of hospitals by requiring entire hospital sections, along with their staff, to be used as isolation facilities. In addition, most states lack surge capacity—the ability to respond to the large influx of patients that occurs during a public health emergency. For example, few states reported having the capacity to evaluate, diagnose, and treat 500 or more patients involved in a single incident. In addition, few states reported having the capacity to rapidly establish clinics to immunize or treat large numbers of patients. Moreover, shortages in the health care workforce could occur during an influenza pandemic because higher disease rates could result in high rates of absenteeism among workers who are likely to be at increased risk of exposure and illness or who may need to care for ill family members. Important challenges remain in the nation’s preparedness and response should an influenza pandemic occur in the United States. 
As we learned in the 2004–05 influenza season, when vaccine supply, relative to demand, is limited, planning and effective communication are critical to ensure timely delivery of vaccine to those who need it. HHS’s current draft plan lacks some key information for planning our nation’s response to a pandemic. It is important for the federal government and the states to work through critical issues—such as how vaccine will be purchased, distributed, and administered; which population groups are likely to have priority for vaccination; what communication strategies are most effective; and how to address issues related to vaccine and antiviral supply and hospital and workforce capacity—before we are in a time of crisis. Although HHS contends that agency flexibility is needed during a pandemic, until key federal decisions are made, public health officials at all levels may find it difficult to plan for an influenza pandemic, and the timeliness and adequacy of response efforts may be compromised. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information about this testimony, please contact Marcia Crosse at (202) 512-7119. Jennifer Major, Nick Larson, Gay Hee Lee, Kim Yamane, George Bogart, and Ellen W. Chu made key contributions to this statement. Influenza Pandemic: Challenges Remain in Preparedness. GAO-05-760T. Washington, D.C.: May 26, 2005. Flu Vaccine: Recent Supply Shortages Underscore Ongoing Challenges. GAO-05-177T. Washington, D.C.: November 18, 2004. Emerging Infectious Diseases: Review of State and Federal Disease Surveillance Efforts. GAO-04-877. Washington, D.C.: September 30, 2004. Infectious Disease Preparedness: Federal Challenges in Responding to Influenza Outbreaks. GAO-04-1100T. Washington, D.C.: September 28, 2004. Emerging Infectious Diseases: Asian SARS Outbreak Challenged International and National Responses. GAO-04-564. Washington, D.C.: April 28, 2004. Public Health Preparedness: Response Capacity Improving, but Much Remains to Be Accomplished. GAO-04-458T. Washington, D.C.: February 12, 2004. Infectious Diseases: Gaps Remain in Surveillance Capabilities of State and Local Agencies. GAO-03-1176T. Washington, D.C.: September 24, 2003. Severe Acute Respiratory Syndrome: Established Infectious Disease Control Measures Helped Contain Spread, but a Large-Scale Resurgence May Pose Challenges. GAO-03-1058T. Washington, D.C.: July 30, 2003. SARS Outbreak: Improvements to Public Health Capacity Are Needed for Responding to Bioterrorism and Emerging Infectious Diseases. GAO-03-769T. Washington, D.C.: May 7, 2003. Infectious Disease Outbreaks: Bioterrorism Preparedness Efforts Have Improved Public Health Response Capacity, but Gaps Remain. GAO-03-654T. Washington, D.C.: April 9, 2003. Bioterrorism: Preparedness Varied across State and Local Jurisdictions. GAO-03-373. Washington, D.C.: April 7, 2003. Global Health: Challenges in Improving Infectious Disease Surveillance Systems. GAO-01-722. Washington, D.C.: August 31, 2001. Flu Vaccine: Steps Are Needed to Better Prepare for Possible Future Shortages. GAO-01-786T. Washington, D.C.: May 30, 2001. Flu Vaccine: Supply Problems Heighten Need to Ensure Access for High-Risk People. GAO-01-624. Washington, D.C.: May 15, 2001. Influenza Pandemic: Plan Needed for Federal and State Response. GAO-01-4. Washington, D.C.: October 27, 2000. West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000. Global Health: Framework for Infectious Disease Surveillance. GAO/NSIAD-00-205R. Washington, D.C.: July 20, 2000. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Shortages of influenza vaccine in the 2004-05 and previous influenza seasons and mounting concern about recent avian influenza activity in Asia have raised concern about the nation's preparedness to deal with a worldwide influenza epidemic, or influenza pandemic. Although the extent of such a pandemic cannot be predicted, according to the Centers for Disease Control and Prevention (CDC), an agency within the Department of Health and Human Services (HHS), it has been estimated that in the absence of any control measures such as vaccination or antiviral drugs, a "medium-level" influenza pandemic could kill up to 207,000 people in the United States, affect from 15 to 35 percent of the U.S. population, and generate associated costs ranging from $71 billion to $167 billion in the United States. GAO was asked to discuss the challenges the nation faces in responding to the threat of an influenza pandemic, including the lessons learned from previous annual influenza seasons that can be applied to its preparedness and overall ability to respond to a pandemic. This testimony is based on GAO reports and testimony issued since 2000 on influenza vaccine supply, pandemic planning, emergency preparedness, and emerging infectious diseases and on current work examining the influenza vaccine shortage in the United States for the 2004-05 influenza season. The nation faces multiple challenges to prepare for and respond to an influenza pandemic. First, key questions about the federal role in purchasing and distributing vaccines during a pandemic remain, and clear guidance on potential priority groups is lacking in HHS's current draft of its pandemic preparedness plan. For example, the draft plan does not establish the actions the federal government would take to purchase or distribute vaccine during an influenza pandemic. 
In addition, as was highlighted in the nation's recent experience responding to the unexpected influenza vaccine shortage for the 2004-05 influenza season, clear communication of the nation's response plan will be a major challenge. During the 2004-05 influenza season, state health officials reported that mixed messages created confusion. For example, CDC advised vaccination for persons aged 65 and older, and at the same time a state advised vaccination for persons aged 50 and older. Further challenges include ensuring an adequate and timely supply of influenza vaccine and antiviral drugs, which can help prevent or mitigate the number of influenza-related deaths. Particularly given the length of time needed to produce vaccines, influenza vaccine may be unavailable or in short supply and might not be widely available during the initial stages of a pandemic. Finally, the lack of sufficient hospital and health care workforce capacity to respond to an infectious disease outbreak may also affect response efforts during an influenza pandemic. Public health officials we spoke with said that a large-scale outbreak, such as an influenza pandemic, could strain the available capacity of hospitals by requiring entire hospital sections, along with their staff, to be used as isolation facilities.
NIH’s extramural research funding efforts reflect its large, decentralized organization. Twenty-four of the 27 ICs fund extramural research, each with a separate appropriation, and these ICs make final decisions on which extramural research projects to fund following a standard peer review process defined by law and NIH policy. As the central office at NIH, the OD establishes NIH policy and is responsible for overseeing the ICs, including their extramural research funding efforts, to ensure that ICs operate in accordance with NIH’s policies. NIH is required by law to use a peer review system in its process for making extramural grant awards. In September 2009 we described this peer review system as two sequential levels of peer review by panels of experts in various fields of research that help NIH identify the most promising extramural grant applications to fund, as defined primarily by an assessment of the applications’ scientific merit. Initial peer review groups conduct NIH’s first level of peer review. These groups review the applications assigned to them and assess their scientific merit, using criteria that require reviewers to examine such components as a grant application’s design and methodology, innovation, and scientific significance. Using these criteria, the initial peer review groups assign a priority score to each application they review; these scores are used to rank the applications within their cohort. After the applications are scored and ranked, the information is forwarded to the appropriate IC—based on the applications’ proposed area of research—for the second level of peer review. Each IC that funds extramural research has its own advisory council, which conducts the second level of NIH’s peer review. Advisory councils consist of no more than 18 voting members, two-thirds of whom are scientists in the research areas of the IC and one-third of whom are leaders in nonscience fields. 
Under law and NIH policy, the advisory councils are responsible for reviewing the applications and their priority scores and, based on this review, recommending or not recommending to the ICs certain applications for funding consideration. The advisory councils’ recommendations conclude NIH’s peer review process. After NIH’s peer review process has been concluded, the director of each IC is responsible for considering the recommendations of the advisory council and for making final extramural funding decisions. In general, NIH makes extramural grant award decisions based on scientific merit, relevance to the IC’s scientific priorities, and the availability of funds appropriated to each IC. As noted previously, the scientific merit of extramural grant applications is determined by NIH’s peer review system and reflected in the applications’ priority scores. Each of the ICs focuses on specific scientific priorities. To aid in grant funding decisions, each IC establishes a funding line—known as the payline—which is determined by the number of extramural grant applications the IC anticipates funding that year. The payline for any given year is based on projections of the total funding available at the IC that year for grants, the average dollar amount expected to be awarded per application, and the number of applications received by the IC. While IC directors typically fund applications that fall within the payline, they are not required to fund applications based strictly on the applications’ priority scores or the payline. After the ICs determine which extramural grant applications to fund, they must also determine the specific award amount and the length of the grant project. Determining the specific award amount may involve negotiation between NIH and the grant applicant, as well as the submission of additional documentation by the grant applicant prior to awarding the grant. 
For example, NIH may ask an applicant to reduce the scope and the proposed budget for the grant application if the IC does not have sufficient funds to provide 100 percent of the funding requested by the grant applicant. NIH grants may be funded for up to a 5-year project period, with funding for each year contingent on the availability of funds and satisfactory progress of the project. NIH’s ICs may provide additional funding to current active grants through both administrative supplements and competitive revisions. The ICs award administrative supplements using an administrative review process at the IC level. Administrative supplements award additional funds during the current project period to an existing extramural grant award that was previously peer reviewed—for example, by allowing grantees to add personnel or purchase additional equipment. All additional costs must be within the scope of the peer-reviewed and approved project. Competitive revisions are funds added to existing extramural grant awards in order to support new research objectives or other changes in scope. Like the original grant award to which they are added, competitive revisions are awarded using NIH’s peer review process. In fiscal year 2009, NIH received over 26,000 non–Recovery Act grant applications for R01-equivalent grants—R01 grants are the most common type of extramural grant—and 22 percent of these applications were funded, for an average of $391,000. NIH awards grants in all 50 states, Washington, D.C., and U.S. territories and possessions, as well as to foreign institutions and international organizations. In fiscal year 2009 NIH awarded about two-thirds of all non–Recovery Act grant funds, including extramural research grant funds, to institutions in 10 states: California, Illinois, Maryland, Massachusetts, New York, North Carolina, Ohio, Pennsylvania, Texas, and Washington. 
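The payline mechanics described earlier reduce to simple arithmetic: projected funding divided by the expected average award yields the number of applications the budget can support, and the payline is the rank cutoff that captures that many applications. The sketch below is a minimal illustration using invented figures, not actual NIH data.

```python
def payline_cutoff(projected_funding, avg_award, applications_received):
    """Estimate how many ranked applications fall within the payline.

    Inputs mirror the three factors described in the text: projected total
    funding available for grants, the expected average award amount, and
    the number of applications received. All values here are hypothetical.
    """
    fundable = int(projected_funding // avg_award)  # awards the budget supports
    cutoff = min(fundable, applications_received)   # cannot exceed demand
    success_rate = cutoff / applications_received
    return cutoff, success_rate

# Invented example: $100 million projected, $400,000 average award,
# 1,000 applications received.
cutoff, rate = payline_cutoff(100_000_000, 400_000, 1_000)
print(cutoff, rate)  # 250 0.25
```

As the text notes, IC directors typically fund applications within this cutoff but are not strictly bound by it, so the payline serves as a guide rather than a rule.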
NIH used its standard review processes—peer review or administrative review—to make extramural grant awards with its Recovery Act funding. NIH selected grant applications for Recovery Act funding based on NIH standard review criteria, as well as three additional criteria for Recovery Act grants. In order to make extramural grant awards in fiscal years 2009 and 2010, NIH used its standard review processes. These standard processes were used to review three categories of applications for Recovery Act–funded extramural grants, namely (1) new grant applications received from Recovery Act funding announcements; (2) existing grant applications that NIH received prior to the Recovery Act, but did not fund; and (3) applications for administrative supplements and competitive revisions to current active grants. Specifically, NIH followed its standard peer review process, including review by an initial peer review group and an IC advisory council, to evaluate new grant applications submitted in response to Recovery Act–specific funding opportunity announcements. For example, Challenge and Grand Opportunity (GO) grants were developed specifically for Recovery Act funding. For existing grant applications that had not previously received NIH funding, the three ICs we reviewed set a new payline to guide selection of existing grant applications for Recovery Act funding. These applications had been submitted for NIH funding from annual appropriations prior to the Recovery Act, and had already been reviewed and determined to be scientifically meritorious using NIH’s peer review process. According to NIH officials, most grant applications that fell within the new payline set by the ICs were selected for renegotiation to reduce the projects’ proposed objectives, scope, and budget. 
These renegotiations were required because most grant applications were originally submitted for more than 2 years of funding while NIH generally limited grants under the Recovery Act to projects requiring 2 years or less to complete. NIH and IC officials reported that grant management and program staff ensured that grant applications remained scientifically meritorious when they rescoped 4-year grant applications down to 2 years, but did not assign new priority scores to them. NIH also followed its standard review processes in awarding administrative supplements and competitive revisions to current active grants. For applications for administrative supplements to current active grants, ICs conducted an administrative review of the supplemental request for grant funding. Administrative supplements provide additional funding to existing extramural grant awards that were previously peer reviewed. For competitive revisions to current active grants, ICs conducted a standard peer review of the new grant application. NIH based funding decisions for all Recovery Act extramural grant awards in fiscal years 2009 and 2010 on the three standard criteria NIH uses to award extramural grants, plus three additional criteria established by NIH. The three standard NIH criteria are scientific merit, availability of funds, and relevance to IC scientific priorities: Scientific merit—NIH considered the design and methodology, innovation, and scientific significance of each grant application using the scientific merit priority scores assigned to new grant applications, existing grant applications that had not previously received NIH funding, and competitive revisions to current active grants. Administrative supplements were awarded to current active grants that had been previously peer reviewed. 
Availability of funding—the number of extramural grant applications that could receive Recovery Act funding was determined by the funding available to each IC, which was specified by the Recovery Act to be in proportion to each IC’s fiscal year 2009 appropriation. Relevance to scientific priorities—grant applications were evaluated to determine their relevance to the scientific priorities of the awarding IC. In addition to the three standard NIH criteria, the three ICs we reviewed considered three additional criteria established by NIH—geographic distribution of Recovery Act funds, the potential for job creation, and the potential for scientific progress within 2 years. Guidance from the OD to all ICs encouraged—but did not require—the ICs to consider these three criteria when making Recovery Act funding decisions. The guidance identified the following: Geographic distribution of the Recovery Act funds—ICs were encouraged to consider making awards to grantees in states in which the aggregate success rate for applications to NIH has historically been low. NIH encouraged this geographic distribution in order for NIH Recovery Act funds to have the widest effect across the nation and help state and local fiscal stabilization. Potential for job creation—ICs were also encouraged to consider funding extramural grant applications that had the potential to preserve and create jobs—a main purpose of the Recovery Act. In evaluating applications for administrative supplements, one of the ICs we reviewed gave preference based in part on the number of jobs the supplement was projected to create or retain. Potential for making scientific progress in 2 years—ICs were encouraged to select grant applications for Recovery Act funding in instances where IC officials determined that the applicant had the potential to make scientific progress within a 2-year period rather than over a longer grant duration. 
NIH’s Recovery Act extramural grant awards varied across three categories—awards for applications that had previously been reviewed but had not received funding, awards for new grant applications, and awards for administrative supplements and competitive revisions to current active grants. These awards also varied in size, duration, and research methods, with grantees clustered in certain states and cities. NIH and the ICs communicated a variety of information to the public about the grant awards—including information about grantees—through NIH’s Web sites. GAO’s analysis of NIH data shows that NIH Recovery Act grant awards varied across three grant categories, with significant further variation in the specific distribution of awards across these three grant categories at the three ICs we reviewed. As of April 2010, NIH had used about $7 billion of its $8.6 billion in Recovery Act scientific research funds and CER funds to make over 14,000 extramural grant awards. Specifically, NIH used nearly $2.7 billion of Recovery Act funding for grant applications that had previously been peer reviewed by NIH but had not received NIH funding; slightly over $2.4 billion for new grant applications received from Recovery Act funding announcements; and about $1.9 billion for administrative supplements and competitive revisions to current active grants. The distribution of Recovery Act awards among the three categories of extramural grants varied significantly across the three ICs we reviewed. For example, we found that as of April 2010, NIAID used 69 percent of its Recovery Act funds for existing grant applications that had not previously received NIH funding, while NCI used 31 percent of its Recovery Act funds for existing grant applications that had not previously received NIH funding. 
In contrast, NHLBI used 51 percent of its Recovery Act funds for new grant applications from Recovery Act funding opportunity announcements, while NIAID used 5 percent of its Recovery Act funding for new grant applications. (See fig. 1 for the distribution of Recovery Act awards among the three categories of grants at the three ICs we reviewed.) GAO’s analysis of NIH data also shows that as of April 2010, the 14,152 extramural grant awards NIH made with Recovery Act funds varied in the size of the grant award, award duration, and research methods, with grantees clustered in certain states, cities, and universities. (See app. I for illustrative examples of 45 extramural grant awards made with Recovery Act funds.) Grant award size: As of April 2010, we found that the average Recovery Act extramural grant award was slightly more than $492,000, while about 25 percent of grants were awarded $623,000 or more. The median size of Recovery Act grant awards was nearly $250,000, and Recovery Act grant awards ranged in amount from $3,000 to about $29.6 million. NIH awarded 1,259 Recovery Act extramural grants of $1 million or more, of which 86 were for $5 million or more. Award duration: According to NIH officials, most Recovery Act extramural grants were for durations of 2 years or less at the three ICs we reviewed, but a few Recovery Act extramural grants were for durations longer than 2 years. ICs generally limited their Recovery Act extramural grant durations to 2 years or less in order to fund these grants with Recovery Act funding, which is available for obligation until September 30, 2010. However, NIH granted ICs the flexibility to fund longer-term extramural grants using Recovery Act funds for the first 2 years and annual appropriations for additional years, if the grant is consistent with the IC’s priorities. 
For example, officials at one IC reported that because some early-stage principal investigators may require more than 2 years to demonstrate success in their chosen field of study, the IC offered longer-term awards to these investigators that were partially funded with Recovery Act funds and that it expects will be partially funded in subsequent years with annual appropriations. NIH officials explained that a potential “cliff effect,” or sharp reduction in application success rates—the percentage of grant applications that receive NIH grant funding—could result beginning in fiscal year 2011 when Recovery Act funds are no longer available for grants. According to NIH officials, the “cliff effect” could potentially occur in two ways. First, recipients of 2-year Recovery Act awards may apply for additional funding to extend their projects—potentially increasing the number of grant applications in future years. Second, officials at two of the ICs we reviewed reported that they committed to supporting grants for a duration longer than 2 years using annual appropriations in fiscal year 2011, which may reduce the amount of funds that will be available to make new grant awards. NIH officials reported that the possible increase in applications resulting from the completion of the Recovery Act awards will be staggered across the next few years, and one official reported that the agency will continue to make decisions about funding research that meets its standard criteria. Research methods: NIH officials reported that NIH used Recovery Act funds to make grants for projects with a variety of research methods, such as clinical trials. NIH officials also reported that while NIH does not track all forms of research methods, the research methods used in connection with the over 14,000 extramural grants awarded using Recovery Act funds were similar to the research methods used in connection with the extramural grants funded using annual appropriations. 
The officials explained that the data available for fiscal year 2009 indicate that extramural research grants funded under the Recovery Act had similar research methods, and were awarded in roughly the same proportions, as extramural grants funded with annual appropriations. NIH officials also reported that the agency has no general policy regarding which scientific methods should be supported using Recovery Act funds or annual appropriations, and that NIH left these decisions to the ICs. Officials at one IC reported that the IC excluded grant applications involving long-term clinical trials using human subjects or long-term studies involving animal subjects from consideration for Recovery Act funding because Recovery Act funding was generally used for shorter-term grants—that is, grants where the specific aims or scope could be accomplished within the 2-year duration of the award. Geographic distribution: Consistent with the pattern of grants funded with annual appropriations in fiscal year 2009, NIH Recovery Act grantees were clustered in certain states. Of the over 14,000 Recovery Act extramural grants awarded as of April 2010, six states—California, Massachusetts, New York, North Carolina, Pennsylvania, and Texas—accounted for 50 percent of awards; six cities—Baltimore, Boston, Los Angeles, New York, Philadelphia, and Seattle—accounted for over 25 percent of awards; and five universities—Duke University, Johns Hopkins University, the University of Michigan at Ann Arbor, the University of Pennsylvania, and the University of Washington—received over 10 percent of awards. NIH communicated a variety of information to the public about the extramural grant awards it made using Recovery Act funds. Information on Recovery Act extramural grant awards was communicated to the public through existing and new Recovery Act–specific Web pages. 
For example, NIH made information about Recovery Act extramural grants available through its existing Research Portfolio Online Reporting Tools (RePORT) system, an NIH Web-based reporting tool. The RePORT system contains information on both Recovery Act and non–Recovery Act extramural grants; for example, the site includes reports, analysis, and data on NIH research activities, such as the fiscal year of the award, the location of the grantee, and the awarding IC. (See app. I for extracts of information provided by NIH about extramural grants that were awarded Recovery Act funds, including information from NIH’s Web site.) In addition to these existing Web sites, NIH and the ICs also developed Recovery Act–specific pages on their Web sites to disseminate information about Recovery Act grants, including extramural grants. For example, NIH highlighted information on Recovery Act–funded extramural grants, covering major topics of interest to the public and to groups involved in biomedical research funding, through NIH Recovery Act reports on its Web site. NIH and the ICs posted background stories on particular projects and principal investigators on their Web sites. NIH also made press releases available about Recovery Act–funded projects through the NIH Recovery Act news releases page on its Web site. A draft of this report was provided to HHS for review and comment. HHS provided technical comments that were incorporated as appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to other interested congressional committees, the Secretary of HHS, and the Director of NIH. This report will also be available on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact Linda T. Kohn at (202) 512-7114 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. The 45 grants presented below comprise a sample of 15 extramural grants awarded with American Recovery and Reinvestment Act of 2009 (Recovery Act) funds from each of the three Institutes and Centers (IC) we reviewed—National Cancer Institute (NCI), National Heart, Lung, and Blood Institute (NHLBI), and National Institute of Allergy and Infectious Diseases (NIAID). For each IC, the 15 grants were randomly selected from the three different categories of grant applications—new grant applications from Recovery Act funding opportunity announcements, existing grant applications that had not previously received National Institutes of Health (NIH) funding, and administrative supplements and competitive revisions to current active grants. The number of grants selected in each grant category was in proportion to the amount of Recovery Act funding awarded for each category by the IC; these grants are not representative of all Recovery Act extramural grant awards. Grants were assigned categories as follows: New applications—New grant applications from Recovery Act funding opportunity announcements. Existing applications—Existing grant applications that had not previously received NIH funding. Supplements and revisions—Administrative supplements and competitive revisions to current active grants. The information presented in this appendix about each of the Recovery Act extramural grants was provided by NIH. In particular, the grant project titles, administering IC, grantee organization, and abstract descriptions were reprinted from information supplied by NIH. We did not edit them in any way, such as to correct typographical or grammatical errors in the abstract descriptions. We calculated the grant award size reported for each of the 45 grants from NIH information on Recovery Act funds. 
The grant awards ranged from about $13,000 to about $7.2 million. CSHL Molecular Target Discovery and Development Center Cold Spring Harbor Laboratory P.O. Box 100 Cold Spring Harbor, NY 11724 In this application we describe our plans to create a Molecular Target Discovery and Development Center (MTDDC) that will act as downstream component of The Cancer Genome Atlas (TCGA) project. Our premise is that the complexity of cancer genome alterations leads directly to the heterogeneity of cancer behavior and outcome, and that to translate the wealth of cancer genome characterization into clinical utility requires the functional identification and validation of the underlying driver genes. Driver gene identification will lead to a deeper understanding of cancer genotypes, create an important new set of biomarkers and therapeutic targets, and when combined with genome-wide RNAi screens, lead to the identification of key genetic vulnerabilities that will serve as a new generation of therapeutic targets. Our planned center is a natural expansion of long-standing collaborative projects at Cold Spring Harbor Laboratory (CSHL) and combines several powerful methods that we have developed and will continue to build upon as outlined in this application. These methods include flexible mouse models based on the transplantation of genetically-manipulated progenitor cells into the appropriate tissues of recipient mice; novel bioinformatics that take complex cancer genome datasets and pinpoint candidate driver genes and considerably altered pathways; new RNAi technology to manipulate the expression of candidate target genes in vitro and in vivo; and genome-wide RNAi screens to find genetic vulnerabilities of cancer cells. The CSHL MTDDC will use these innovative tools to place the complex array of genomic alterations identified by cancer genome projects into biologic context. 
High-throughput screening in mouse models will be used to determine whether candidate genes are drivers or passengers. Additionally, through the identification of those driver genes that are required for tumor maintenance and by genome-wide RNAi screens to find the druggable vulnerabilities of major cancer genotypes, we will discover and validate a new generation of cancer drug targets. The resultant data, reagents, and newly validated biomarkers and targets will be openly shared among the TCGA network and broader cancer research communities, as we have done with RNAi Codex, CSHL’s open-access portal/database for short-hairpin RNA (shRNA) gene-silencing constructs. Supporting New Faculty Recruitment Through Biomedical Research Core Center University of Kentucky 109 Kinkead Hall Lexington, KY 40506-0057 The strategic plan of the University of Kentucky (UK) P30 application in response to RFA-OD-09-005 is to provide support for the recruitment of 2 junior investigators who will be immersed into a highly collaborative, interdisciplinary group of investigators focused on the diagnosis, prevention and treatment of gastrointestinal (GI) cancers. This productive group consists of basic and clinical scientists, including molecular and cell biologists, clinician-scientists (surgeons, gastroenterologists, and medical oncologists), GI pathologists, epidemiologists, biostatisticians and investigators in the School of Pharmacy with successful programs in drug design and delivery. The purpose of this program is to support promising junior investigators who will participate in translational GI cancer research projects as part of our recently-funded P20 program (P20 CA127004) which provides support for the development of a fully-funded P50 GI cancer SPORE application. 
Our goal is to develop a cadre of future GI cancer investigators who can participate at the intersection of molecular biology, drug discovery and clinical care to become leaders in integrative and team approaches to understand the complex issues of GI cancer as it relates to potential prevention and treatment strategies. This proposal builds upon the momentum and existing strengths at the Markey Cancer Center and is further supported by substantial institutional, state and philanthropic support. Targeting PTEN Null Tumors via Inhibition of the p110beta Isoform of PI3 Kinase Dana-Farber Cancer Institute 44 Binney St Boston, MA 02115 The class IA phosphatidylinositol 3 kinase (PI3K) signaling axis is perhaps the most frequently activated pathway in human cancer. In response to the activation of receptor tyrosine kinases (RTKs), G-protein coupled receptors (GPCRs) or Ras, class IA PI3Ks, consisting of three catalytic isoforms termed p110α, p110β and p110δ, are activated to generate the primary intracellular lipid signal, phosphatidylinositol 3,4,5-trisphosphate (PIP3), which is essential for multiple cellular processes. The tumor suppressor PTEN, a lipid phosphatase, dephosphorylates PIP3, thereby antagonizing the actions of PI3K and regulating the PI3K pathway activity. Pathway activation in tumors is most commonly achieved through activating mutations in p110α isoform or via loss of the PTEN tumor suppressor. Importantly, PI3K enzymes are highly suited for pharmacological intervention, making them attractive targets for cancer therapy. In fact, there are a number of PI3K inhibitors from major pharmaceutical companies that have entered clinical trials for cancer treatment, but most of these inhibitors target all p110 isoforms, which may cause side effects arising from the essential roles of PI3K in normal physiology. While isoform specific inhibitors are being further developed, most of which are directed toward p110α (for solid tumors) or p110δ (hematological malignancies). We believe that the drug companies have blundered by failing to develop p110β-specific inhibitors. We and others have recently demonstrated that tumors driven by PTEN loss are specifically dependent of p110β not p110α. The broad goal of this project is to generate p110β-specific inhibitors for use as new, targeted therapeutics in diverse cancers featuring PTEN mutations. To this end we have assembled a team of scientists optimized to achieve this goal. Our team’s unique reagents for assessing PI3K signaling, coupled with and our expertise in protein chemistry, X-ray crystallography, medicinal chemistry and animal models, position us to effectively develop p110β inhibitors over a two-year time period for future clinical trials. Our specific goals are to generate cell-based systems and genetic models to determine the role of p110β in tumorigenesis driven by PTEN in different tissue types and to test p110β-specific inhibitors, to purify large amounts of active p110β for enzyme assays and crystallography and to pursue a chemistry campaign to design and evaluate new scaffolds for p110β inhibition and optimize 2 of these scaffolds using both cell and animal models and structural information from a complex of p110β and an inhibitor. Role of TIEG1 in Foxp3+Treg development and tumor progression Wayne State University Sponsored Program Administration Detroit, MI 48202 Although tumor vaccines can induce CD4 helper and CD8 cytotoxic response against tumor antigens, they have been largely ineffective in causing tumor regression in the clinic. This is because the tumor cells acquire many mechanisms to evade the immune surveillance program of the host. Foxp3+CD4+CD25+Treg-mediated immune suppression has emerged as one of the crucial tumor immune evasion mechanisms and main obstacle of successful tumor immunotherapy. 
Most malignant cells including prostate cancer cells secret large amounts of TGF-β and has been shown to convert the effector T cells into tumor antigen specific Tregs by inducing Foxp3 expression. Such tumor induced Tregs not only suppress the priming and effector function of anti-tumor effector cells but also form a broad network of self-amplifying immunosuppressive network. Therefore, overcoming tumor induced expansion and de novo generation of Tregs is critically important for the design of effective immunotherapeutic strategies for successful cancer treatment. We have demonstrated a critical role of TGF-β inducible early gene-1 (TIEG1) in the transcriptional regulation of Foxp3 in CD4T cells treated with TGF-β. E3 ligase Itch-mediated monoubiquitination is essential for nuclear translocation, and transcriptional activation of TIEG1. However, in transient overexpression systems Itch targets TIEG1 for both mono and polyubiquitination. Our preliminary studies suggest that IL-6 which inhibits TGF-β induced Foxp3 expression induces proteasomal degradation of TIEG1 possibly through polyubiquitination. Tyk2-mediated phosphorylation of TIEG1 seems to act as a recognition signal for polyubiquitination of TIEG1. Therefore, we hypothesize that Itch targets TIEG1 differentially for mono and polyubiquitination when the CD4T cells are stimulated with TGF-β or IL-6 and regulates its activation and degradation. Despite the growing body of data on the role of Foxp3 in Treg development and function, how Foxp3 transcription is regulated is not clear. We have identified consensus NFAT and TIEG1 binding sites adjacent to each other on Foxp3 promoter. Since, most transcription factors work cooperatively with other factors binding in close proximity we hypothesize that NFAT and TIEG1 interact on Foxp3 promoter and regulate chromatin remodeling and Foxp3 expression. 
A clear understanding of the molecular combinations and cross-talks that imprint Foxp3 transcription in CD4 T cells will aid in designing strategies to disrupt the inhibitory network of Tregs in the tumor microenvironment. Using prostate cancer TRAMP-C2 cells, which secrete large amounts of TGF-β, we will analyze the effect of TIEG1 deficiency on Treg development and tumor progression. Since TIEG1 does not affect nTreg development in the thymus, targeting TIEG1 is an appealing strategy to block the de novo induction of Tregs. Such a strategy is expected to eliminate the most potent tumor-specific Tregs that inhibit the anti-tumor immune response without the risk of triggering autoimmunity. Cell Polarity in Self-renewal and Differentiation of Stem/Progenitor Cells Fred Hutchinson Cancer Research Center Box 19024 1100 Fairview Ave N Seattle, WA 98109-1024 Self-renewal and differentiation are fundamental characteristics of all stem/progenitor cells. During mammalian development, stem/progenitor cells use cell polarity mechanisms to divide asymmetrically, renewing themselves while generating daughters that stop proliferating and differentiate. Similar mechanisms are used for self-renewal and differentiation of adult stem cells. Failure of asymmetric cell division in stem cells may result in an inability to withdraw from the cell cycle, perturbations of normal brain development, and cancer. Alternatively, failure of stem cell self-renewal can cause depletion of stem cells, decline in tissue regenerative potential, and premature aging. The molecular mechanisms governing cell polarity and asymmetric cell divisions of mammalian stem/progenitor cells, and their role in aging and cancer, are still poorly understood. This proposal focuses on the cell polarity proteins Lethal giant larvae 1 and 2 (Lgl1 and Lgl2), the mammalian orthologs of the Drosophila neoplastic tumor-suppressor protein Lgl. 
We have evidence that Lgl1 is necessary for regulation of asymmetric cell division of neural progenitor cells during early neurogenesis, and that loss of Lgl1 results in abnormal accumulation of progenitors that fail to withdraw from the cell cycle. Neonatal death of Lgl1-/- mice precluded us from analyzing the potential tumor-suppressor role of Lgls in adult animals and their role in self-renewal of adult stem cells. In this proposal we will use a variety of conditional gene knockout and biochemical approaches to investigate the in vivo role and significance of the entire Lgl gene family and the molecular mechanisms by which Lgl proteins regulate stem/progenitor cell self-renewal and differentiation. These studies will extend our knowledge of the mechanisms of self-renewal and differentiation of mammalian stem/progenitor cells. This information will be useful for future development of efficient regenerative, anti-aging and anticancer therapies. Human CYP2A and respiratory tract xenobiotic toxicity Wadsworth Center Health Research, Inc. Menands, NY 12204-2719 The long-term objective is to determine the role of respiratory tract cytochrome P450 (P450 or CYP) enzymes in target-tissue metabolic activation and toxicity of environmental chemicals. Our focus continues to be on CYP2A13, an enzyme selectively expressed in the human respiratory tract, and the most efficient human P450 enzyme in the metabolic activation of 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), a major tobacco-derived respiratory tract procarcinogen. CYP2A13 is also known to metabolize numerous other important respiratory tract toxicants. 
Our hypothesis, that CYP2A13 plays an important role in tobacco-related lung carcinogenesis in humans, is supported by findings of a recent epidemiological study, and by reports confirming that CYP2A13 protein is expressed in human lung, where it is active in the metabolic activation of NNK, and that P450s in the lung, but not those in the liver, are essential for NNK-induced lung tumorigenesis in mouse models. Furthermore, our preliminary finding that expression of CYP2A13 is downregulated by inflammation offers an explanation for why the levels of CYP2A13 protein detected in patient-derived lung biopsy samples were so low, and suggests the possibility that CYP2A13 levels in intact, healthy lungs are much higher. Here, we propose three series of experiments to overcome the difficulties associated with not being able to directly study P450 expression or activity in intact, healthy human lungs. We will (1) study a CYP2A13-humanized mouse model, in order to provide proof of principle for the potential of CYP2A13 to mediate NNK-induced lung tumorigenesis in humans; (2) perform additional studies to better understand the nature and scope of inflammation-induced suppression of CYP gene expression in the lung; and (3) identify common CYP2A13 genetic variants that cause changes in gene expression (and the underlying mechanisms), in order to provide a biological basis for future epidemiological studies aimed at further confirming the role of CYP2A13 in smoking-induced lung cancer or other chemical toxicities in various ethnic or occupational groups. We believe that our proposed studies are novel, and the anticipated outcomes will be highly relevant to the mechanisms of chemical carcinogenesis and other chemical toxicities in human lung. 
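Rankings of P450 enzymes by metabolic-activation efficiency, such as the claim above that CYP2A13 is "the most efficient human P450 enzyme" for NNK, are conventionally made on the standard Michaelis-Menten framework, where catalytic efficiency is kcat/Km. As a minimal numerical sketch of that comparison (the kinetic constants below are hypothetical placeholders for illustration, not measured values for CYP2A13 or any other enzyme):

```python
# Comparing enzymes by Michaelis-Menten catalytic efficiency (kcat/Km).
# NOTE: all constants below are hypothetical placeholders, not measured
# values for CYP2A13 or any real P450 enzyme.

def mm_rate(kcat: float, km: float, e0: float, s: float) -> float:
    """Michaelis-Menten rate: v = kcat * [E]0 * [S] / (Km + [S])."""
    return kcat * e0 * s / (km + s)

def catalytic_efficiency(kcat: float, km: float) -> float:
    """kcat/Km, the usual figure of merit at low substrate concentration."""
    return kcat / km

# Two hypothetical enzymes acting on the same procarcinogen substrate:
enzyme_a = {"kcat": 2.0, "km": 10.0}    # min^-1, uM
enzyme_b = {"kcat": 1.0, "km": 100.0}   # min^-1, uM

eff_a = catalytic_efficiency(**enzyme_a)
eff_b = catalytic_efficiency(**enzyme_b)
print(eff_a, eff_b)  # enzyme A is ~20x more efficient at sub-Km substrate levels
```

The point of using kcat/Km rather than kcat alone is that environmental procarcinogens are typically present at concentrations far below Km, where the rate is approximately (kcat/Km)[E]0[S].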
Sentinel Node Versus Axillary Dissection in Breast Cancer University of Vermont & St Agric College 85 South Prospect Street Burlington, VT 05405 The long-term objectives of this proposal are to develop and refine methods of breast cancer staging that are substantially less morbid than current methods, yet still provide the same diagnostic and therapeutic benefits. For the past nine years we have partnered with the National Surgical Adjuvant Breast and Bowel Project (NSABP) cooperative group to conduct a large, multi-center, randomized, prospective Phase III trial that compares sentinel lymph node (SLN) resection to conventional axillary dissection in clinically node-negative breast cancer patients (NSABP trial B-32). During the last grant period we also launched a multicenter study to investigate whether detection of bone marrow micrometastases provides enhanced and early prediction of survival in breast cancer patients (NSABP study BP-59). Several tasks have been shared between UVM and NSABP, such as final trial design and protocol development. UVM has had non-overlapping primary responsibility for the following: (1) training and quality control of all aspects of SLN surgery and bone marrow sample procurement; (2) processing and interpretation of SLNs for occult metastases; (3) processing and interpretation of bone marrow samples for disseminated tumor cells; and (4) statistical analysis of three relationships: first, the relationship of training to surgical outcomes and quality of reported data; second, the relationship of occult metastases in SLNs to survival and other patient variables; and third, the relationship of bone marrow micrometastases to survival. During the period of this proposal the first 6 Aims will be fully completed and the 7th Aim will result in complete specimen accrual and interpretation. 
The Specific Aims of the current active grant are: Specific Aims #1 and #2: Determine whether SLN resection alone, when compared to ALN dissection plus SLN resection, results in equivalent long-term control of regional disease (Aim 1) and equivalent disease-free and overall survival (Aim 2). Aim #3: Determine the magnitude of morbidity reduction with SLN surgery versus ALN resection. Aim #4: Determine the magnitude of quality-of-life improvement with SLN surgery versus ALN resection. Aim #5: Determine whether standardized immunohistochemistry analysis of hematoxylin and eosin-negative SLNs identifies patients at risk for decreased overall and disease-free survival. Aim #6: Establish a standardized method of SLN surgery in a large number of centers for procedural consistency. Aim #7: Determine the relative risk of death associated with the presence of tumor cells in the bone marrow of breast cancer patients and investigate the relationship between 2 tumor cell detection methods, brightfield and immunofluorescence cytochemistry, in detecting bone marrow micrometastases. p53 Acetylation as a Mechanism in Chemoprevention by Aspirin Texas Tech University Health Scis Center 3601 4th Street - Ms 6271 Lubbock, TX 79430-6271 A vast body of epidemiological, preclinical and clinical studies has revealed aspirin as a promising chemopreventive agent, particularly in epithelial carcinogenesis. Despite the wide attention that inhibition of cyclooxygenases has received, it is clear that aspirin elicits a myriad of molecular effects that counteract carcinogenic events. Since aspirin’s protective effect has been observed mainly in epithelial cell types, which are more resistant to chemotherapeutic efforts, an urgent need exists to dissect and identify the primary targets and cancer-preventive pathways affected by aspirin. 
In preliminary studies, we have obtained the first strong evidence for a dose- and time-dependent acetylation of the p53 tumor suppressor protein by aspirin in MDA-MB-231 human breast cancer cells, in several cancer cell lines belonging to different tumor types, and also in normal liver cells. In MDA-MB-231 cells, aspirin induced the levels of the p53 target genes p21CIP1, a protein involved in cell cycle arrest, and Bax, a proapoptotic protein; however, p21 induction was transient (1-12 h), whereas induction of Bax was sustained (24 h). Interestingly, in DNA-damaged cells (induced by camptothecin), aspirin treatment (24 h) inhibited the p21 induction, while the Bax induction was unaffected. Building on these findings, the central hypothesis of this R03 pilot project is that aspirin-induced multi-site acetylation of p53 alters its transcription factor function by shifting the gene expression spectrum from genes that elicit cell cycle arrest / prosurvival properties to those that promote and drive cell death. Since deletion of the p21 gene has previously been shown to increase the sensitivity of cells to apoptosis, our observation that aspirin inhibits p21 suggests a potential mechanism by which it may exert anti-cancer effects in DNA-damaged cells. The studies proposed in this application will determine the mechanisms by which aspirin regulates apoptosis in DNA-damaged cells via inhibition of p21. We will use MDA-MB-231 and MCF-7 breast cancer cells as well as normal human peripheral blood mononuclear cells in our study. The experiments in Aim 1 will investigate the molecular basis of aspirin-mediated inhibition of p21 using real-time RT-PCR, electrophoretic mobility shift assays, and run-on transcription assays. We will also identify aspirin-induced acetylation sites on p53. In Aim II, we will determine the ability of aspirin to augment apoptosis in cells exposed to DNA-damaging drugs by clonogenic cell survival assays and flow cytometry. 
In addition to camptothecin, all studies will be extended to include doxorubicin and cisplatin, to determine whether aspirin also modulates p21 / Bax expression induced by these DNA-damaging drugs. These studies will provide a novel mechanism by which aspirin may exert anticancer effects in DNA-damaged cells via acetylation of p53, induction of Bax and inhibition of p21. The role of pheomelanin in cutaneous melanoma Tufts Medical Center 800 Washington St Boston, MA 02111-1526 Ultraviolet (UV) radiation represents a definitive risk factor for skin cancer, particularly in combination with certain underlying genetic traits, such as red hair and fair skin. Skin pigmentation results from the synthesis of melanin in pigment-producing cells, the melanocytes, followed by distribution and transport of the pigment granules to neighboring keratinocytes. Epidemiological studies have found less skin cancer in people who have high levels of constitutive pigment and/or tan well. However, we have an incomplete understanding of other factors involved in the development of skin cancer, such as the capacity to repair photodamage in people of different skin colors. The finding that albinos have a lower incidence of melanoma than people with fair skin makes this question more complex. Recent findings, including our own, have led to a realization that melanin, especially pheomelanin (a yellow/red form of melanin), acts as a potent UVB photosensitizer to induce DNA damage and cause apoptosis in mouse skin. The proposed research will focus on the role of pheomelanin in DNA damage, at both the genomic and individual-nucleotide levels, and on the subsequent activation of DNA repair, alteration in chromatin structure, and ultimately melanoma formation. We hypothesize that pheomelanin contributes to UV-induced DNA damage that is incompletely repaired. 
Although DNA repair may be activated to a larger extent in response to the greater DNA damage in pheomelanin-containing skin, the repair will be insufficient to eliminate all mutagenic adducts. We will first identify the role of pheomelanin in melanoma formation using melanoma mouse models. Second, we will define the photoproducts and oxidative stress to DNA in mice with different types of epidermal pigmentation at different times after UVB irradiation, using quantitative methods. Third, we will map DNA damage in specific sequences of the BRAF and N-RAS genes, both of which are frequently mutated in human melanoma. Finally, we will measure the expression of genes in DNA repair pathways at different times after UVB irradiation. Given the vital role that pheomelanin plays in normal phototoxicity and disease, these studies will provide important insights into the homeostasis of tanning and the pathogenesis of disorders like melanoma. Expanding our knowledge of DNA repair in different skin types provides rich ground for melanoma prevention and for the development of targeted small-molecule therapeutics. Cancers in Older Minority Populations: Caribbean American Long Island University Brooklyn Campus 1 University Plz Brooklyn, NY 11201-8423 Strong ties have developed between investigators at Long Island University’s Brooklyn campus (LIU) and Columbia University’s Herbert Irving Comprehensive Cancer Center (HICCC) over the past four years. Several joint grants and projects have resulted, totaling over $2 million, with two proposals pending. Anchoring this collaboration has been a P20 from NCI (CA91372). These research efforts have focused on differences among African Caribbean immigrant populations in Brooklyn and North Manhattan (including Dominican, Haitian, and English-speaking Caribbeans) and US-born African Americans and European Americans. 
The research includes behavioral, cultural, lifestyle, and biological/genetic differences that may relate to cancer-related health disparities. In this application, we propose to build upon the existing partnership and use it as a platform for a broader, more comprehensive study of the same issues. This partnership would meld the two institutions, with an emphasis on bringing together their complementary strengths. The PI at LIU is a well-known psychologist with extensive behavioral and survey research experience with these populations in Brooklyn. The PI at Columbia is a medical oncologist and epidemiologist with a strong record in cancer prevention and control research and a leadership position in the HICCC. HICCC will provide access to its core facilities, especially the Biostatistics Core, while LIU will provide its expertise in survey and behavioral research. The proximity of the two institutions will permit frequent seminars and workshops attended by individuals from both centers, as well as an annual retreat at each center. Students and faculty at each will also have access to courses and lectures at both institutions. Equally important will be programs designed to provide experience for minority students and faculty in cancer research, with the opportunity for students from LIU to obtain admission and fellowships to Columbia programs, illustrated by a minority predoc from LIU who will have a T32 postdoc at Columbia. Two projects and four pilots are part of the U54 program. There will be an annual competition for funding for the following year; proposals will be reviewed by external reviewers, as was successfully done in our P20. Ongoing and proposed projects and pilots will be discussed at a monthly workshop alternating between campuses, which statisticians, data managers, and methodologists will attend to provide constructive discussion. 
A representative from the University of the West Indies (UWI) will attend annual EAB meetings and, via videoconference, quarterly internal steering committee meetings, with long-term possibilities for dual-site (Brooklyn/Caribbean) projects. This partnership has a unique study population, a successful existing relationship, and an emphasis on population science research. Case Comprehensive Cancer Center Support Grant Case Western Reserve University 10900 Euclid Ave Cleveland, OH 44106-7015 The Case Comprehensive Cancer Center (Case CCC), now in its 17th year, provides leadership and oversight for basic cancer research, therapeutic and non-therapeutic cancer clinical trials, prevention, control and population research, as well as community outreach for the major affiliate institutions of Case Western Reserve University: the Case School of Medicine, University Hospitals of Cleveland, and the Cleveland Clinic. Located in Cleveland and serving the 3.8 million people of Northern Ohio, members of the Cancer Center manage over 7,000 new cases per year with a high rate of clinical trial access and accrual, operating under a single protocol development and review system, data safety management plan, and a coordinated clinical trials operation. Since the last competitive renewal application, the Center has increased its NCI funding by more than 53%, and more than doubled its total peer-reviewed funding. The Center also accrued 877 patients to therapeutic clinical trials in 2005. Significant institutional commitment to Center development resources, faculty recruitment, shared resources, and space assures the Center’s continued success and its dynamic approach to multidisciplinary cancer research and therapeutics. The Case CCC has 9 Scientific Programs, 17 shared resources including 6 that are new, and a clinical and behavioral cancer research infrastructure that prioritizes innovative translational research and investigator-initiated clinical trials that cut across the Scientific Programs. 
These programs include Cancer Genetics; Cell Proliferation and Cell Death; Radiation and Cellular Stress Response; Molecular Mechanisms of Oncogenesis; GU Malignancies (new); Stem Cells and Hematologic Malignancies; Developmental Therapeutics; Cancer Prevention, Control and Population Research; and Aging-Cancer Research (new). The new Shared Resources include Imaging Research; Proteomics; Hybridoma; Transgenic & Targeting; Translational Research; and the Practice-Based Research Network. Each of these new Shared Resources is fully operational, supporting the cancer research of multiple members across programs and providing a critical platform for multidisciplinary and interdisciplinary research. This scientific organization and infrastructure furthers the mission of the Case CCC: to improve the prevention, diagnosis, and therapy of cancer through discovery, evaluation, and dissemination that together reduce cancer morbidity and mortality in Northern Ohio and the Nation. Spin Probes in Semipermeable Nanospheres: EPR spectroscopy & imaging of tumor pH Ohio State University 1960 Kenny Road Columbus, OH 43210 The overall goal of this project is to develop new functional EPR probes of enhanced stability for in vivo EPR spectroscopy and imaging of pH, one of the most important parameters in the biochemistry of living organisms. pH-sensitive nitroxyl radicals have been developed previously by the P.I. and colleagues but often suffer from insufficient stability in living tissues. In this project two different strategies will be used to develop paramagnetic probes with in vivo stability, based on the original idea of constructing nano-Sized Particles with Incorporated Nitroxides, or nanoSPINs. The semipermeable membrane of the nanoSPINs will shield the sensing nitroxides from biological reductants while allowing free penetration of the analyte, H+. 
This will fill a niche between fluorescent pH probes, which have enabled advances in cellular and subcellular detection, and NMR/MRI, which has enabled applications in living animals and humans but often suffers from a lack of sensitivity (1,000-fold or more below EPR) and specificity. The specific aims are: (SA1) To develop effective approaches for the design of pH-sensitive nanoSPINs. Two alternative strategies for the incorporation of nitroxides into semipermeable nanospheres will be used, namely incorporation into phospholipid liposomes and into polyamide capsules. (SA2) To define the spectroscopic and physicochemical characteristics of pH-sensitive nanoSPINs. Quantitative characterization of the obtained nanoSPINs is crucial both for optimization of the preparation procedures and for the efficiency of their further applications. (SA3) To apply in vivo EPR measurement of extracellular pH (pHe) in PyMT tumor-bearing mice using the developed nanoSPINs. Measurement of pHe in PyMT mammary tumors in living mice using the developed pH-sensitive nanoSPINs will provide new insights into related biochemical processes, including a better understanding of the observed anti-tumor activity of Granulocyte Macrophage Colony-Stimulating Factor (GM-CSF), a therapeutic approach which is currently of much interest. The results may provide an opportunity for the design of other corresponding therapeutic approaches. In summary, the success of this project may have a significant impact on the future of functional in vivo EPR spectroscopy and bioimaging applications in medicine. This project will develop pH-sensitive paramagnetic probes of enhanced stability based on encapsulation of nitroxides into semipermeable nanospheres. These probes, termed nanoSPINs, will allow in vivo EPR spectroscopy and imaging of pH. 
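The pH readout of such nitroxide probes ultimately rests on the protonation equilibrium of the radical (R + H+ ⇌ RH+): the fraction of protonated probe recovered from the EPR spectrum maps to pH through the Henderson-Hasselbalch relation. A minimal numerical sketch of that conversion (the pKa value below is a hypothetical placeholder, not a property of any specific nanoSPIN probe):

```python
import math

# Henderson-Hasselbalch: pH = pKa + log10([R] / [RH+]).
# If the EPR spectrum resolves the protonated (RH+) and unprotonated (R)
# forms of the probe, the protonated fraction f = [RH+] / ([R] + [RH+])
# measured from the spectrum gives pH directly.
# NOTE: the pKa used below is a hypothetical placeholder, not a measured value.

def ph_from_protonated_fraction(f: float, pka: float) -> float:
    """Convert the spectrally measured protonated fraction into pH."""
    if not 0.0 < f < 1.0:
        raise ValueError("fraction must be strictly between 0 and 1")
    return pka + math.log10((1.0 - f) / f)

# Half-protonated probe: pH equals the probe's pKa by definition.
print(ph_from_protonated_fraction(0.5, pka=6.9))  # 6.9
```

A consequence of this relation is that a probe reports pH with useful sensitivity only within roughly one unit of its pKa, which is why probe pKa must be matched to the expected tumor pHe.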
The experiments using pH-sensitive nanoSPINs in PyMT mammary tumors in living mice are planned to contribute to the understanding of the mechanisms of extracellular acidosis in solid tumors, to use extracellular pH to monitor tumor progression and thus evaluate the efficacy of anti-tumor drugs, and to provide opportunities for designing corresponding therapeutic approaches. Development of Oncomine Professional as a Platform for Biopharmaceutical Research Compendia Bioscience, Inc. Floor 2 Ann Arbor, MI 48104 DNA microarray studies, largely sponsored by the NIH and other granting agencies, have generated a wealth of data uncovering the complex gene expression patterns of cancer. Currently, however, there is no unifying organizational or bioinformatics resource to integrate the myriad independent observations into a single, global, computable environment. Such a resource would not only provide wide access to data from individual studies, but would also provide an opportunity to apply advanced analysis techniques to the aggregated data. In the absence of such a resource, the majority of cancer molecular profiling data remains severely under-utilized by both the academic cancer research community and the pharmaceutical and biotechnology companies who could use these data to aid their efforts to develop new biomarkers and therapies. We propose to develop a commercial-scale solution for cancer molecular profiling research to address this problem. The solution builds upon the prototype of Oncomine developed at the University of Michigan, which uses a data pipeline, a data warehouse, an analysis engine, and a web interface to deliver human cancer genomic data in an intuitive platform to scientists and clinicians. The specific aims in Phase I of this proposal are to: 1. Modify the Academic Data Pipeline to Support Commercial Operations. 2. Re-host and re-structure the Oncomine Database. 3. 
Develop a commercial technical operating model for the Oncomine Web Application. In Phase II of this proposal we will: 1. Develop a controlled cancer genomics data pipeline to support the rapid and proactive collection, standardization and analysis of heterogeneous cancer genomics data from repositories, academic laboratories and pharmaceutical companies. 2. Develop a scalable and secure cancer genomics data warehouse to support the storage and retrieval of public and proprietary data. 3. Develop an optimized user interface to support cancer drug discovery and development. The result of this work will be a fully integrated, end-to-end platform for providing publicly funded research results to the commercial sector, with the goal of utilizing those data to develop new diagnostic and therapeutic approaches for treating cancer. The Oncomine prototype is already broadly accepted in academia, and has been verified as a high-utility research tool by more than 10,000 non-profit users. Since 2006 Compendia has worked to establish the commercial merit of Oncomine; as a result, tens of thousands of valuable high-throughput experiments are now being utilized by several of the world’s top pharmaceutical companies. However, additional funding is required to transition Oncomine from an academic tool to a commercial platform, and to realize the full commercial potential of this approach to advance research and save lives. Cancer is a leading cause of mortality, and is responsible for one in every four deaths in the United States. In recent years global gene expression technologies have generated important new information about the molecular mechanisms underlying cancer by revealing specific aberrations in genes, proteins, and signaling pathways. 
This proposal seeks funding to provide a platform for aggregating, analyzing, and presenting these genomic data to drug development companies, with the goal of optimizing the clinical usefulness of cancer genomic data for drug discovery and development. Pharmacogenomics of Childhood Leukemia (ALL) St. Jude Children’s Research Hospital Memphis, TN 38105 Despite substantial progress in the past two decades, cancer remains the leading cause of death by disease in US children between 1 and 15 years of age. Acute lymphoblastic leukemia (ALL) is the most common childhood cancer, and cure rates today approach 80%. Unfortunately, 20% of children with ALL are not cured by current therapy, making the number of cases of relapsed ALL greater than the total number of new cases of most childhood cancers. Previous work has established that de novo drug resistance is a primary cause of treatment failure in childhood ALL. However, the genomic determinants of such resistance remain poorly defined. We have recently identified a number of new genes that are expressed at significantly different levels in B-lineage ALL cells exhibiting de novo resistance to widely used antileukemic agents (prednisolone, vincristine, asparaginase, daunorubicin), and their pattern of expression was also significantly related to treatment outcome. We now propose three research aims that extend our prior findings. The first scientific aim is to identify genes conferring de novo resistance of childhood ALL to the widely used thiopurines, mercaptopurine and thioguanine. This will be the first genome-wide analysis of genes conferring thiopurine resistance and will provide important new insights into whether these represent distinct antileukemic agents. The second aim is to identify genes in T-ALL that confer de novo resistance to the four agents we have previously studied in B-lineage ALL (prednisolone, vincristine, asparaginase, daunorubicin) and the two thiopurines. 
This will yield pharmacogenomic insights into why T-ALL has a worse prognosis under most treatment protocols. The final aim is to identify germline polymorphisms or epigenetic changes in the promoter regions of those genes that are differentially expressed in ALL cells exhibiting resistance to these antileukemic agents. Preliminary studies have already identified a significant relation between mRNA expression in ALL cells and the promoter haplotype structure of the first gene investigated (SMARCB1). It is important to extend these pharmacogenomic studies in a systematic way to additional genes conferring de novo drug resistance. These findings will continue to provide important new insights into the genomic determinants of treatment failure and point to novel targets for developing strategies to overcome drug resistance in childhood ALL. University of New Mexico Cancer Center Support University of New Mexico Main Campus, PreAward Albuquerque, NM 87131 The Cancer Epidemiology and Prevention Program was formally established in 1973, when the New Mexico Tumor Registry joined with 6 other population-based tumor registries to form the NCI SEER (Surveillance, Epidemiology and End Results) program. The NM SEER data provide the fundamental, population-based data for hypothesis generation in this program, and have led to a strong base of funding for research in lung, breast, skin and GI cancers. The striking differences in cancer patterns, in cancer health disparities, and in outcomes among New Mexico’s multiethnic population are under intense investigation to uncover the genetic, environmental, social, and behavioral factors that account for these patterns and disparities. In addition, the program’s community-based research and outreach in cancer education, screening, and prevention among rural, American Indian and Hispanic populations work toward correcting those disparities. 
Led by co-directors Marianne Berwick and Steven Belinsky, the Cancer Epidemiology and Prevention Program joins 23 full members, 2 members with secondary appointments, and 5 associate members with primary appointments in 5 Departments within the UNM School of Medicine and College of Pharmacy, the Lovelace Respiratory Research Institute, and the Albuquerque Veterans Administration Medical Center. The Cancer Epidemiology and Prevention Program has four major scientific goals that cross the organ-based themes of lung, breast, skin and gastrointestinal cancers: (1) to identify the genetic, epigenetic, environmental and behavioral risk factors contributing to the development and progression of cancer, particularly those cancers that disproportionately affect New Mexico’s multiethnic populations; (2) to develop biomarkers for the risk factors identified in aim 1; (3) to develop interventions for cancer prevention that target specific biochemical pathways and factors identified in aim 1, and that will be assessed using biomarkers from aim 2; and (4) to translate these interventions into community prevention, outreach, and education programs using community-based participatory research methods. The high quality of the interactive research in this Program has resulted in a large number of peer-reviewed grants and collaborative publications. The Program is supported by $10,374,531 in peer-reviewed funds (annual direct costs) from NCI, other NIH institutes, DOD and CDC. Of this, $4,225,820 (41%) is NCI funding (exclusive of SEER funding). Program members published 263 cancer-relevant, peer-reviewed articles between 2000 and 2005; 16% of those represent intra-programmatic collaboration and 4% inter-programmatic collaboration. Program members serve in national leadership roles in multiple cooperative group initiatives and on NIH review panels. 
The large number of collaborative publications, the success at obtaining peer-reviewed funding, and the national leadership roles played by Program members document the excellence of the interactive efforts of this Program. Major programmatic research accomplishments include: the identification of epigenetic events critical to risk and progression in lung cancer; the identification of disparate risk for breast cancer prognostic markers between Hispanic and non-Hispanic white women; and the demonstration of a protective role for sun exposure in melanoma survival that may be due to the metabolism of Vitamin D. These findings set new directions for research into the fundamental biology of these cancers and will help direct the establishment of biomarkers to identify high-risk individuals for intervention. Supporting New Faculty Recruitment Through BioMedical Research Core Center National Heart, Lung and Blood Institute University of Minnesota Twin Cities 450 McNamara Alumni Center Minneapolis, MN 55455-2070 The University of Minnesota Pulmonary, Allergy, Critical Care & Sleep (PACCS) Division has been systematically recruiting additional physician-scientists focused on lung injury and repair to join the tenure-track faculty. Our faculty met and identified a significant gap in our faculty research interest profile in this area: respiratory infections. This area is not only of great importance as a public health issue in the US and worldwide, it is also an area with outstanding multi-disciplinary, collaborative scientific opportunities at the University of Minnesota. Within the PACCS Division, among our NIH-funded PIs, there are 7 faculty with expertise in lung inflammation and injury. In addition, there are three academically strong Centers and programs pertinent to our proposed recruit, providing a dynamic research environment to promote scientific growth and career development. 
The Center for Infectious Disease, Microbiology & Translational Research brings together faculty from the Medicine, Pediatrics and Microbiology Departments in interdisciplinary translational research on microbial pathogenesis. The Center for Lung Science and Health provides a home for faculty and students from across the Academic Health Center and larger University with interests related to lung health and disease. Finally, the University of Minnesota has an internationally renowned Cystic Fibrosis program. While this program is outstanding in clinical care and clinical trials activity, the basic research component is less strong. Thus a major recruitment target of the PACCS Division is a physician-scientist with research focused on respiratory infections, particularly with relevance to lung injury in Cystic Fibrosis. Our proposed P30 recruit, Bryan Williams, MD, PhD, is completing his fourth year of Pulmonary, Critical Care & CF fellowship at Vanderbilt University. His research focus is on host-pathogen interactions in respiratory infections, specifically exploring the role of a polyamine precursor, agmatine, that is important in Pseudomonas infections and in biofilm formation. He obtained his Microbiology PhD under the mentorship of Dr. Arnie Smith studying Haemophilus infections, and his post-doctoral fellowship research has been supervised by Dr. Timothy Blackwell. Dr. Williams' research relates directly to his clinical interest in CF-related lung disease, enabling convergence of his research and clinical programs. The recruitment of Bryan Williams, MD, PhD will add the new dimension of expertise in respiratory infections to the PACCS Division's research. It will greatly augment basic research in the Cystic Fibrosis Center program and will provide a research bridge between the Center for Lung Science and Health and the Center for Infectious Disease, Microbiology & Translational Research.
Dr. Williams' research brings an innovative approach to understanding and decreasing Pseudomonas infection in CF patients.

Cell Based Therapy for Lung Disease
National Heart, Lung and Blood Institute
National Jewish Health, 1400 Jackson Street, Denver, CO 80206

We propose to build a new paradigm for advancing and transforming patient care through development of cell-based therapies for human lung disease. Analysis of acute lung injury in mice indicates that epithelial damage can presage loss of alveolar structure and function. These data support the hypothesis that cell-based therapy focused on replacement of the damaged epithelium can ameliorate the morbidity and mortality associated with high-risk diagnoses and progression to acute lung injury. Our analysis of lung epithelial stem and facultative progenitor cells suggests that the latter cell type exhibits optimal characteristics for replacement of injured epithelial cells as well as restoration of critical homeostatic functions. Based on these studies we propose to use competitive repopulation to test the hypothesis that facultative progenitor cells can repopulate the injured airway or alveolar epithelium in the context of acute lung injury. These hypotheses will be tested using functionally distinct populations of human lung facultative progenitor cells: basal cells and alveolar type II cells. These cell types are known to maintain and regenerate the normal bronchial and alveolar epithelial compartments. Acute and progressive aspects of acute lung injury will be represented using a novel mouse model that recapitulates the morbidity and mortality of acute lung injury on post-treatment days 5 and 10.
Previously developed cell isolation methods and this unique mouse model will be combined to determine: (1) the characteristics of the most promising target patient population for cell-based therapy; (2) the best cell type for treatment of early and late acute lung injury; and (3) preclinical parameters including optimal route, dose, and timing of treatment. Successful completion of this study will propel the field of cell replacement therapy for lung disease beyond the planning stage and into a position appropriate for initiation of clinical trials. The limitations of previous analyses will be overcome through implementation of an appropriately powered analysis of interactions between time, cell type, route, and dose. Trials for refinement of the treatment protocol and evaluation of consistency among donor cell populations are advanced components of the study design. Outcomes will be evaluated through quantitative measurements that are germane to pulmonary function. This novel intervention strategy has the potential to ameliorate morbidity and mortality in the almost 200,000 Americans who suffer from acute lung injury associated with trauma, aspiration, or infection each year. Among these patients there are nearly 75,000 deaths per year. This benefit will be achieved through development of a new treatment strategy and through facilitation of research focused on engineering approaches to lung regeneration or replacement. Thus, focused evaluation of the fundamental parameters highlighted in this preclinical trial will advance the emerging field of cell-based therapy and regenerative medicine approaches to treatment of acute lung injury.
Development of an Asthma Research Core Center
National Heart, Lung and Blood Institute
Children's Hospital Med Ctr (Cincinnati), 3333 Burnet Ave, Cincinnati, OH 45229-3039

Asthma, a chronic inflammatory disorder of the airways, is estimated by the World Health Organization to affect 150 million people worldwide, and its global pharmacotherapeutic costs exceed $5 billion per year. Cincinnati Children's Hospital Medical Center (CCHMC) provides clinical care to ~7,000 asthmatic children in the primary care and specialty clinics. Last year, over 3,000 children were treated in the CCHMC Emergency Department with the primary diagnosis of an acute asthma exacerbation, and 885 patients (29.5%) were admitted to the hospital for management of acute asthma exacerbations. CCHMC has invested considerable resources to promote asthma research, including the establishment of the Division of Asthma Research, which has partnered with the Asthma Center to create a comprehensive Asthma Program that now provides a central base for the clinical and research activities for asthma at CCHMC. Patients suffering from asthma share similar clinical symptoms, but the disease is heterogeneous in terms of phenotypes and natural history. This heterogeneity contributes to the difficulty in both studying and treating asthma. The heterogeneity in asthma is poorly understood, and the mechanisms by which genetic and environmental influences impact asthma development and asthma disease expression are largely unknown. As such, the proposed Asthma Research Core has the central goal of improving the understanding of the heterogeneity in asthma. In order to accomplish this goal, we propose 2 aims: Aim #1: To recruit or promote a new faculty member into the tenure track to develop a research program focused on a topic relevant to elucidating the mechanisms contributing to asthma heterogeneity.
Aim #2: To develop a pilot research program in Asthma Research to support new faculty in the tenure track in the areas outlined above. The frequency of absent or incomplete efficacy in asthma treatment is as high as 70%, due to the inherent heterogeneity in asthma phenotypes caused by multiple genetic and environmental influences. The central goal of this proposal is to improve the understanding of the heterogeneity in asthma. Improved understanding of asthma phenotypes will enable informed personalized treatment plans and will likely result in a substantial reduction in asthma expenditures.

Genome-Wide Association and Exon Sequencing Study in IPF
National Heart, Lung and Blood Institute
University of Chicago, 5801 S Ellis Ave, Chicago, IL 60637

Idiopathic pulmonary fibrosis (IPF) is a progressive, untreatable lung disease. IPF has eluded causal genetic determinants that may provide targets for novel therapeutic approaches. The objective of this proposed research is to identify causal genetic variants contributing to risk of IPF using a genome-wide association study (GWAS) panel in a large compiled cohort. Each DNA sample is accompanied by detailed phenotypic data. To meet this objective we have the following specific aims: Specific Aim 1. To establish a combined cohort of over 700 IPF patients and perform a GWAS in 450 subjects with IPF. The hypothesis to be tested is that inheritable genetic factors affect individual susceptibility to IPF. To accomplish this we will establish clinically meaningful definitions for disease phenotypes in a merged, manually and electronically curated database of all 700 collaborator IPF patient sample sets, then perform a complete GWAS using the Affymetrix SNP 6.0 GeneChip(R) in 450 IPF patients and deposit the GWAS genotype and phenotype data in the NIH dbGaP repository. Specific Aim 2.
Conduct both standard and novel analyses of genetic variation by phenotype severity and rapidity of progression. The hypothesis to be tested is that inheritable genetic factors influence prognosis and severity of the disease. To accomplish this we will determine SNPs associated with IPF utilizing publicly deposited genotyped control GWAS data, evaluate copy number polymorphisms via available probes, test for association with IPF phenotypes, and determine if the associated variants differ in frequency between subjects with "rapidly progressive" IPF with high mortality versus those with "slow" IPF, by severity grade or other clinical outcome measures. Specific Aim 3. Perform exon-wide targeted DNA sequencing and genotyping to validate the GWAS-associated genetic variants and to discover functional variations in Caucasians and African Americans with IPF. The hypothesis to be tested is that exon-wide sequencing of subjects with different ethnic and racial backgrounds and severity cohorts will allow the identification of causal/functional variants associated with IPF. We will replicate the most significant associations with a selective SNP array in a replicate IPF patient cohort of 200 subjects, then perform exon-wide sequencing of 48 genes in 160 Caucasian and African American subjects using Illumina 454 technology and conduct a statistical analysis of the exon variants discovered in sub-aim b. We expect that completion of a genome-wide association study using clinically meaningful phenotypes, coupled to exon-wide re-sequencing, will lead to identification of the genes and the specific genetic variants that contribute to the development of IPF. This can then be used as a guide to new approaches for preventing and treating this deadly disease.
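The core statistical step of a case/control GWAS like the one proposed above is a per-SNP association test between genotype and disease status. The following is a minimal illustrative sketch, not the study's actual analysis pipeline: it codes genotypes as minor-allele counts (0/1/2) and applies a chi-square test to each SNP's 2x3 genotype contingency table; the data and function name are hypothetical.

```python
import numpy as np
from scipy.stats import chi2_contingency

def snp_association(genotypes, status):
    """Per-SNP case/control association test (illustrative sketch).

    genotypes: (n_subjects, n_snps) array of minor-allele counts (0, 1, 2)
    status:    (n_subjects,) array, 1 = case, 0 = control
    Returns one chi-square p-value per SNP from its 2x3 genotype table.
    """
    pvals = []
    for j in range(genotypes.shape[1]):
        g = genotypes[:, j]
        # 2x3 table: rows = control/case, columns = genotype 0/1/2
        table = np.array([[np.sum((status == s) & (g == k)) for k in (0, 1, 2)]
                          for s in (0, 1)])
        # Drop genotype columns absent in both groups to keep the test valid.
        table = table[:, table.sum(axis=0) > 0]
        if table.shape[1] < 2:          # monomorphic SNP: no test possible
            pvals.append(1.0)
            continue
        _, p, _, _ = chi2_contingency(table)
        pvals.append(p)
    return np.array(pvals)

# Toy example: 200 subjects, 3 SNPs; SNP 0 is artificially enriched in cases.
rng = np.random.default_rng(0)
status = np.r_[np.ones(100, dtype=int), np.zeros(100, dtype=int)]
geno = rng.integers(0, 3, size=(200, 3))
geno[status == 1, 0] = rng.choice([1, 2], size=100)  # inflate minor allele in cases
p = snp_association(geno, status)
```

In a real analysis the study would additionally control for population stratification and apply a genome-wide significance threshold; this sketch only shows the single-marker test itself.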
New Faculty Recruitment to Enhance Resources in Hypertension Research
National Heart, Lung and Blood Institute
Tulane University of Louisiana, 6823 St Charles Ave, New Orleans, LA 70118

The mission of the Tulane Hypertension and Renal Center of Excellence (THRCE) is to stimulate research activities related to cardiovascular, kidney, and hypertension-related diseases; it is a multidisciplinary Center with members from clinical and basic science departments. This application proposes to augment and expand biomedical research efforts in the area of cardiovascular and hypertension-related diseases by hiring one newly independent investigator (NII) and providing a start-up package and all resources and support needed for the NII to develop a competitive research program. We propose to appoint Romer A. Gonzalez-Villalobos, MD, PhD, a postdoctoral fellow, as a tenure-track assistant professor in the Department of Physiology. With this plan, the center seeks to provide the new faculty member with an enriched environment, and to enhance the center's research resources by creating a new core for cardiovascular and renal mouse phenotyping. In this regard Dr. Gonzalez-Villalobos is uniquely qualified to perform phenotyping studies in mice by virtue of his academic background, experience and technical training. For the pilot project Dr. Gonzalez-Villalobos has formulated the hypothesis that during Ang II-induced hypertension, intrarenal ACE-derived Ang II formation is required in order to augment Ang II levels in the kidney, which in turn increase sodium and water retention, increase miR-21 expression, and lead to the progressive development of high blood pressure and renal injury. Experiments will be performed in tissue-specific ACE knockout mice in order to address this hypothesis.
The plan for fostering and monitoring the NII includes providing the candidate with the requisite infrastructure, equipment and technical support; establishing an atmosphere conducive to a strong collaborative network; providing a forum for critical evaluation of experimental design, results, papers and grant proposals; and encouraging the candidate's attendance and participation in national and international meetings as well as involvement in scientific societies and active pursuit of funding. The proposed plan will provide the means to develop and support the new faculty member in his quest to improve our understanding of the mechanisms participating in angiotensin II synthesis in the kidneys and its role in the development of hypertension. This is important because angiotensin II is a hormone that plays a major role in the control of renal function, the development of hypertension and kidney damage.

Advancing Physical Activity Measurement Using Pattern Recognition Techniques
National Heart, Lung and Blood Institute
University of Massachusetts Amherst, 70 Butterfield Terrace, Amherst, MA 01003-9242

In October 2008 the US Department of Health and Human Services issued the first-ever federally mandated Physical Activity Guidelines for Americans. The Guidelines reflect the view of the Physical Activity Guidelines Advisory Committee (PAGAC) and are based on an extensive review of the scientific literature on physical activity (PA) and health. In their report, the PAGAC points out the limited knowledge of the dose-response relationship between PA and health, and identifies poor measures of PA exposure as a major contributing factor to this gap in knowledge. Our application directly addresses this issue by applying innovative technologies to measure PA dose in a free-living environment. We will use these technologies to examine whether habitual PA performed outside of purposeful exercise influences biomarkers of cardiovascular health.
Although insufficient PA clearly correlates with an increased risk for cardiovascular disease (CVD), research evidence is equivocal regarding the effects of training on CVD risk factors (e.g., insulin action, triglycerides, blood pressure, and cholesterol). Research suggests increases in sedentary behavior may negate the benefits of training; however, this idea has not been explored experimentally. Our application will consider habitual free-living PA as a possible mechanism mediating the relationship between training and risk factors for cardiovascular disease. In order to elucidate the relationship between PA and biomarkers of cardiovascular disease risk, it is critical that valid, objective measures are used to quantify PA. We propose to use novel analytic techniques known as artificial neural networks (ANN) to process accelerometer-based measurements of PA. The first part of this project (Aim 1) will examine the ANN's sensitivity to changes in PA dose by applying the ANN technique to distinguish three distinct patterns of habitual PA: Sedentary, Moderately Active, and Very Active. These three conditions represent common activity patterns that impact health. Accurately assessing changes to habitual PA levels that are relevant to public health will advance the field by further establishing a technique for application in population surveillance research and detection of changes in PA consequent to an intervention. The second part of this project (Aim 2) will apply the ANN methodology to examine the effect of free-living activity and inactivity levels, performed outside of training, on insulin action, blood pressure, triglycerides, cholesterol, and cardiorespiratory fitness following a 12-week exercise training trial in previously sedentary individuals with an elevated risk for CVD. Results from this study have the potential to impact how clinical exercise trials are conducted (e.g.
require objective monitoring of PA outside of an exercise training trial) and how exercise is prescribed (e.g., reducing sedentary time AND maintaining sufficient PA). The Physical Activity Guidelines Advisory Committee advocates improved measures of physical activity exposure in order to elucidate the relationship between physical activity dose and health. To address this challenge we will apply and validate innovative accelerometer-based technologies for measuring physical activity, assess their sensitivity in detecting changes in the dose of physical activity, and monitor activity outside of a training program designed to improve cardiorespiratory fitness and biomarkers of cardiovascular disease risk. Through improved measures of physical activity this project will promote a better understanding of how the dose of physical activity affects selected health outcomes.

ECG-derived cardiopulmonary coupling biomarkers of sleep, sleep-breathing, and ca
National Heart, Lung and Blood Institute
Beth Israel Deaconess Medical Center, Boston, MA 02215

The traditional approach to quantifying sleep and sleep-respiration relies on manual or computer-assisted scoring of 30-second epochs, tagging of discrete fast phasic electroencephalographic events as arousals, and thresholds to identify pathological breathing. The scoring rules are usually reliant on a single physiological stream to make a determination, such as arousals from the electroencephalogram. However, arousing stimuli reliably induce simultaneous transient changes in numerous physiological systems: electrocortical, respiratory, autonomic, hemodynamic, and motor. These multiple linked physiological systems seem to show important patterns of coupled activity that current staging/scoring systems do not recognize. The respiratory chemoreflexes track oxygen (O2) and carbon dioxide (CO2) levels in the blood.
Disease states can alter the set-point or response slope of the respiratory chemoreflexes, such that they are less (e.g., obesity hypoventilation syndrome) or more (e.g., central sleep apnea) sensitive to O2 and CO2 fluctuations. An ability to quantify and track the respiratory chemoreflexes during sleep could have clinical use, as (1) in certain conditions like congestive heart failure, chemoreflex sensitivity is reliably increased, correlates with disease severity and outcomes, and contributes to the high prevalence of sleep-disordered breathing; and (2) heightened respiratory chemoreflexes may contribute to obstructive sleep apnea severity, be associated with induction of central apneas when continuous positive airway pressure (CPAP) is used for treatment, and possibly impair long-term efficacy and tolerance. Patients with obstructive sleep apnea who fail CPAP therapy due to induction of central apneas and periodic breathing (called "complex sleep apnea") are not otherwise distinguishable from CPAP-responsive patients. A biomarker that can track chemoreflex modulation of sleep respiration will provide a new view of short- and long-term dynamic sleep physiology with important clinical implications. The approach proposed here is to analyze coupled sleep oscillations to mathematically extract state characteristics and modulatory influences. The fundamental idea is that mapping common themes encoded within multiple (2 or more) physiologically distinct but biologically linked signal streams (such as electrocortical, autonomic, respiratory and motor) yields evidence of deeper regulatory processes not evident by the current approach of scoring/staging sleep with electroencephalogram or airflow patterns alone. We have developed a method that needs only a single-channel electrocardiogram (ECG), is automated, can have parametrically varied detection thresholds, and is readily repeatable.
From the ECG, we extract heart rate variability (HRV) and ECG R-wave amplitude fluctuations associated with respiratory tidal volume changes (the ECG-derived respiration, EDR). The next step is to mathematically combine the HRV and EDR to generate the cross-product coherence of cardiopulmonary coupling, which yields the sleep spectrogram. The sleep spectrogram shows high-frequency (0.1-1 Hz), low-frequency (0.01-0.1 Hz) and very-low-frequency (0-0.01 Hz) coupling spectra that show spontaneous shifts between states in health and disease. High-frequency coupling (HFC) is the biomarker of stable and physiologically restful sleep, low-frequency coupling (LFC) is unstable or physiologically aroused sleep, and very-low-frequency coupling (VLFC) is wake or REM sleep. Health is dominated by HFC; diseases such as sleep apnea by LFC. A subset of LFC that correlates with apneas and hypopneas is elevated LFC (e-LFC). The stronger the chemoreflex modulatory influence on e-LFC, the more likely the coupling spectral dispersion narrows, yielding narrow-band e-LFC (i.e., metronomic oscillations with a relatively fixed frequency). Narrow-band e-LFC is induced by high altitude and heart failure, and predicts central apnea induction during positive pressure titration. The development and progression of heart failure is associated with fragmented sleep and heightened chemoreflex sensitivity. We predict that HFC will decrease and narrow-band e-LFC will emerge and increase with worsening heart failure. These spectral biomarkers should change dynamically with heart failure progression or regression, viewing cardiac function through the window of sleep. Our experiments will take the following approach. We will establish the hemodynamic correlates of spectrographic stable and unstable sleep and the night-to-night stability/variability of the ECG-derived biomarkers in adults and children in health, and in those with sleep apnea.
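The coupling computation just described (cross-spectral power of HRV and EDR weighted by their coherence, integrated over the three named bands) can be sketched in a few lines. This is a simplified illustration on synthetic, uniformly resampled series, not the authors' validated implementation: the band edges follow the abstract, while the function name, sampling rate, and window length are our own assumptions.

```python
import numpy as np
from scipy.signal import coherence, csd

def cpc_band_powers(hrv, edr, fs=2.0, nperseg=256):
    """Cardiopulmonary coupling sketch: coherence x |cross-power| of the
    heart-rate-variability (hrv) and ECG-derived-respiration (edr) series,
    summed over the three bands named in the abstract."""
    f, coh = coherence(hrv, edr, fs=fs, nperseg=nperseg)
    _, pxy = csd(hrv, edr, fs=fs, nperseg=nperseg)
    cpc = coh * np.abs(pxy)                      # coupling spectrum
    bands = {
        "HFC":  (0.1, 1.0),                      # stable, restful sleep
        "LFC":  (0.01, 0.1),                     # unstable/aroused sleep
        "VLFC": (0.0, 0.01),                     # wake or REM
    }
    return {name: cpc[(f >= lo) & (f < hi)].sum()
            for name, (lo, hi) in bands.items()}

# Synthetic example: both series share a 0.25 Hz respiratory oscillation,
# so the coupling should concentrate in the HFC band.
rng = np.random.default_rng(1)
t = np.arange(0, 600, 1 / 2.0)                   # 10 minutes at 2 Hz
resp = np.sin(2 * np.pi * 0.25 * t)
hrv = resp + 0.3 * rng.standard_normal(t.size)
edr = resp + 0.3 * rng.standard_normal(t.size)
powers = cpc_band_powers(hrv, edr)
```

In practice the HRV and EDR series come from detected R-waves and must be resampled onto a uniform grid before spectral estimation; the spectrogram then repeats this computation over sliding windows through the night.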
Next, we will use a model of altitude-induced periodic breathing, which is relatively pure chemoreflex-mediated sleep apnea, to adjust the spectrogram's parameters to allow the best sensitivity and specificity for detecting chemoreflex influences on sleep respiration. We will in parallel track the progress of heart failure patients from a hospitalization episode for 6 months, attempting to show that reductions in HFC and the emergence or increase of narrow-band e-LFC are sentinel biomarker events that predict worsening of heart failure (an early warning system). Finally, we will assess clinical outcomes based on spectral phenotyping of an archived data set, the Apnea Positive Pressure Long-term Efficacy Study. In the 2-year duration of the award, we will validate a unique biomarker of sleep, sleep-breathing, and cardiovascular biology that can be applied immediately to improve health outcomes.

Development of a Cardiovascular Surveillance System in the CVRN
National Heart, Lung and Blood Institute
Kaiser Foundation Research Institute, Oakland, CA 94612

This project will establish a surveillance system for cardiovascular disease in approximately 11 million health maintenance organization (HMO) members. The surveillance system will be initially established for coronary heart disease (CHD), heart failure (HF), and stroke. The broad goals of this project are to: 1. Establish a surveillance system for coronary heart disease (CHD), heart failure (HF) and stroke in the 15 centers of the National Heart, Lung and Blood Institute (NHLBI)-funded Cardiovascular Disease Research Network, including therapeutic interventions, post-event outcomes and important risk factors and confounders. 2.
Work collaboratively to establish and implement an aggregate database incorporating coronary heart disease (CHD), HF, and stroke data from all 15 CVRN sites that can be used by CVRN investigators and other qualified research scientists to conduct studies related to comparative effectiveness and health disparities. 3. Identify standard criteria for coronary heart disease, heart failure and stroke clinical outcomes, as well as all components noted in goal #1, to enable data aggregation. 4. Determine the most recent 10-year trends in the rates of acute myocardial infarction and stroke hospitalization and their relationship to trends in risk factors, co-morbidities, therapeutic interventions, medications, and diagnostic modalities. 5. Demonstrate that the data can be used to address research questions regarding comparative effectiveness and novel methods of monitoring health disparities, areas that have been identified as RC2 topics by NHLBI. This project will result in a surveillance system in a consortium of 15 geographically diverse health plans that provide health care to about 11 million people, nearly 4% of the U.S. population. This surveillance system will be significantly larger than other existing cardiovascular surveillance efforts in the U.S. and will include a population that is diverse in race/ethnicity and sociodemographic characteristics. The surveillance system will include, for CHD, HF, and stroke, electronically available data on risk factors, co-morbidities, prescription medications, therapeutic interventions, laboratory testing, and physician and patient characteristics.
These data can be utilized to provide timely surveillance reports for CHD, HF, and stroke; to provide a comprehensive description of a patient's longitudinal course both prior and subsequent to development of CHD, HF, and stroke; and to enable research questions to be addressed that assess the relationship of these variables to the course of disease, as well as research questions relating to comparative effectiveness and to disparities in medical treatment and outcomes.

Novel Imaging to Predict Cardiovascular Events in Diabetes
National Heart, Lung and Blood Institute
Mount Sinai School of Medicine of NYU, New York, NY 10029-6574

Novel non-invasive imaging tests have been developed to characterize atherosclerotic plaque burden and metabolic activity (inflammation). However, the value of these atherosclerosis imaging technologies for predicting coronary heart disease (CHD) and stroke events has not been evaluated in prospective studies. Proposed is a study to conduct noninvasive imaging and longitudinal follow-up in a high-risk cohort of patients with diabetes, utilizing the recruitment network, events follow-up protocol and adjudication committee assembled by the NHLBI-sponsored FREEDOM Trial (Future REvascularization Evaluation in patients with Diabetes mellitus: Optimal management of Multivessel disease; HL071988). Specific aims of our study are (1) to determine the association of atherosclerotic plaque burden with the risk of CHD and stroke events and all-cause mortality; (2) to determine the association between traditional CHD risk factors and atherosclerotic plaque burden; and (3) to determine the association between plaque burden and plaque inflammation. In order to accomplish these aims, we will recruit 380 diabetic patients with multi-vessel coronary disease from eleven greater New York metropolitan area hospitals.
Patients will complete a baseline study visit at Mount Sinai School of Medicine (MSSM) to assess plaque burden and plaque inflammation by magnetic resonance (MR) (contrast and non-contrast) and fluorodeoxyglucose (FDG)-positron emission tomography (PET) imaging. Additionally, questionnaires will be administered, a physical examination conducted, and blood specimens collected to measure hemostatic and inflammatory markers. Patients will be actively followed for 36 months through annual in-person study visits and bi-annual telephone follow-up. When events (mortality, non-fatal MI and non-fatal stroke) are identified, hospital charts and death certificates will be reviewed by an adjudication committee blinded to the baseline measurement values. Changes in plaque burden and inflammation will be assessed through MR and FDG-PET imaging, respectively, at the 36-month follow-up visit, again at MSSM. The proposed study will provide the unique opportunity to assess atherosclerotic plaque burden as a predictor of clinical events in a high-risk patient cohort. Data from this study will not only advance our understanding of the aggressive atherosclerotic process associated with diabetes but will also provide us with a strategy to combine novel noninvasive approaches to better follow the effects of medical and revascularization therapy in the diabetic patient. It is our expectation that data from the proposed study will be utilized to evaluate and improve existing treatment and help guide the development of effective new therapies aimed at reducing CHD and stroke events and improving survival in high-risk diabetic patients.

Pneumocystis jirovecii and macrophages in COPD
National Heart, Lung and Blood Institute
University of Kentucky, 109 Kinkead Hall, Lexington, KY 40506-0057

Airway inflammation, airway remodeling, colonization with microorganisms, and parenchymal destruction are hallmarks of chronic obstructive pulmonary disease (COPD).
In addition to cigarette smoking, infectious pathogens likely contribute to the decline in pulmonary function in COPD patients. The inflammatory process in patients with COPD displays a distinct pattern of inflammatory mediators and immune cells that is similar to the pattern seen in response to Pneumocystis jirovecii (PC). Evidence has now emerged on the importance of macrophage phenotype in COPD patients. Macrophages account for the majority of inflammatory cells recovered by bronchoalveolar lavage from COPD patients and are localized to sites of alveolar destruction. Further, the IL-4/IL-13 alternatively activated macrophage phenotype (AAM) has been implicated in several chronic lung diseases. We propose in this study to evaluate the relationship between the AAM and PC in lungs of COPD patients in the Lung Tissue Research Consortium. In 3 Aims we will (1) correlate PC colonization with the presence of AAMs in lung tissue samples, (2) determine through immunohistochemistry how the presence of PC correlates with the precise localization of macrophage phenotype and fibrosis, and (3) determine how PC burden and AAMs correlate with clinical outcome measurements. This project will investigate a novel mechanism of pathogenesis which may provide targets for potential future therapeutic interventions for patients with COPD.

Pathological Consequences of the Plasminogen System
National Heart, Lung and Blood Institute
University of Notre Dame, 940 Grace Hall, Notre Dame, IN 46556

The long-term goal of this proposal is to identify functions and determine mechanisms of the fibrinolytic system, and its inhibitors, in physiological and pathological processes utilizing cell-based and in vivo models. The availability of mice with deficiencies of genes of the fibrinolytic system has enabled direct analyses of the role of these proteins in a number of biological events. Studies have indicated that a PAI-1 deficiency diminishes angiogenesis in tumor models.
Further, our laboratory has shown that endothelial cell (EC) signaling and function are regulated by PAI-1/LRP interactions. The current application will further elucidate the effects of PAI-1 on cell signaling pathways and determine the importance of PAI-1/LRP interactions in both cellular and physiological events. As a result of these observations, the following studies are proposed: (1) Determine the effects of a PAI-1 deficiency on murine EC JAK/STAT signaling and cell cycle progression. These studies will assess STAT and JAK expression profiles and activation status in proliferating wild-type (WT) and PAI-1-/- EC, as well as the extent of nuclear translocation of STAT. The addition of rPAI-1 and mutants will determine which functional domains of PAI-1 regulate the activation status of this pathway. Additional studies will determine effects on cell migration. Downstream effects on cell cycle progression will also be investigated. The hypothesis is that a PAI-1 deficiency will affect JAK/STAT signaling and downstream cell cycle progression, and that these effects are mediated by PAI-1/LRP interactions. (2) Characterize early- and late-stage events of cardiac fibrosis in PAI-1-/- and uPA-/-/PAI-1-/- mice. Recent studies have shown that PAI-1-/- mice develop cardiac fibrosis, which may be mediated by dysregulated uPA or by chronic activation of the Akt pathway resulting from altered PAI-1/LRP interactions. The studies proposed will initially characterize cardiac fibrosis in PAI-1-/- and uPA-/-/PAI-1-/- mice in order to differentiate, in cardiac fibrosis phenotypes, the effects of uPA activity from PAI-1 functions independent of uPA inhibition. The hypothesis is that cardiac fibrosis will be regulated by urokinase activity and other functions of PAI-1, which will be further pursued in future studies of mice expressing functional mutations of PAI-1.
Pilot Test of a Novel Behavioral Intervention on BP Control in HTN Patients
National Heart, Lung and Blood Institute
Pennsylvania State University-Univ Park, 110 Technology Center Building, University Park, PA 16802

Patients’ knowledge concerning their chronic illness has long been considered “necessary but not sufficient” to produce changes in risk-related behaviors. “Necessary” implies that patient knowledge is, therefore, a moderator of the effectiveness of behavioral interventions. However, researchers have tended to ignore patient education as a critical component of behavioral (or, for that matter, pharmacological) interventions. We propose to combine a behavioral intervention that we and others have found to be moderately effective in increasing blood pressure (BP) control in hypertensive patients - using a home BP monitor (HBPM) to obtain feedback regarding their BP control, and providing feedback to the health provider - with a systematic patient education component. We propose an intervention strategy that is meant to be usable as an adjunct to the HBPM and other interventions; one that will increase patients’ knowledge, and, we hypothesize, will therefore increase the effectiveness of the “parent” intervention (HBPM, in this case). Our proposal is for a randomized controlled trial (RCT), using a 2X2 factorial design in which we will test the effect of (1) a patient education intervention and (2) HBPM, on ambulatory BP in poorly-controlled hypertensive patients at 3 and 6 months. The education intervention is based on a technique called “Self-Paced Programmed Instruction” (SPPI), a method that has been remarkably effective at increasing knowledge concerning complex topics. Using a computer, a paragraph of content material is presented, followed by probe questions. 
When patients provide a correct response, they are immediately reinforced by positive feedback; an incorrect response loops the program to re-present the materials, this time with hints; and the subjects then re-attempt the probe questions. The loop continues until a correct answer is recorded. In this manner, every subject achieves mastery over the requisite material. We posit that medication adherence (assessed objectively) will partially mediate the ambulatory BP outcomes; and that Self-Efficacy for the self-management of HTN will mediate medication adherence; we predict that self-efficacy will be enhanced by the mastery of the HTN-related materials, and by the reduction of ambiguity, which will lead to greater confidence in the patient’s decision-making processes. We predict that the SPPI - HBPM condition will have the greatest effect on ambulatory BP, compared to the other three groups.

CRP, Diabetes, Atherothrombosis
National Heart, Lung and Blood Institute
University of California Davis, Office of Research – Sponsored Programs, Davis, CA 95618

In the previous proposal, the central hypothesis was that CRP promotes atherothrombosis by effects on both endothelial cells and monocytes. We have now executed all four aims of this proposal and have advanced the field with regards to the vascular effects of CRP. In summary, we have elucidated the molecular mechanism by which CRP inhibits eNOS (in-vitro and in-vivo), and we have documented the role of Fc-gamma receptors in the biological effects of CRP on endothelial cells, macrophages and in Wistar rats. Furthermore, we have elucidated the mechanism of CRP-induced monocyte adhesion under shear stress, and finally we have confirmed in-vivo, in Wistar rats, that CRP has effects that promote atherosclerosis including stimulation of NADPH-oxidase, superoxide, MPO release, oxidized LDL uptake, tissue factor, MMP-9 release from macrophages and decreased vasoreactivity. 
Diabetes is a proinflammatory state that is characterized by high CRP levels. However, there is a paucity of data examining the role of CRP in promoting the pro-inflammatory state in diabetes. We have shown in exciting and novel preliminary data that CRP exacerbates in-vivo the pro-inflammatory, pro-oxidant effects in the diabetic milieu (spontaneously diabetic BB rat). Thus, in this competing renewal, we wish to further explore the effects of CRP on diabetes and atherothrombosis. To this end, we are proposing two specific aims. In Specific Aim 1, we will continue to expand our exciting preliminary findings that CRP accentuates the pro-inflammatory, pro-oxidant state in the diabetic BB rat. In this model, we will confirm if CRP exacerbates in-vivo the pro-inflammatory, pro-oxidant effects in the diabetic milieu and also elucidate the molecular mechanism(s) by which CRP exerts these effects by employing in-vivo siRNA and antisense oligonucleotides to the different pathways identified. Based on findings largely from our group and others that CRP promotes a pro-coagulant phenotype, in Specific Aim 2, using the spontaneously diabetic BB rat, we will now test in-vivo the effect of CRP on thrombosis in the diabetic milieu. Also, we will elucidate the mechanism(s) by which CRP promotes atherothrombosis in the diabetic state. We believe these studies will provide further novel data in support of the hypothesis that CRP promotes atherothrombosis in-vivo and a procoagulant, pro-inflammatory phenotype in diabetes. Probing into the molecular mechanisms by which CRP augments oxidative stress and inflammation in the diabetic milieu will eventually lead to therapies targeted at reducing inflammation and oxidative stress in diabetes, resulting in a decrease in vasculopathies. 
Genetic control of gene expression during innate immune activation
National Heart, Lung and Blood Institute
University of Washington, Office of Sponsored Programs, Seattle, WA 98195-9472

Innate immune responses are induced by specific interactions between pathogen-associated molecules and Toll-like receptors (TLRs), and are critical to host defense. Recent studies have shown a role for TLR7 and TLR8 in innate immune responses to viral infection. However, it is unknown to what extent these innate immune responses are heritable and what loci might affect this heritability. Our overall hypothesis is that heritable variation exists in gene expression levels measured during an innate immune response to virus-associated molecules. We propose to study this hypothesis in the context of innate immune responses to synthetic agonists specific for TLR7 (imiquimod) or both TLR7 and TLR8 (R848). First, we will determine genome-wide heritability of R848-induced changes in gene expression using a classical twins study. We will then identify quantitative trait loci (QTL) that control heritable variation in TLR7-induced gene expression in B-lymphoblastoid cell lines (B-LCL) isolated from ‘HapMap’ trios, and we will fine-map the functional polymorphisms within these QTL in a large cohort of healthy individuals. Finally, we will apply in vitro assays of promoter function and RNA processing to understand how these polymorphisms affect gene expression. The proposed studies will identify specific genetic loci controlling heritability of TLR7/8-mediated innate immune responses and, more broadly, basic mechanisms underlying the genetic control of gene expression in environmentally perturbed cells. Results from these studies will provide novel potential markers of susceptibility for both common and emerging viral infections and will characterize a new experimental pathway for discovery of functional genetic variation affecting responses to environmental stimuli. 
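The classical twin design mentioned in the abstract above estimates heritability by comparing trait similarity in monozygotic (MZ) and dizygotic (DZ) pairs. As an illustration only - the abstract does not specify the estimator, and the numbers below are invented - Falconer's formula estimates heritability as twice the excess MZ correlation, h^2 = 2(r_MZ - r_DZ):

```python
# Toy sketch of a classical twin-study heritability estimate using
# Falconer's formula. All data are hypothetical; the study itself may
# use a different estimator (e.g., a variance-component ACE model).

def pearson_r(xs, ys):
    """Pearson correlation between paired measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def falconer_h2(mz_pairs, dz_pairs):
    """h^2 = 2 * (r_MZ - r_DZ): twice the excess MZ twin similarity."""
    r_mz = pearson_r([a for a, b in mz_pairs], [b for a, b in mz_pairs])
    r_dz = pearson_r([a for a, b in dz_pairs], [b for a, b in dz_pairs])
    return 2.0 * (r_mz - r_dz)

# Hypothetical R848-induced expression changes, one (twin1, twin2) tuple
# per twin pair; MZ pairs are made more concordant than DZ pairs.
mz = [(2.1, 2.0), (1.4, 1.5), (3.0, 2.8), (0.9, 1.0), (2.5, 2.4)]
dz = [(2.1, 1.46), (1.4, 2.01), (3.0, 2.11), (0.9, 1.56), (2.5, 2.36)]

h2 = falconer_h2(mz, dz)  # roughly 0.9 for this invented data
```

In practice, twin analyses of expression traits more often fit variance-component (ACE) models rather than this simple difference of correlations, but the intuition is the same: the more MZ pairs resemble each other beyond DZ pairs, the larger the heritable component.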
Negative regulation of platelet activity
National Heart, Lung and Blood Institute
Bloodcenter of Wisconsin, Inc., P.O. Box 2178, 638 N 18th St, Milwaukee, WI 53233

Platelets are anucleate bodies that circulate in the bloodstream and play a very important role in vascular hemostasis. Platelets circulate in a quiescent state in intact blood vessels, but they adhere to and become activated by exposed extracellular matrix in a damaged vessel. Activated platelets spread out and bind to one another (i.e., form a thrombus), so as to close up the damaged area and initiate wound healing. Excessive bleeding occurs when platelets are deficient or hypo-responsive; pathological thrombus formation, which can result in occlusion of blood vessels and cause myocardial infarction or stroke, occurs when platelets are hyper-reactive. Because the extent of platelet activation is such an important determinant of vascular pathology, it is very important to understand how platelet activation and aggregation are regulated. The platelet contains several cell surface and intracellular proteins that coordinate transmission of activating and inhibitory signals into the platelet interior, and it is the balance of stimulatory and inhibitory cues that ultimately determines the platelet activation state. Whereas much has been learned in recent years regarding the platelet receptors and signaling cascades that contribute to platelet activation, key components of which are members of the Src family of protein tyrosine kinases (SFKs), the molecules and pathways responsible for keeping platelet activation held in check remain poorly defined. We and others have previously demonstrated that Platelet Endothelial Cell Adhesion Molecule-1 (PECAM-1, also called CD31) and the SFK Lyn are negative regulators of platelet activation. 
Previous studies in our laboratory have also begun to characterize, in platelets, a pathway by which C-terminal Src kinase (Csk) is recruited to sites of SFK activity by Csk Binding Proteins (CBP), so that Csk may carry out its important role as a negative regulator of SFK activity. In particular, our preliminary studies have revealed that a member of the Downstream of kinase (Dok) family, Dok-2, is a CBP in platelets. The overall goal of this new grant application is to develop a more complete list of inhibitory molecules in platelets, to thoroughly characterize the signaling pathways in which these molecules function, and to improve our understanding of how these molecules and pathways interact with one another to ultimately influence the platelet activation state. Specifically, over the next three-year period, we propose to: (1) determine the contribution of the inhibitory SFK, Lyn, to the inhibitory function of PECAM-1 and (2) determine how Csk binding to Dok-2 contributes to negative regulation of platelet activation. Together, these studies comprise a coordinated, focused research program designed to improve our understanding of negative regulation of platelet activation by identifying, characterizing, and examining the interactions between inhibitory receptors and signaling molecules in platelets, such as PECAM-1, Lyn, and Dok-2. We expect that information derived from this investigation has the potential to lead to improved diagnosis and treatment of bleeding disorders, myocardial infarction and stroke.

Amplification of Antiviral Innate Immunity by Suppressor of Virus RNA (svRNA)
National Institute of Allergy and Infectious Diseases
Cleveland Clinic Lerner COL/MED-CWRU, JJN5-01, Cleveland, OH 44195

RNA cleavage is a fundamental and ancient host response for controlling viral infections in both plants and animals. 
In higher vertebrates, including humans, RNA cleavage as a means of controlling viruses is mediated by the type I interferons (IFN) through their effector, the uniquely regulated endoribonuclease, RNase L. RNase L is activated by unusual 2’,5’-linked oligoadenylates (2-5A) produced during viral infections. 2-5A activates RNase L, resulting in cleavage of host and viral RNAs within single stranded regions, predominantly after UU and UA. As a result of its specificity, RNase L produces small, highly structured RNA cleavage products. In 2007 we reported that RNA cleavage products obtained from digestion of self RNA by RNase L activated RIG-I-like receptors (RLR), resulting in amplification of type I IFN synthesis. These RNA cleavage products represent a novel class of small RNA molecules named “Suppressor of Virus RNA” (svRNA). Our GOALS in this project are to clone, identify and probe the functions of svRNAs generated from both host RNA and from viral RNA. Our HYPOTHESIS is that svRNAs are essential to host defense against a wide range of viruses that are pathogenic for humans. Our Specific Aims are: (1) To isolate and identify svRNA liberated by RNase L from host and viral RNA: we will cleave HCV RNA with purified RNase L and clone small RNAs that bind to RLRs, and cleave cellular (self) RNA in intact cells treated with 2-5A and clone small RNAs that bind to RLRs; (2) To characterize activation of RIG-I and MDA5 by svRNAs: we will perform ATPase activation studies, determine the kinetic parameters for svRNA interactions with RIG-I and MDA5 by surface plasmon resonance, measure conformational changes in RIG-I and MDA5, and establish the sequence and structural requirements of svRNA for activation of RIG-I and MDA5; and (3) To determine the role of svRNA in antiviral innate immunity: we will identify svRNAs in HCV infected cells, and determine the antiviral effects of svRNAs in mice. 
Our recent studies suggest an essential role of svRNAs in the antiviral state in higher vertebrates. In the proposed studies we seek to obtain a fundamental understanding of this important pathway as it relates to host defense against viruses. Therefore, there are cogent and health-related justifications for these studies.

Interconnectivity between genome packaging and other viral functions
National Institute of Allergy and Infectious Diseases
University of California Riverside, 900 University Ave, Riverside, CA 92521

Information gleaned from recent studies with single-stranded, positive-sense RNA viruses pathogenic to humans and animals (polio and alphaviruses) and insects (flock house virus; FHV) revealed that the mechanism of genome packaging in these viral systems is functionally coupled to replication. Recently our laboratory adopted a novel in vivo system referred to as Agrobacterium-mediated transient expression (agroinfiltration) to study encapsidation in plants. This system not only allowed efficient expression of viral genome components either autonomously or synchronously in plant cells, but also effectively uncoupled replication from packaging. Application of the agroinfiltration system to brome mosaic virus (BMV, a plant-infecting RNA virus) allowed us to hypothesize that packaging in BMV is also functionally coupled to replication. In addition, co-expression of BMV and FHV in plant cells using agroinfiltration revealed that for specific RNA packaging to occur, synchronization of replication and transcription of coat protein (CP) mRNAs from homologous replication machinery is obligatory. This two-year exploratory project is designed to evaluate, at the sub-cellular level, the intimacy of replication to packaging. An agroinfiltration system competent to synchronously infect the same plant cell with BMV and FHV will be used throughout these studies. 
Our working hypothesis is that translation of CP followed by virus assembly occurs very close to the sites of viral replication. Thus, in Aim 1, we propose to temporally and sequentially localize and identify the sub-cellular compartment(s) where translation of CP and virus assembly of BMV and FHV occur. In addition to the molecular and biochemical characterization, delineation of CP translation and virus assembly sites at the sub-cellular level will be investigated by electron microscopy using a novel Silver Enhancement-Controlled Sequential Immunogold technique (SECSI). BMV and FHV differentially replicate on the outer membranes of the endoplasmic reticulum (ER) and mitochondria, respectively. We found that packaging is non-specific when BMV CP or FHV CP was expressed either transiently or via heterologous replication. Thus, experiments outlined in Aim 2 are focused on addressing whether, for packaging specificity to occur, viral progeny RNA needs to be tethered to the same membrane near which its CP is being actively synthesized. This will be investigated by retargeting the FHV replicase complex to the ER, where the synthesis of FHV CP from genetically engineered BMV RNA will be synchronized. At the completion of the project we should know whether translation of CP and assembly of virions occur at or near the replication sites and whether tethering of viral progeny RNA to the same membrane near which its CP is being actively synthesized is obligatory to confer packaging specificity. Results obtained from this research proposal would improve our understanding concerning the mechanism of replication-coupled packaging in RNA viruses pathogenic to humans, animals and plants. 
Targeting pDCs for the Generation of Effective Anti-HCV CD8+ T-Cell Immunity
National Institute of Allergy and Infectious Diseases
Baylor Research Institute, Dallas, TX 75204

Hepatitis C virus (HCV) infection represents a significant global health-care problem, which is forecast to become worse in the coming years. In the developed world, infection with HCV is responsible for 50-75% of all cases of liver cancer and accounts for two-thirds of all liver transplants. To date there are no effective vaccines to HCV, and current systemic therapies have significant side effects. There is a need for novel therapeutic approaches to the treatment of chronic hepatitis C infection. The establishment of potent anti-viral CD8+ T-cell immunity has been shown to be the central mediator of viral clearance. Like many chronic infections, however, such responses to HCV have been difficult to establish. Plasmacytoid dendritic cells (pDCs) are a subset of DCs which are specialized for viral recognition and the initiation of anti-viral immunity. We have shown that pDCs have specialized antigen processing compartments (MICs) which permit them to rapidly cross-present viral antigens and stimulate protective CD8+ T-cell responses. Furthermore, we have demonstrated that targeting antigens to this compartment in an activated pDC is sufficient to initiate potent CD8+ T-cell responses. Our overall hypothesis is that Hepatitis C viral antigens targeted to the specialized class-I processing compartment (MIC) of pDCs will be efficiently cross-presented and drive anti-viral CD8+ T-cell expansion. We propose to address this hypothesis through three aims. Aim 1: To determine if receptor trafficking into the MIC is sufficient to generate strong CD8+ T-cell responses against Hepatitis C viral antigens. We will address this hypothesis by (1) Generating antibody antigen conjugates for in vitro targeting to the MIC. 
(2) Assessing the effect of these reagents on pDC activation, (3) Demonstrating MIC targeting, and (4) Demonstrating cross-presentation of targeted antigen. Aim 2: To determine if antigen processing and cross-presentation by the pDC results in an expanded antigen-specific T-cell repertoire. We will (1) Demonstrate that antibody antigen conjugates can induce potent HCV antigen-specific CD8+ T-cell responses in vitro, (2) Determine the optimal CpG derivative (CICs) to enhance viral antigen-specific CD8+ T-cell responses, (3) Make both a quantitative and qualitative assessment of total T-cell epitopes generated during cross-presentation of targeted viral antigens by pDCs, and (4) Assess the quality of viral epitope response in patients chronically infected with HCV. Aim 3: The generation of multi-epitopic adjuvant-based pDC targeting constructs. We will (1) Generate fusion proteins of anti-BDCA2 and the immunodominant TC1 viral epitopes identified in Aim 2, (2) Conjugate immunostimulatory CIC sequences to this second generation pDC targeting construct, and (3) Demonstrate that a multi-epitopic pDC targeting construct can induce potent HCV antigen-specific CD8+ T-cell responses in vitro in patients chronically infected with HCV. Overall significance: This study provides a novel approach for therapeutic HCV vaccine development.

Type II secretion system of P. aeruginosa in acute lung infection
National Institute of Allergy and Infectious Diseases
University of Florida, 219 Grinter Hall, Gainesville, FL 32611-5500

Acute lung infection due to Pseudomonas aeruginosa is a common cause of death in hospitalized patients. This organism is also the major cause of death in Cystic Fibrosis. A number of virulence factors have been proposed to lead to these poor outcomes. We wish to examine the role of the toxins secreted by this bacterium’s type II secretion system during lung infections. 
Research in this area has been inconclusive, with most recent efforts being focused on the role of the type III secretion system. However, using Toll-like receptor 2,4-/- mice, we demonstrate a significant role for the type II secretion system (T2SS) in death due to lung infections. We therefore wish to define how this occurs. Our aims are to identify the outer membrane protein pore through which toxic factors are secreted, identify the secreted toxic factors using an unbiased proteomics approach, and examine whether there is an important role for this system in other virulent P. aeruginosa strains during lung infections. During the course of these studies we will also examine whether secretion can be blocked by antibody raised against the secretion pore. We will utilize conventional molecular biology techniques of mutagenesis and complementation as well as proteomic analyses of the secreted proteins to ascertain whether there are unknown toxic factors that are being secreted or whether it is the classic virulence factors that cause death. These studies reexamine a critical question that has been left largely unanswered, and will provide valuable information on possible ways of preventing death caused by the toxins produced by this system.

Broad Neutralizing Monoclonal Antibodies From HIV Controllers
National Institute of Allergy and Infectious Diseases
University of Maryland Baltimore, 620 W Lexington St, 4th Fl, Baltimore, MD 21201-1508

The long-term goal of this project is to identify novel monoclonal antibodies (mAbs) that broadly recognize the HIV-1 envelope glycoprotein (Env) and block infection in vitro to guide vaccine development. This goal will be pursued in a cohort of HIV-1 infected individuals who control their infections in the absence of anti-retroviral therapy (Natural Virus Suppressors/NVS) and who have circulating broadly neutralizing antibodies (broad nAbs). 
A key element of our approach is the development of a new assay to census Env-specific memory B cell clones (BMem) that allows the rapid and direct cloning of full-length monoclonal antibodies (mAbs). These mAbs will be characterized for epitope specificity and neutralization breadth to create clonal profiles of the BMem that are generated during the control of HIV-1 infection. This information will be used to test the hypothesis that neutralization breadth is determined by a polyclonal response comprised of a mosaic of neutralizing specificities, as opposed to a pauciclonal response comprised of one or a very few neutralizing specificities. Testing this hypothesis is key to our long-term goal of identifying novel mAbs that broadly recognize Env and block infection in vitro to guide vaccine development against HIV-1. There are two specific aims. Aim 1: To develop clonal specificity profiles of Env-specific BMem from NVS who have ongoing broadly neutralizing antibody responses. Clonal specificity profiles of anti-Env responses will be determined by limiting dilution analysis, mAb isolation, and epitope mapping to determine the relative dominance of BMem clones specific for different Env epitopes. Aim 2: To compare neutralization breadth between plasma antibodies and mAbs representing a full clonal profile of BMem to determine the number of mAbs that must be pooled to reconstruct the neutralization breadth of the circulating antibody pool. These data will be used to determine the clonality of an ongoing broad nAb response. This aim will complete testing of the hypothesis that neutralization breadth is determined by a polyclonal response comprised of a mosaic of neutralizing specificities, as opposed to a pauciclonal response comprised of one or a very few neutralizing specificities. Currently there is no vaccine against AIDS. The work proposed in this application will investigate how some people control HIV-1 infection for many years without anti-retroviral drug therapy. 
This information should be useful in making a vaccine against AIDS.

Nonpayment for Preventable Complications: Impact on Hospital Practices and Health
National Institute of Allergy and Infectious Diseases
Harvard Pilgrim Health Care, Inc., Boston, MA 02215

Financial incentives, such as pay-for-performance (P4P) programs, are increasingly being used to improve physician behavior. However, the impact of these programs on improving quality of care for patients has been mixed, with some studies showing modest gains and others reporting little to no improvement on quality of care measures. Furthermore, unintended consequences of P4P programs have been demonstrated, including larger financial rewards for those hospitals with higher performance at baseline and significant financial losses for hospitals that serve large minority populations. As of October 1, 2008, Medicare will implement the use of a new financial mechanism - nonpayment for preventable complications (NPPC) - which is a “stick” rather than a “carrot”. Medicare will no longer pay hospitals for treating certain healthcare-associated infections (HAIs) that arise in patients if they are not present on admission. Our proposed research is unique and timely. There are no data available on the impact of an NPPC policy intervention that is being implemented by one of the largest payers in the U.S. Despite lack of evidence for its efficacy, it is hoped that financial disincentives will motivate hospitals and providers to focus their efforts on reducing HAIs. While the goal is certainly worthy, the mechanism being used to motivate change should be rigorously evaluated to ensure that it achieves its intended consequences without the occurrence of unintended consequences. Our research will provide a rich understanding of the potential impact, both positive and negative, of NPPC on patient care and outcomes. The long-term goal of this proposal is to assess the overall impact of NPPC on patient care and outcomes. 
In this two-phase study, we will first conduct qualitative interviews to identify key elements that may affect hospital practices and rates of HAIs. In the second phase, we will develop, pilot, and validate a survey instrument based on our qualitative research findings in order to conduct a future survey of infection preventionists to assess the perceived impact of NPPC on hospitals in the U.S. Thus, we propose the following specific aims: 1. To identify key factors that may affect infection prevention practices in the context of NPPC. 2. To develop, pilot, and validate a survey instrument to examine the perceived impact of NPPC on behaviors and practices in hospitals.

HIV-envelope-specific CD4+ T-cell activation and functional potentials
National Institute of Allergy and Infectious Diseases
St. Jude Children’s Research Hospital, Memphis, TN 38105

Despite decades of research, the development of a successful HIV-1 vaccine has not yet been achieved. A better understanding of the functions of activated lymphocytes is therefore desired. The long-term objective of our research is to comprehend the full potentials of HIV-1-envelope-specific immune cells. CD4+ T-cells contribute to HIV-1 control by supporting antibody production by B-cells and the activation/maintenance of CD8+ T-cells. However, based on our recent data, it appears that envelope-specific CD4+ T-cells may additionally contribute directly to the control of virus-infected cells, independent of B-cell or CD8+ T-cell activity. The studies proposed here will determine how these CD4+ T-cells confer their ‘protector’ effect. Specific Aim: To determine the phenotype, cytokine secretion capacities, and killer potentials of the HIV-1 envelope-specific CD4+ T-cells that protect against envelope-recombinant virus in the absence of B-cell or CD8+ T-cell functions. Experiments are designed to fill fundamental gaps in our understanding of how virus is controlled by the immune system. 
Results from these experiments may be invaluable to the construction of new, successful HIV-1 vaccines designed to capture the full potentials of the immune response.

National Institute of Allergy and Infectious Diseases
Tulane University of Louisiana, 6823 St Charles Ave, New Orleans, LA 70118

CD4+ helper T cells specific for human immunodeficiency virus type 1 (HIV-1) are associated with control of viremia. Nevertheless, vaccines have not been effective thus far, at least partly because sequence variability and other structural features of the HIV envelope glycoprotein deflect the immune response. Previous studies indicate that CD4+ T-cell epitope dominance is controlled by antigen three-dimensional structure. Three disulfide bonds in the outer domain of gp120 were individually deleted in order to destabilize the three-dimensional structure and enhance the presentation of weakly immunogenic epitopes. Unexpectedly, upon immunization of mice, the CD4+ T-cell response was broadly reduced and antibody titers were sharply increased for two of the disulfide variants. For one variant (deletion of the 296-331 disulfide bracketing V3), viral neutralizing activity was increased, but reactivity was narrow. For another variant (deletion of the 378-445 disulfide bracketing V4 and part of the bridging sheet), the antibody exhibited significant CD4-blocking activity. The changes in the immune response are most likely due to shifts in the pathways of antigen processing that result in the priming of fewer but more helpful T cells. In the proposed research, the disulfide variants will be reconstructed in the gp120 of distinct Clade B and Clade C HIV strains and in the gp120 of an SIV strain in order to test the generality of the result. Disulfide variants will be characterized by binding to monoclonal antibodies, circular dichroism spectroscopy with denaturation, limited proteolysis, deglycosylation, and isothermal titration calorimetry of CD4 binding. 
Mice will be immunized with the variants. CD4+ T-cell proliferative and cytokine responses will be mapped for individual mice and, in a novel analysis, will be correlated with antibody reactivity to proteins and peptides. The resulting epitope-specific T-B correlations will be used to identify cellular interactions that support antibodies directed against protective and unprotective epitopes. Rabbits will be immunized, and viral neutralization will be analyzed, with the expectation that antisera raised by the disulfide-deletion variants will have increased viral neutralization. The proposed research is unique in that it exploits T-B relationships in order to engineer an improved antibody response.

Dissecting the origin and the function of the cutaneous dendritic cell network
National Institute of Allergy and Infectious Diseases
Mount Sinai School of Medicine of NYU, New York, NY 10029-6574

Highly specialized professional antigen presenting cells are distributed throughout the skin and include epidermal Langerhans cells (LCs) and dermal dendritic cells (DCs). Our laboratory established some unique properties of cutaneous DCs. We discovered that in contrast to lymphoid organ DCs, LCs fail to develop in mice that lack the receptor for macrophage colony-stimulating factor (MCSFR) (Ginhoux et al. Nat Immunol 2006). We established that in contrast to most DC populations, LCs are maintained by radioresistant hematopoietic precursors that have taken residence in the skin in the steady state (Merad et al. Nature Immunology 2002; Merad et al. Nature Medicine 2004). We also found that a subset of dermal DCs derives from radioresistant precursors, while the majority derives from circulating radiosensitive precursors (Bogunovic et al. JEM 2006). More recently, we identified a novel population of dermal DCs that express the C-type lectin receptor langerin, thought to be an LC hallmark in the skin. 
In contrast to LCs, dermal langerin+ DCs are recruited from the blood and sojourn briefly in the skin before migrating to the lymph node loaded with skin antigens (Ginhoux et al., JEM 2007). These results underline the complexity of the cutaneous DC network, but “the raison d’etre” of this complex system and the mechanisms that regulate its development remain elusive. In this grant application, we propose to dissect the origin of DC populations in the skin, identify the key molecules that control their development, and examine the contribution of each DC compartment to skin immunity. Preliminary data suggest that a wave of LC precursors seeds the epidermis during embryonic life. Thus, in aim 1, we propose to examine the potential of these embryonic precursors to maintain LC homeostasis throughout life. Mice that are deficient for MCSFR or TGF-β1 lack epidermal LCs, but the exact role of MCSF and TGF-β1 in LC ontogeny is unknown. In this aim, we propose to examine how these molecules control LC development. Preliminary data also suggest that distinct precursors and differentiation pathways control the development of dermal langerin+ and dermal langerin- DCs. Thus, in aim 2, we propose to identify the dedicated precursor and the mechanisms that control the development of dermal DC subsets. Finally, we believe that such a complex DC network has developed to ensure skin integrity, and in aim 3, we propose to examine the contribution of each DC compartment to skin immunity. Protein kinase A-dependent regulation of T cell accumulation in Lupus National Institute of Allergy and Infectious Diseases Wake Forest University Health Sciences Winston-Salem, NC 27157 Establishing how deficient PKA-I activity results in abnormal T cell effector functions is a key step in understanding the etiopathogenesis of T cell dysfunction in SLE. 
In T cells from normal subjects, IL-2-induced IL-13+ cell accumulation in vitro is inhibited by the strong PKA activator PGE2, whereas the weak PKA activator isoproterenol (ISO), a beta-agonist, causes increased accumulation. In SLE subjects with a severe defect in PKA activity, both PGE2 and ISO cause a profound increase in IL-2-induced IL-13+ cell accumulation. This R21 application proposes to clarify the effect of defective PKA on regulatory features of T cell accumulation in SLE subjects. The hypothesis is that the subpopulation of SLE subjects with defects in PKA activity has exaggerated accumulation of type 2 cells when stimulated by beta-agonist and PGE2. A further hypothesis is that experimental knockdown/expression of the PKA RIα subunit is sufficient to cause/reverse this effect. These hypotheses will be tested using a highly interpretable in vitro model and a well-characterized cohort of SLE subjects. Results from these studies will provide novel insight into the regulation of T cell development, of interest to the basic science of T cell biology, and advance our understanding of immune system regulation in SLE. HIV-1 Replication and Pathogenesis in Vivo National Institute of Allergy and Infectious Diseases University of North Carolina Chapel Hill Office of Sponsored Research Chapel Hill, NC 27599 The goals of this project are to define how HIV-1 interacts with pDCs and to elucidate the role of pDCs in HIV-1 replication and pathogenesis. Because pDCs are the major sensor of viral infections, altered pDC levels or activity may play a critical role during HIV-1 disease progression. However, the role of pDCs in HIV infection and pathogenesis is poorly understood, mainly due to the lack of robust in vivo models. The DKO-hu HSC model is ideal for this purpose. With a stable, functional human immune system, functional pDCs develop in normal proportion in all lymphoid organs of DKO-hu mice. 
HIV-1 establishes persistent infection, with immune hyperactivation and depletion of human CD4 T cells. We have also shown that, during HIV-1 infection, pDCs are productively infected, activated, depleted, and functionally impaired in DKO-hu HSC mice. HIV-1 with the pathogenic R3A Env also efficiently activates pDCs in vitro, correlated with its high binding affinity to the CD4 receptor and coreceptors. Based on our preliminary findings and reports from SIV-infected monkeys and HIV-infected patients, I postulate that HIV-1 intimately interacts with pDCs, and that chronic engagement of pDCs during persistent HIV infection will deplete or impair pDC activity. The reduced or altered pDC activity contributes to chronic HIV infection, immune hyperactivation, and AIDS progression. First, we will investigate the proliferation and survival of pDCs during early and late-chronic HIV-1 infection in DKO-hu mice (SA1a). Second, we will define the role of each relevant receptor (CD4, CCR5, CXCR4, BDCA2, TLR7, and TLR9) in pDC activation with genetic approaches. In addition, we will also define the signaling defects in pDCs induced by HIV infection by genetically analyzing the candidate signaling pathways (SA2a). Third, we will treat DKO-hu mice with the pDC-specific ILT7 mAb conjugated with the saporin toxin, which specifically depletes pDCs, to test the role of pDCs during infection (SA3c). We will thus focus on the most fundamental questions of pDC biology in HIV pathogenesis. Elucidation of the mechanism by which HIV-1 interacts with pDCs and of their role in HIV-1 infection and AIDS pathogenesis will facilitate not only our understanding of pDC biology in HIV pathogenesis but also the development of novel therapeutic strategies. 
Long Polar Fimbriae of Attaching and Effacing Escherichia coli National Institute of Allergy and Infectious Diseases University of Texas Medical Branch Galveston 301 University Blvd Galveston, TX 77555 The expression of Attaching and Effacing Escherichia coli (AEEC) virulence factors is a tightly regulated process, and, in some cases, the identification of these factors has been difficult because they are either repressed in vitro or the conditions of expression are unknown. While it is evident that expression of certain virulence factors is strictly associated with human disease, the additional factors present in AEEC strains that are linked to their pathogenic process remain unclear. A full understanding of how the genes encoding these additional virulence factors are controlled is important because, without this knowledge, we are unlikely to understand the overall pathogenic properties of AEEC strains. Thus, our objective is to determine how the Long Polar (LP) fimbriae in AEEC strains contribute to pathogenesis and to use these fimbrial-encoding genes as markers to detect virulent strains. The central hypothesis is that, in addition to the already characterized colonization factors (e.g., intimin-mediated adhesion), AEEC strains possess highly regulated LP fimbriae that play a role in the colonization process, and that, although the genes encoding these fimbriae are widely distributed in pathogenic E. coli strains, some LP fimbriae types are found exclusively in specific AEEC strains. 
We will test this hypothesis through three specific aims, which are to: (1) Define whether Ler and H-NS act as a selective silencing/anti-silencing defense system that controls LP fimbriae expression in AEEC strains; (2) Identify the regulatory protein(s) controlling LP fimbriae expression in atypical EPEC and determine in a rabbit model the function of LP fimbriae during colonization; and (3) Characterize the distribution of the LP fimbrial gene clusters among AEEC strains and determine whether certain LP fimbrial subunit types are reliable markers of different pathogenic AEEC strains. To accomplish our aims, we will fully characterize the functions of Ler, H-NS, and atypical enteropathogenic E. coli-encoded regulators under in vitro and in vivo (infant rabbit colonization model) conditions and perform a detailed study of the prevalence of the lpf genes in specific subsets of pathogenic AEEC strains. Our research work is innovative because it capitalizes on our findings regarding novel colonization factors in AEEC strains and their potential application in therapeutics and diagnostics. The results from studies of the regulatory networks controlling LP fimbriae expression have significance because we will be able to identify fundamental differences to explain the tissue tropism of different AEEC strains and to determine whether silencing of LP fimbriae is an example of a defense system that AEEC strains have against horizontally acquired genes. In addition, the use of the rabbit model will give us new insight into the pathogenesis and colonization properties of AEEC strains. An understanding of the mechanisms underlying AEEC colonization of the gastrointestinal tract will not only further our knowledge of the pathogenesis of these organisms but also provide opportunities for reducing infection rates and improving treatment options against these biological agents, classified as category B pathogens due to their potential use as a food safety threat. 
Plasmacytoid Dendritic Cells in HIV Pathogenesis National Institute of Allergy and Infectious Diseases Univ. of Med/Dent of NJ-NJ Medical School 185 S Orange Avenue Newark, NJ 07107 Deficient production of interferon-α (IFN-α) by natural IFN-producing cells (NIPC) is observed in patients with advanced HIV-1 infection. This deficient IFN-α production was found to be associated with, and predictive of, susceptibility to opportunistic infections. Although the NIPC was long suspected to be a dendritic cell, progress was somewhat hampered by the lack of a definitive phenotype for it. NIPC have now been demonstrated to be identical to the plasmacytoid dendritic cell (PDC). PDCs are believed to be important not only as professional IFN-producing cells but also as vital links between innate and adaptive immunity. Deficient IFN-α production in HIV infection results from both decreases in the numbers of circulating PDCs and dysfunction in those cells present. This study is organized in five specific aims; the first three involve studies of the basic biology of the PDC, and the last two apply what has been learned about the function of PDCs to understand how they become deficient in HIV-infected patients. Peripheral blood PDCs express very high constitutive levels of the transcription factor IRF-7. These observations will be extended to evaluate the expression and function of IRF-7 in PDCs in different anatomical sites and to determine the roles of IRF-7 vs. IRF-3 and IRF-5 in these cells. Cross-linking of receptors on the surface of PDCs leads to down-regulation of their ability to produce IFN-α, a phenomenon that may also have physiological relevance in HIV-infected patients. Studies are proposed to understand the mechanisms of this down-regulation and to determine whether other functions carried out by PDCs, such as production of TNF-α and chemokines, are similarly affected by receptor crosslinking. 
Production of IFN-α by PDCs does not require infection of the cells with virus; rather, uptake of material by endocytosis appears to trigger the generation of IFN-α. Using fluorescently labeled infected cells or virus and confocal microscopy, the fate of the endocytosed material in vivo will be determined. To better understand the mechanisms of PDC deficiency in HIV-infected patients, studies will be undertaken to determine whether PDCs are infected with HIV in vivo and whether they traffic from the blood to sites in the tissues. Finally, studies are proposed to evaluate other functions of the PDC in HIV-1-infected patients, including cytokine and chemokine production and activation of T cells, as well as evaluation of IRF-7 function in these cells. Regulation and Action of APOBEC3G National Institute of Allergy and Infectious Diseases J. David Gladstone Institutes San Francisco, CA 94158 Apolipoprotein B mRNA-editing enzyme, catalytic polypeptide-like 3G (APOBEC3G, A3G) is a host-derived cytidine deaminase that displays potent anti-retroviral activity. When incorporated into budding HIV virions, the A3G enzyme massively mutates nascent HIV DNA produced during reverse transcription in the next target cell, thereby halting HIV growth. HIV counters these effects of A3G through its Vif gene product, which promotes accelerated proteasome-mediated degradation and partially impaired de novo synthesis of A3G. The intracellular depletion of A3G makes the antiviral enzyme unavailable for incorporation into progeny virions. Our recent studies have unveiled a second antiviral action of A3G operating in resting CD4 T-cells. In these T-lymphocytes, cellular A3G functions as a highly active post-entry restriction factor blocking the growth of both wild-type and deltaVif forms of HIV. Whether this “Vif-resistant” anti-HIV defense mediated by A3G involves cytidine deamination or a different mechanism is currently unknown. 
Further, the mechanism by which this post-entry restricting function of A3G is forfeited when T-cells are activated remains incompletely understood. Similarly, little is known about how host cells safeguard their own DNA from the mutagenic effects of A3G. Finally, it remains unknown whether A3G exerts other key functions beyond these antiviral effects. In Specific Aim 1, experiments will be performed to decipher how A3G and the closely related A3F and A3B antiviral enzymes are regulated in cells. In Specific Aim 2, the mechanism of A3G action as a post-entry restriction factor in resting CD4 T-cells, the range of viruses affected by this restriction, and potential similar functions of A3F will be delineated. Finally, in Specific Aim 3, studies will be conducted to assess whether A3G mediates important non-antiviral functions in mammalian cells. These experiments will involve the preparation and analysis of mice lacking the functional analogue of the A3G gene. Together, this program of proposed experimentation promises to enrich our understanding of the biology of A3G as well as the related A3F and A3B enzymes. With such understanding, new therapeutic strategies for inhibiting HIV growth could emerge. Structure Studies on Proteins That Modulate IL-10 Action National Institute of Allergy and Infectious Diseases University of Alabama at Birmingham 1530 3rd Avenue South Birmingham, AL 35294 IL-10 is a multifunctional cytokine that regulates complex immune responses. Its normal function is to protect the host from uncontrolled inflammatory responses. However, IL-10 has also been implicated as an autocrine growth factor in several B-cell malignancies and stimulates B-cell-mediated autoimmune disease. The normal and pathological functions of IL-10 are initiated by IL-10 receptor engagement and assembly into a signaling-competent IL-10/IL-10R1/IL-10R2 complex. 
In addition to cellular IL-10 (cIL-10), Epstein-Barr virus (EBV) and cytomegalovirus (CMV) harbor viral IL-10 mimics (ebvIL-10 and cmvIL-10) in their genomes that activate the IL-10 signaling complex, resulting in overlapping and distinct biological properties. In the past funding period, we determined crystal structures of cIL-10, cmvIL-10, and ebvIL-10 bound to the high-affinity IL-10R1 chain. In this proposal we will use surface plasmon resonance, site-directed mutagenesis, NMR spectroscopy, X-ray crystallography, and FRET methods to study cellular and viral IL-10 receptor interactions. These studies will be complemented by the analysis of the cellular IL-10 homologs IL-22 and IL-20. The long-term goal of this proposal is to derive a quantitative structural/computational model of IL-10 family signaling that might explain how cellular and viral IL-10s shape immune responses and allow the rational design of cytokine therapeutics. In addition to the contact named above, Will Simerl, Assistant Director; N. Rotimi Adebonojo; Peter Mangano; Lisa Motley; and Krister Friday made key contributions to this report.
The American Recovery and Reinvestment Act of 2009 (Recovery Act) included $10.4 billion in funding for the National Institutes of Health (NIH), an agency of the Department of Health and Human Services (HHS). Of the NIH Recovery Act funding, $8.2 billion was to be used to support additional scientific research and $400 million for comparative effectiveness research, including extramural research at universities and research institutions. NIH is composed of the Office of the Director (OD) and 27 Institutes and Centers (ICs), 24 of which make grant funding decisions. GAO was asked to report on how NIH awarded Recovery Act funds for scientific research and the information that NIH made available about the award of these funds. This report describes the (1) process and criteria NIH used to award extramural grants using Recovery Act funding, and (2) characteristics of Recovery Act extramural grants and the information made publicly available about these grants. GAO interviewed NIH officials in the OD and the three ICs that received the largest proportion of Recovery Act funds, and reviewed related documents, such as NIH guidance on awarding grants using Recovery Act funds. GAO also obtained and analyzed NIH data on all Recovery Act grants awarded as of April 2010. NIH used its standard review processes--peer review, which comprises two sequential levels of review by panels of experts in various fields of research, or administrative review--to award extramural grants using Recovery Act funds. These standard review processes were used for three categories of extramural grant applications: (1) new grant applications from Recovery Act funding announcements; (2) existing grant applications that had not previously received NIH funding; and (3) administrative supplements and competitive revisions to current active grants. For new grant applications submitted in response to Recovery Act funding announcements, NIH followed its standard peer review process. 
For existing grant applications, which had already undergone the peer review process, each of the three ICs GAO reviewed--National Cancer Institute (NCI), National Institute of Allergy and Infectious Diseases (NIAID), and National Heart, Lung, and Blood Institute (NHLBI)--selected additional applications for Recovery Act funding based in part on the amount of this funding available to each IC. To award administrative supplements, NIH conducted its standard administrative review at the IC level, and for competitive revisions NIH followed its standard peer review process. In reviewing applications, NIH used its standard criteria--scientific merit, availability of funds, and relevance to scientific priorities--plus three criteria for Recovery Act grants. These criteria were the geographic distribution of Recovery Act funds, the potential for job creation, and the potential for making scientific progress within a 2-year period. NIH's Recovery Act grant awards varied across three grant categories and other characteristics, and NIH made a variety of information about the grants publicly available. NIH data show that as of April 2010, about $7 billion of the $8.6 billion in Recovery Act scientific research and comparative effectiveness research funds had been awarded for 14,152 extramural grants. NIH awarded nearly $2.7 billion to make extramural grants for existing grant applications that had not previously received funding, slightly over $2.4 billion for new grant applications, and about $1.9 billion for administrative supplements and competitive revisions. NIH officials reported that the remaining Recovery Act scientific research funds will be awarded by the end of fiscal year 2010. At the three ICs GAO reviewed, the distribution of Recovery Act funds to the three categories of Recovery Act extramural grants varied significantly. 
For example, GAO found that as of April 2010, NIAID used 69 percent of its Recovery Act funds for existing grant applications that had not previously received NIH funding, while NCI used 31 percent for these existing grant applications. The average NIH Recovery Act extramural grant award was about half a million dollars, and about 25 percent of grantees were awarded $623,000 or more. Through NIH's Web sites, NIH and the ICs communicated a variety of information to the public about Recovery Act extramural grant awards, such as information about grantees and awarding ICs. HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate.
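As a quick arithmetic cross-check, using only the rounded totals quoted in this summary, the three category amounts and the average award reconcile as reported. This is a sketch, not data from the report itself:

```python
# Consistency check on the Recovery Act grant figures quoted above.
# All inputs are the rounded values from the summary, so results are
# approximate by construction.
awarded_by_category = {
    "existing applications": 2.7e9,   # "nearly $2.7 billion"
    "new applications": 2.4e9,        # "slightly over $2.4 billion"
    "supplements/revisions": 1.9e9,   # "about $1.9 billion"
}
num_grants = 14152                    # extramural grants as of April 2010

total_awarded = sum(awarded_by_category.values())
average_award = total_awarded / num_grants

print(total_awarded / 1e9)   # about 7 (billion dollars, as reported)
print(round(average_award))  # about 495,000, i.e. "about half a million dollars"
```

The rounded category totals sum to the roughly $7 billion awarded, and dividing by 14,152 grants yields an average just under $500,000, consistent with the report's "about half a million dollars."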
The purpose of DOE’s contract with ORAU is to provide management and direction of programs through ORISE that maintain and advance science and education capabilities supporting DOE’s strategic goals in the areas of defense, energy, science, and the environment. To support these goals, ORAU carries out a range of activities for DOE, including administering workforce development programs to help ensure the future availability of scientists and engineers. These workforce development programs are intended to encourage individuals to enter STEM careers, complement students’ academic programs, and provide faculty with state-of-the-art information to use in the classroom, as well as to develop a pool of talent from which federal agencies can draw for future employment. ORAU groups its workforce development activities into the following three categories: Research participation program: This program provides research experiences to students, postgraduates, faculty, and other participants. These activities make up the ORISE program. Fellowships and scholarships: Among other things, these programs provide financial assistance for students to obtain academic degrees in areas related to the sponsoring agency’s mission. Events, academies, and competitions: These programs, such as the National Science Bowl, a nationwide middle- and high-school science and mathematics competition, are designed to encourage participation in scientific and technological fields. In fiscal year 2014, most of the federal agency expenditures on workforce development activities administered by ORAU were for the ORISE program. In that year, federal agencies expended $193.8 million on the ORISE program, followed by $3.2 million for fellowships and scholarships, and $1.7 million for events, academies, and competitions. These expenditures supported 5,854 research participation appointments, 72 scholarships and fellowships, and 1,191 special event participants. 
ORISE research participants engage in a variety of subject areas, such as climate change, weather impacts on military projects, infectious and chronic diseases, computer simulations of potential terrorist attacks or natural disasters, and conservation measures for fish and wildlife. (See app. I for additional examples, by sponsoring agency, of project subject areas in fiscal year 2014.) Research participants may also be involved in developing briefing materials on their research for agency leadership, publishing the results of their research, participating in conferences, and obtaining other research-related experiences. Research participants are not considered federal employees or federal contractors and do not receive a salary. They instead receive stipends and payment of certain other expenses to defray their costs of living during their appointments. Participant appointments can be full- or part-time and can last for weeks, such as in the case of a 10- to 12-week summer program, or years, such as in the case of a 1-year postgraduate program renewable for up to 4 additional years. From fiscal year 2010 through fiscal year 2014, DOE and other sponsoring agencies expended a total of $776.4 million for the ORISE program, with DOD, DOE, and HHS accounting for the majority of the expenditures. Over that period, annual program expenditures increased by 73 percent, and the number of annual appointments rose by 42 percent. Stipends accounted for the largest portion of agencies’ expenditures. Sponsoring agency expenditures per appointment varied, affected by factors such as the length of research participants’ appointments and the program support services sponsoring agencies had ORAU perform. During fiscal years 2010 through 2014, sponsoring agencies, which included 11 departments and other federal agencies, expended a total of $776.4 million for the ORISE program. 
DOD, HHS, and DOE collectively had the highest expenditures for the program (over 87 percent) over that period and had the highest number of appointments in fiscal year 2014 (over 88 percent). Within DOD, the Army was the primary component that sponsored ORISE research participants, accounting for 77 percent of DOD expenditures over the 5-year period and 70 percent of appointments in fiscal year 2014. Within HHS over the same time periods, the Food and Drug Administration (FDA) and Centers for Disease Control and Prevention (CDC) accounted for about 59 percent and 32 percent of expenditures, respectively, and about 53 percent and 36 percent of appointments. See figure 1 below and appendix II for further information on agencies’ expenditures and numbers of appointments. Sponsoring agencies’ total annual expenditures increased from $112.3 million in fiscal year 2010 to $193.8 million in fiscal year 2014, a 73 percent increase (61 percent when adjusted for inflation), and the number of appointments grew from 4,128 to 5,854, a 42 percent increase (see fig. 2 below and app. II for further information). An ORAU official who maintains data on appointments attributed the growth in the number of appointments to an increase in the program’s popularity, which led to the addition of new sponsoring agencies and increases in the number of appointments per sponsoring agency. Agency component officials we interviewed cited a variety of reasons for wanting to sponsor ORISE research participants, including: access to the ORISE program’s recruiters and network of connections; administrative support from the ORISE program that the sponsoring agencies could not easily supply themselves; and the speed, flexibility, and relatively low overhead cost of the ORISE program. For example, an official who managed the research participation program at the U.S. 
Army Medical Research Institute of Infectious Diseases told us that hiring and managing staff to administer their own program would cost more than the overhead that they pay for the ORISE program. The average total expenditure per appointment in the ORISE program also increased from fiscal year 2010 through fiscal year 2014, from about $27,200 per appointment in fiscal year 2010 to about $33,100 per appointment in fiscal year 2014. Expenditures per appointment may have risen for a variety of reasons, such as changes in the average education level of research participants and the average length of their appointments. For example, the proportions of appointments at different education levels in fiscal year 2014 shifted compared to the proportions in fiscal year 2010, with recent graduate and postdoctoral appointments increasing 65 percent and 68 percent, respectively, while undergraduate appointments increased 12 percent. An ORAU official said that postgraduate appointments generally command higher stipends than undergraduate appointments. For example, according to information provided by FDA’s Center for Drug Evaluation and Research, monthly stipends at their center could be as high as $2,897 for currently enrolled undergraduate students and as high as $7,569 for postgraduates with PhD degrees. The official said that postgraduate appointments at their center also last longer than undergraduate appointments, resulting in higher expenditures per appointment. From fiscal year 2010 through fiscal year 2014, stipends—funds paid to research participants to defray their costs of living during their appointments—comprised the majority of agencies’ expenditures for the ORISE program. 
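The growth percentages and per-appointment averages reported above follow directly from the quoted totals; a minimal sketch, assuming only the rounded figures in this section:

```python
# Recompute the ORISE growth figures reported above from the underlying
# totals; all inputs are the rounded values quoted in the text.
expenditures = {2010: 112.3e6, 2014: 193.8e6}  # total expenditures (USD)
appointments = {2010: 4128, 2014: 5854}        # research participation appointments

expenditure_growth = round((expenditures[2014] / expenditures[2010] - 1) * 100)
appointment_growth = round((appointments[2014] / appointments[2010] - 1) * 100)
avg_per_appointment = {y: round(expenditures[y] / appointments[y], -2)
                       for y in (2010, 2014)}

print(expenditure_growth)    # 73 (percent, as reported; nominal dollars)
print(appointment_growth)    # 42 (percent, as reported)
print(avg_per_appointment)   # {2010: 27200.0, 2014: 33100.0}
```

Dividing each year's total expenditures by its appointment count reproduces the "about $27,200" and "about $33,100" averages to the nearest hundred dollars.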
Sponsoring agencies’ other expenditures for the program included the following categories of expenses: Travel and other research participant expenses: Funds paid to research participants to cover particular expenses not covered by their stipends, such as expenses for travel to conferences or other appointment-related destinations. Program support and overhead: Funds paid to DOE to cover ORAU’s expenses for administering the appointment of research participants at agencies. These expenses included (1) program support expenses—direct expenses for services ORAU provides to agencies, such as managing recruitment activities, and (2) general and administrative expenses—indirect expenses such as building expenses, paid by agencies as a fixed percentage (negotiated by DOE and ORAU) of total expenditures on the ORISE program. Federal administrative and security charges: Fees paid to DOE by other sponsoring agencies, including (1) a federal administrative charge of 3 percent of an agency’s total expenditures on the ORISE program to offset DOE’s administrative expenses for work conducted on behalf of other agencies and (2) a charge applied to Strategic Partnership Projects to supplement DOE support for safeguards and security expenses. Figure 3 shows the percentage of expenditures for each category of expense. Sponsoring agencies’ expenditures per appointment in the ORISE program varied among agencies. In each year from fiscal year 2010 through fiscal year 2014, the lowest average expenditure per appointment for a sponsoring agency was $14,396 or less, and the highest average expenditure per appointment was $42,996 or more. For example, in fiscal year 2014, the Department of the Interior expended an average of $12,246 per appointment, while the Environmental Protection Agency expended an average of $44,099 per appointment. The proportions expended for different categories of expenses also varied. 
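To illustrate how the expense categories above combine into a sponsoring agency's total, here is a hypothetical sketch for one non-DOE agency. All dollar amounts and the G&A rate are invented placeholders (the actual G&A percentage is negotiated between DOE and ORAU); only the 3 percent federal administrative charge comes from the text:

```python
# Hypothetical ORISE cost breakdown for one non-DOE sponsoring agency.
# Every amount and the G&A rate below are illustrative assumptions; only
# the 3 percent federal administrative charge is stated in the report.
stipends = 1_000_000.00            # stipends paid to research participants
participant_expenses = 40_000.00   # travel and other participant expenses
program_support = 120_000.00       # direct ORAU services selected by the agency
GA_RATE = 0.10                     # assumed general & administrative rate

direct_costs = stipends + participant_expenses + program_support
ga_expense = GA_RATE * direct_costs
program_total = direct_costs + ga_expense

# Non-DOE agencies also pay DOE 3 percent of total program expenditures.
federal_admin_charge = 0.03 * program_total
grand_total = program_total + federal_admin_charge

print(round(program_total, 2))         # 1276000.0
print(round(federal_admin_charge, 2))  # 38280.0
print(round(grand_total, 2))           # 1314280.0
```

The point of the sketch is the structure, not the numbers: stipends dominate, indirect G&A is layered on as a fixed percentage, and the 3 percent federal charge applies on top of the agency's total program expenditures.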
For example, data provided by ORAU showed that the proportion agencies expended on stipends ranged from 69 percent to 88 percent in fiscal year 2014. We identified the following factors that contributed to per-appointment expenditures varying among agencies: Research participants’ appointment terms. Differences in the terms that sponsoring agency components set for research participants’ appointments contributed to variation in expenditures per appointment. Some appointments lasted for days, weeks, or months, while others lasted for a full year or more. For example, FDA’s National Center for Toxicological Research’s Summer Student Research Program placed research participants in a 10-week summer program. In contrast, the National Library of Medicine’s Associate Fellowship Program placed research participants in 1- or 2-year residency programs. In addition, some appointments were full-time, while others were part-time. Methods of setting stipends. Officials at sponsoring agency components reported that they used differing methods to set research participants’ stipends. ORAU officials said that they sometimes provided advice to the agencies, but that the agencies ultimately set their own stipends. Almost all of the officials we interviewed at sponsoring agency components said that they considered applicants’ education levels when setting stipends, but they varied in the other factors they considered. For example, some used the Office of Personnel Management’s General Schedule pay scale, but others did not. The officials also differed in the extent to which they considered other factors, including prior work experience, salaries in the private and government sectors, stipends received by research participants in other programs, and geographic location. 
Some of the officials said that they set fixed stipends for all research participants, but others said that they determined stipends individually or made exceptions to fixed stipends when attempting to fill particular appointments. Other expenses covered. Sponsoring agency components chose to reimburse their research participants for different types and amounts of expenses not covered by their stipends. For example, in fiscal year 2014, the Air Force expended an average of $492 per appointment to pay for participants’ travel expenses, while the Environmental Protection Agency expended an average of $1,387 per appointment for that purpose. Other expenses that can vary among sponsoring agency components include payment of research participants’ tuition and fees at their academic institutions; reimbursement for the costs of moving to a research site; allowances for housing at research sites; payment of visa processing fees for foreign research participants; and purchase of safety equipment, books, and research supplies. Services performed by ORAU. Sponsoring agency components selected from and paid for many different services performed by ORAU. These services included managing recruitment activities, processing applications, making and monitoring appointments, designing and implementing program enhancements, paying stipends, administering other research participant expenses and insurance, managing domestic and foreign travel, analyzing and providing financial reports, developing and administering program goals and objectives, handling immigration status issues, and other tasks. Agencies’ selections of these services determined the amount that they paid in program support expenses for each of their appointments. According to DOE officials, the ORISE program consists of a set of distinct activities, or separate programs, that ORAU carries out on behalf of DOE and other sponsoring agency components. 
As a result, DOE considers responsibility for assessing the effectiveness of ORISE program activities to be dispersed among the sponsoring agencies, each of which may have separate objectives for sponsoring research participants. Sponsoring agency components we reviewed use questionnaires and other methods to assess how well the program is working. Responsibility for ensuring research participants do not perform inherently governmental functions is also dispersed among sponsoring agencies. However, documents provided by DOE, DOD, and HHS components to research participants, coordinators, and mentors contain varying levels of detail on the prohibition on nonfederal employees performing inherently governmental functions. Without detailed guidance, sponsoring agencies have limited assurance that the prohibition is being followed. In May 2013, the National Science and Technology Council, which coordinates executive branch science and technology policy, released its 5-year strategic plan for STEM education, which stated that federal agencies would focus on building and using evidence-based approaches to evaluate the federal investment in STEM education. DOE officials told us that, because the ORISE program consists of separate activities that ORAU carries out on behalf of DOE and other sponsoring agencies, these agencies choose whether to assess the effectiveness of ORISE program activities as part of their other investments in STEM education. As a result, other than periodically evaluating ORAU’s performance (with input from sponsoring agencies) under its contract to determine ORAU’s award fee, DOE does not assess the overall effectiveness of the activities that ORAU carries out under the ORISE program, according to a DOE official. 
For example, the official said DOE does not assess how ORISE program activities at other sponsoring agencies contribute to the ORISE program’s objective to enhance the quantity, quality, and diversity of the future scientific and engineering workforce and to increase the scientific and technical literacy of the U.S. citizenry. Sponsoring agency components establish their own objectives for sponsoring research participants and decide whether and how to assess the extent to which the ORISE program meets those objectives, according to DOE officials. Some but not all DOE, DOD, and HHS components have used questionnaires, and some components have used other methods to assess how well the ORISE program is working in the short term, such as over the course of a research participant’s appointment. In particular, some components use questionnaires developed with assistance from ORAU and administered to research participants, and sometimes to mentors. ORISE program coordinators and other officials at sponsoring agency components described other methods they use to assess the program, such as asking research participants about their experiences and monitoring the progress of research participants’ research projects, research participants’ publications and presentations related to their research, and the number of current agency employees who were past ORISE research participants. In addition, one of the program support functions that ORAU can offer to sponsoring agencies at the cost of the service is performing an assessment of ORISE program effectiveness. In response to a request from the sponsoring agency component, ORAU performed such an assessment for the Joint Prisoner of War/Missing in Action Accounting Command and issued a report in August 2014. The methods being used by the sponsoring components we reviewed assess how well the program is meeting the short-term needs of research participants and mentors. 
For example, some research participant questionnaires included questions about research participants’ satisfaction with their assignment, training, mentoring, stipends, and program administration. DOD mentor questionnaires include questions on reasons for renewing a research participant’s appointment and on research participants’ skills and knowledge. A DOE Office of Science official told us that DOE is working with other agencies to develop methods for assessing the long-term outcomes of STEM education efforts, such as the extent to which the ORISE program increases the diversity of the STEM workforce. The official noted that, without such methods, agencies face challenges in assessing the long-term effectiveness of the ORISE program. For example, according to the official, such challenges include developing methods to track research participants over the course of their careers and determining the extent to which a participant’s degree of success in a STEM field is a result of the ORISE program as opposed to other educational experiences. In 2011, OMB’s Office of Federal Procurement Policy issued guidance to assist agency officers and employees in ensuring that only federal employees perform work that is inherently governmental or otherwise needs to be reserved to the public sector. This guidance directs agencies to develop and maintain internal procedures; take appropriate steps to help employees understand and meet their responsibilities; and periodically evaluate the effectiveness of their internal management controls for reserving work for federal employees. In accordance with OMB’s guidance, agencies that sponsor ORISE research participants are responsible for ensuring that research participants at their agencies do not perform inherently governmental functions. 
The documents we reviewed that DOE, DOD, and HHS issued regarding research participants at their agencies, and that sponsoring agency components’ coordinators, mentors, and research participants use, varied in their level of detail on activities considered inherently governmental functions. For example, within HHS, ORISE program handbooks from FDA’s Center for Veterinary Medicine and Center for Drug Evaluation and Research included examples of activities research participants should not perform, such as serving as a drug, device, safety, or facilities reviewer. Similarly, within DOD, the research participant appointment letters used by the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics included detailed guidance, such as a statement that research participants should not accept policy, budget, or program management authority. In contrast, sample appointment letters that we reviewed used by HHS’s CDC and DOD’s U.S. Army Environmental Command stated only that the research participant will not enter into an employee/employer relationship with ORISE, DOE, or any other office or agency, and did not specifically cite the prohibition on performing inherently governmental activities. The terms of appointment developed by ORAU and used by DOE, DOD, and HHS to make research participant appointments state only that the appointment is an educational experience and not a contract of employment. Statements of work agreed upon as part of interagency agreements between DOE and sponsoring agencies also varied in their level of detail about activities considered to be inherently governmental functions. For example, a statement of work for the CDC stated that ORISE research participation projects should not include activities reserved for federal employees, such as those involving budget or program management authority. 
In contrast, a statement of work for the National Institutes of Health did not include this level of detail, stating only that individuals selected for appointments do not become employees. DOE and other sponsoring agency officials noted that ORISE research participants are assigned to research projects that generally do not involve inherently governmental functions. A DOE Office of Science official said that the research focus of most ORISE appointments reduced the risk of those research participants performing inherently governmental functions. However, we found that some research participants’ projects involve activities that are closely associated with inherently governmental functions, such as participating in policy and strategic planning meetings, which may increase the risk of the participants performing inherently governmental functions. The DOE Office of Science official described how, in such cases, DOE provided more detailed briefings on inherently governmental functions for certain research participants, as well as briefings for their mentors. However, officials at other sponsoring agency components we interviewed did not describe providing such briefings as a standard practice for coordinators, mentors, or research participants. For example, the position description for a research participant in the DOD Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics included participating in policy and strategic planning meetings, but officials at this DOD component did not describe providing briefings on inherently governmental functions, increasing the importance of written guidance. 
Not having detailed guidance increases the risk that coordinators responsible for managing overall participation in the program and mentors responsible for directing research participants’ day-to-day activities may overlook the possibility of research participants engaging in inherently governmental functions, especially in cases where participants’ activities are closely associated with inherently governmental functions. Development of detailed guidance could help sponsoring agencies fulfill their responsibilities as identified in OMB’s Office of Federal Procurement Policy guidance on inherently governmental functions. By providing hands-on research experiences in government agencies for students, postgraduates, and faculty, the ORISE research participation program makes an important contribution to federal efforts to help prepare students and teachers for careers in STEM fields. Responsibility for administering the program is dispersed among agencies that sponsor research participants. In particular, agencies are responsible for ensuring that research participants do not perform inherently governmental functions—for example, by developing guidance and other documents for research participants, coordinators, and mentors. Having this responsibility allows agencies to tailor guidance on inherently governmental functions to the features of the ORISE program at their agencies, such as the types of projects to which research participants are assigned. However, the level of detail in documents currently used by DOE, DOD, and HHS varies, with some documents describing specific types of activities that are inherently governmental functions and others only providing general statements that research participants are not federal government employees. 
More detailed guidance can help ORISE coordinators, mentors, and research participants ensure that they are adhering to the prohibition on research participants, as nonfederal employees, performing inherently governmental functions. We recommend that the Secretaries of Energy, Defense, and Health and Human Services develop detailed guidance to ensure that ORISE program coordinators, mentors, and research participants are fully informed of the prohibition on nonfederal employees performing inherently governmental functions. We provided a draft of this report to DOE, DOD, and HHS for their review and comment. In their written comments, reproduced in appendices III through V, DOE, DOD, and HHS concurred with our recommendation. DOE and HHS also provided technical comments, which we incorporated as appropriate. In their written comments, DOE, DOD, and HHS described the measures they will take to implement our recommendation on inherently governmental functions. In particular, DOE stated that it plans to provide detailed guidance to all relevant parties involved in DOE-sponsored research participation activities administered through ORISE within 180 days, following consultation with relevant DOE offices. DOD stated that detailed guidance will be developed to further ensure that those connected with the ORISE program are fully informed of the prohibition on nonfederal employees performing inherently governmental functions. HHS stated that it is developing an agency-wide policy, including a section on inherently governmental functions, that will provide guidance to agency program coordinators, mentors, and research participants. In its letter and technical comments, DOE stated that the draft report did not reflect detailed discussions we had with DOE officials regarding inherently governmental functions. 
In addition, DOE stated that the draft report significantly understated the extent to which DOE communicates the prohibition of inherently governmental functions to sponsored participants and agency mentors. We do not believe our report understates DOE’s efforts. For example, our report includes a discussion of the detailed briefings that DOE Office of Science officials provide on inherently governmental functions to research participants selected for a program designed to expose the participants to federal policymaking. Other DOE, DOD, and HHS sponsoring agency components we interviewed did not describe a similar practice for their coordinators, mentors, or research participants. Our report’s discussion of these briefings, as well as of documents issued by DOE for the ORISE program, reflects the extent of communications on inherently governmental functions that DOE provided to us. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Energy, Defense, and Health and Human Services; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

Appendix I: Examples of Subject Areas for ORISE Research Participant Projects in Fiscal Year 2014

Improving prevention and treatments of emerging foreign animal diseases, climate change impacts on forests, porcine epidemic diarrhea virus, and sensor networks on variable rate irrigation systems. Laser systems, nanomaterials, acoustics, neurobiology, additive manufacturing, civil engineering, cognitive modeling, intelligent sensors, exercise science, and visual analytics. 
Weather impacts on military projects, cognitive function and psychological performance of soldiers, health promotion and wellness, environmental medicine, improving military field equipment, and science and technology policy. Infectious disease or deployment health surveillance, clinical and health care epidemiology, optical coherence tomography, and aeromedical studies. Forensic sciences, human immunodeficiency virus/acquired immune deficiency syndrome prevention, public health and preventive medicine studies, and science and technology policy. Neutron scattering, fusion energy, efficiency of renewable energy sources, computational sciences, materials sciences, process controls of advanced power systems, gas sensors and high temperatures, improving extraction of earth elements, quantum computing, biofilms and biotechnology, advanced manufacturing (carbon fiber), climate change, and science and technology policy. Infectious diseases (e.g., influenza, sexually transmitted, food borne, vector borne, respiratory), chronic diseases (e.g., heart, obesity, cancer), environmental health, toxic substances, health statistics, and public health preparedness. Toxicology, food safety, drug evaluation and testing, biological therapeutics, tobacco products, blood products, medical devices, biotechnology products, translational sciences, women’s health, vaccines, cell and gene therapies, and regulatory science. Localization of proteins using molecular markers, gene regulatory effects in cancer, medical informatics, and central nervous system injuries. Public health economics, population based model testing, clinical care models, minority health, women’s health, tobacco prevention initiatives, national human immunodeficiency virus/acquired immune deficiency syndrome strategies, and geospatial analysis of underserved populations. 
Encryption for criminal databases, improving materials for coastal bridges, computer simulations of potential terrorist attacks or natural disasters, brain-like modeling systems, searchable databases of potential threats, human trafficking, and detecting and identifying explosive-related threats. Data analysis of housing and urban development impacts and value on communities. Department of the Interior: Data collection and surveys related to conservation measures for fish and wildlife. Clean energy and climate change policy and analyses in the international economy, and building efficiencies. Climate change, software codes for aerial sampling systems, urban ecosystems, nanoparticles and surface coating, waste disposal, safety of water supply, and biomarkers for environmental contaminants. Juvenile prostitution and child abduction, causes of postmortem hair root banding, forensic applications of isotopes, crimes against adults, and identification of facial phenotypic markers. Includes the Office of the Secretary, Office of Diversity Management and Equal Opportunity, National Geospatial-Intelligence Agency, Defense Threat Reduction Agency, Defense Prisoner of War/Missing in Action Accounting Command, and U.S. Southern Command. Includes the Office of the Secretary, the Health Resources and Services Administration, and the Center for Medicare and Medicaid Innovation. The following tables detail federal agencies’ expenditures for, and the research participant appointments they sponsored as part of, the Oak Ridge Institute for Science and Education (ORISE) research participation program for fiscal years 2010 through 2014. Table 1 identifies agencies’ total annual expenditures for their involvement in the ORISE program. Table 2 identifies the numbers of appointments at each agency for each year. 
Table 3 details each agency’s total expenditures for the ORISE program for fiscal years 2010 through 2014 by type of expense, including stipends, travel, other research participant expenses, program support and overhead, and federal administrative and security charges. In each table, the three agencies that account for the largest share of expenditures and appointments—the Department of Health and Human Services (HHS), the Department of Defense (DOD), and the Department of Energy (DOE)—are broken out into component agencies that sponsored research participants. In addition to the individual named above, Joseph Cook (Assistant Director), Sherri Doughty, Ellen Fried, Tobias Gillett, Kirsten Lauber, Gerald Leverich, Cynthia C. Norris, Stephanie Shipman, Kathryn Smith, Jeanette Soares, Sara Sullivan, and Thema Willette made key contributions to this report.
The ORISE research participation program seeks to enhance the future scientific and engineering workforce by providing students, postgraduates, and faculty with hands-on research experiences in federal agencies. The program is administered by a DOE contractor, and other agencies sponsor research participants via interagency agreements with DOE. Research participants engage in a variety of projects at DOE and other sponsoring agencies, but they are not considered federal government employees and thus are prohibited from performing inherently governmental functions. GAO was asked to review the ORISE research participation program. This report examines (1) program expenditures by all sponsoring agencies and (2) selected agencies' assessments of program effectiveness and their guidance on inherently governmental functions. GAO reviewed program data for fiscal years 2010-2014, the five most recent years for which data were available; examined program policies and guidance at DOE, DOD, and HHS, the three agencies that sponsored the most participants in fiscal year 2014; and interviewed officials at those three agencies. For fiscal years 2010 through 2014, the 11 departments and other federal agencies that sponsor research participants collectively expended $776.4 million for activities carried out through the Oak Ridge Institute for Science and Education (ORISE) research participation program (ORISE program). The three agencies with the highest expenditures for the program over the 5-year period were the Department of Energy (DOE), which oversees the contractor managing ORISE, and the Department of Defense (DOD) and Department of Health and Human Services (HHS), which both sponsor research participants via interagency agreements with DOE. Expenditures increased 73 percent over that period, and the number of appointments increased 42 percent. 
Stipends accounted for 82 percent of expenditures over that period, with the remainder going to other participant expenses, overhead and program support, and administrative and security charges. Agencies' expenditures per appointment varied for several reasons, such as differences in methods of setting stipends. Components within DOE, DOD, and HHS that sponsor research participants have performed some assessments of the short-term effectiveness of the ORISE program, but provide varying levels of detail to agencies' employees and research participants about inherently governmental functions—those functions that are so intimately related to the public interest as to require performance by federal government employees. Program effectiveness. Sponsoring agency components establish their own objectives for research participants and can decide whether and how to assess the extent to which the ORISE program meets those objectives. DOE, DOD, and HHS components have used questionnaires and other methods to assess how well the ORISE program meets the short-term needs of research participants and of the agency staff who oversee their activities. Agencies also face challenges in assessing the program's long-term effectiveness; for example, they do not have methods to track research participants over their careers to determine the extent to which participants' success is a result of the program. DOE has worked with other agencies on developing ways to address such challenges. Inherently governmental functions. Federal guidance directs agencies to develop internal procedures to ensure that only federal employees perform inherently governmental functions. DOE, DOD, and HHS sponsoring components' guidance for research participants that GAO reviewed had varying levels of detail on inherently governmental functions. 
Officials at these agencies said that research participants' projects generally do not involve inherently governmental functions, but GAO found that some research participants' projects involve activities that are closely associated with inherently governmental functions, such as participating in certain policy and strategic planning meetings, which may increase the risk of the participants performing inherently governmental functions. Development of detailed guidance could help sponsoring components reduce this risk and help officials better ensure adherence to the federal guidance on inherently governmental functions. GAO recommends that DOE, DOD, and HHS develop detailed guidance to inform their employees and research participants about inherently governmental functions. DOE, DOD, and HHS concurred with the recommendation and said they will take additional measures to provide detailed guidance to relevant parties.
Prior to the passage of ATSA, the screening of passengers and checked baggage had been performed by private screening companies under contract to the airlines. The Federal Aviation Administration (FAA) was responsible for ensuring compliance with screening regulations. With the passage of ATSA and the transfer of aviation security responsibilities to TSA, including passenger and checked baggage screening at airports, TSA assigned FSDs—the top-ranking TSA authorities responsible for security at the nation’s airports—to one or more commercial airports to oversee security activities. TSA has approximately 157 FSD positions at commercial airports nationwide to lead and coordinate TSA security activities. Although an FSD is responsible for security at each commercial airport, not every airport has an FSD dedicated solely to that airport. Most category X airports have an FSD responsible for that airport alone, while at other airports the FSD located at a hub airport has responsibility over one or more spoke airports of the same or smaller size. In addition to establishing TSA and giving it responsibility for passenger and checked baggage screening operations, ATSA also set forth specific enhancements to screening operations for TSA to implement, with deadlines for completing many of them. These requirements include assuming responsibility for screeners and screening operations at more than 400 commercial airports by November 19, 2002; establishing a basic screener training program composed of a minimum of 40 hours of classroom instruction and 60 hours of on-the-job training; conducting an annual proficiency review of all screeners; conducting operational testing of screeners; requiring remedial training for any screener who fails an operational test; and screening all checked baggage for explosives using explosives detection systems by December 31, 2002. 
As mandated by ATSA, TSA hired and deployed a TSO workforce to assume operational responsibility for conducting passenger and checked baggage screening. Passenger screening is a process by which authorized TSA personnel inspect individuals and property to deter and prevent the carriage of any unauthorized explosive, incendiary, weapon, or other dangerous item onboard an aircraft or into a sterile area. TSOs must inspect individuals for prohibited items at designated screening locations. The four passenger screening functions are (1) X-ray screening of property, (2) walk-through metal detector screening of individuals, (3) hand-wand or pat-down screening of individuals, and (4) physical search of property and trace detection for explosives. Checked baggage screening is a process by which authorized TSOs inspect checked baggage to deter, detect, and prevent the carriage of any unauthorized explosive, incendiary, or weapon onboard an aircraft. Checked baggage screening is accomplished through the use of explosive detection systems (EDS) or explosive trace detection (ETD) systems, and through the use of other means, such as manual searches, canine teams, and positive passenger bag match, when EDS and ETD systems are unavailable. In addition to establishing requirements for passenger and checked baggage screening, ATSA charged TSA with the responsibility for ensuring the security of air cargo, including, among other things, establishing security rules and regulations covering domestic and foreign passenger carriers that transport cargo, domestic and foreign all-cargo carriers, and domestic indirect air carriers—carriers that consolidate air cargo from multiple shippers and deliver it to air carriers to be transported; and overseeing implementation of air cargo security requirements by air carriers and indirect air carriers through compliance inspections. 
In general, TSA inspections are designed to ensure air carrier compliance with air cargo security requirements, while air carrier inspections focus on ensuring that cargo does not contain weapons, explosives, or stowaways. TSA is responsible for inspecting 285 passenger and all-cargo air carriers with about 2,800 cargo facilities nationwide, as well as 3,800 indirect air carriers with about 10,000 domestic locations. In conducting inspections, TSA inspectors review documentation, interview carrier personnel, directly observe air cargo operations, or conduct tests to determine whether air carriers and indirect air carriers are in compliance with air cargo security requirements. In 2004, an estimated 23 billion pounds of air cargo was transported within the United States, with about a quarter of this amount transported on passenger aircraft. Recently, DHS reported that most cargo on passenger aircraft is not physically inspected. ATSA also granted TSA the responsibility for overseeing U.S. airport operators’ efforts to maintain and improve the security of commercial airport perimeters, access controls, and airport workers. While airport operators, not TSA, retain direct day-to-day operational responsibilities for these areas of security, ATSA directs TSA to improve the security of airport perimeters and the access controls leading to secured airport areas, as well as take measures to reduce the security risks posed by airport workers. Each airport’s security program, which must be approved by TSA, outlines the security policies, procedures, and systems the airport intends to use in order to comply with TSA security requirements. FSDs oversee the implementation of the security requirements at airports. Of TSA’s 950 aviation security inspectors located at airports throughout the United States, 750 are considered generalists who conduct a variety of aviation security inspections, and 200 are dedicated to conducting air cargo inspections. 
The FSD at each airport is responsible for determining the scope and emphasis of the inspections, and has discretion over how to assign local inspection staff. TSA provides local airport FSDs and inspectors with goals for the number of inspections to be conducted per quarter. In recent years, TSA has taken numerous actions related to the deployment, training, and performance of its aviation security workforce. TSA has, for example, taken action to support the authority of FSDs at airports, though additional clarification of their roles is needed. TSA also has improved the management and deployment of its TSO workforce with the use of a formal staffing model, though hiring and deployment challenges remain. TSA has also strengthened TSO training, and implemented various approaches to measuring TSO performance related to passenger and baggage screening activities. In recent years, TSA has taken steps to ensure that FSDs, as the ranking TSA authorities at airports, coordinated their security actions with various airport stakeholders, and had sufficient authority to carry out their responsibilities. In September 2005, we reported on the roles and responsibilities of FSDs and other issues related to the position, including the extent to which they formed and facilitated partnerships with airport stakeholders. At that time, we reported that the FSDs and most stakeholders at the seven airports we visited had developed partnerships that were generally working well. TSA recognized that building and maintaining partnerships with airport stakeholders was essential to FSDs’ success in addressing security as well as maintaining an appropriate level of customer service. To that end, TSA established general guidance for FSDs to follow in building stakeholder partnerships, but left it to the FSDs to determine how best to achieve effective partnerships at their respective airports. 
As a part of their security responsibilities, FSDs must coordinate closely with airport stakeholders—airport and air carrier officials, local law enforcement, and emergency response officials—to ensure that airports are adequately protected and prepared in the event of a terrorist attack. FSDs’ success in sustaining and ensuring the effectiveness of aviation security efforts is dependent on their ability to develop and maintain effective partnerships with these stakeholders. FSDs need to partner with law enforcement stakeholders, for example, because they do not have a law enforcement body of their own to respond to security incidents. Partnerships can be of mutual benefit to FSDs and airport stakeholders and can enhance customer service. For example, FSDs rely on air carrier data on the number of passengers transiting through checkpoints to appropriately schedule screeners, and air carriers rely on the FSD to provide an efficient screening process to minimize wait times for passengers. At the airports we visited, FSDs and stakeholders cited several ways FSDs maintained partnerships, including being accessible to their stakeholders to help resolve problems and meeting with stakeholders to discuss how to implement new security policies. In addition, a variety of communication and coordination efforts were in place at the airports we visited, and many of these efforts existed before TSA assigned FSDs to airports. Formal mechanisms included security and general airport operations meetings, incident debriefings, and training exercises to help ensure a coordinated response in the event of a security incident. We also found that in response to concerns over FSD authority in responding to airport-specific security needs, in 2004, TSA made a number of changes to better support and empower the FSD. 
These changes included establishing a local hiring initiative that vested more hiring authority with the FSDs to address airport staffing needs, providing flexibility to offer training locally to screeners, increasing authority to address performance and conduct problems, relocating five area director positions from headquarters to the field in conjunction with establishing a group to provide operational support and a communication link with headquarters, and establishing a mentoring program for newly appointed FSDs or their deputies. Most of the 25 FSDs we interviewed generally viewed these changes favorably. For example, most were satisfied with TSA’s new local hiring process that provided more options for FSDs to be involved with hiring screeners, and most said that the new process was better than the more centralized hiring process it replaced. TSA officials concluded, among other things, that TSO candidates selected at airports where the FSD and staff conducted the hiring process were more selective in accepting offers—leading to lower attrition—because they had more knowledge of what the job would entail than candidates did when contractors handled the hiring process. In addition, most of the FSDs we interviewed also saw value in the headquarters group TSA established to provide operational support to the field and a communication link among headquarters, field-based area directors, and FSDs. One area where we noted room for improvement at the FSD level was in how the FSD’s authority has been defined. In September 2005, we reported that TSA had developed guidance that describes the many roles and responsibilities of FSDs, most of which are associated with securing commercial airports from terrorist threats. 
However, while the guidance clearly defined FSD roles and responsibilities, TSA’s primary document outlining FSDs’ authority was outdated and lacked clarity regarding FSD authority relative to that of other airport stakeholders with whom FSDs must coordinate closely to help ensure the effectiveness of aviation security efforts. The absence of a clear understanding of the authority of the position had reportedly resulted in confusion during past security incidents and had raised concerns among some stakeholders at both the national and airport levels about possible ambiguity regarding FSDs’ authority during incidents. Accordingly, we recommended that steps be taken to update TSA’s Delegation of Authority to FSDs to clearly reflect the authority of FSDs relative to that of airport stakeholders during security incidents and communicate the authority of the position, as warranted, to the FSDs and all airport stakeholders. Such action would benefit FSDs by further enabling them to communicate and share consistent information about their authority with their staff and airport stakeholders, including law enforcement agencies. In commenting on our recommendation, DHS stated that a new restatement of the Delegation Order had been drafted by a working group composed of FSDs from the FSD Advisory Council and relevant stakeholders and was being internally coordinated for comment and clearance. To accomplish its security mission, TSA needs a sufficient number of passenger and checked baggage TSOs trained and certified in the latest screening procedures and technology. We reported in February 2004 that staffing shortages and TSA’s hiring process had hindered the ability of some FSDs to provide sufficient resources to staff screening checkpoints and oversee screening operations without using additional measures such as overtime. 
TSA has acknowledged that its initial staffing efforts created imbalances in the screener workforce and over the past 2 years has taken steps to address these imbalances by, among other things, meeting a congressional requirement to develop a staffing model for TSOs. Specifically, the Intelligence Reform and Terrorism Prevention Act of 2004 required TSA to develop and submit to Congress standards for determining the aviation security staffing for all airports at which screening is required. The act also directed GAO to review these standards, which we are doing. These staffing standards are to provide for necessary levels of airport security, while also ensuring that security-related delays experienced by airline passengers are minimized. In June 2005, TSA submitted its report on aviation security staffing standards to Congress. Known as the Screening Allocation Model (SAM), these standards are intended to provide an objective measure for determining TSO airport staffing levels, while staying within the congressionally mandated limit of 45,000 FTE screeners. Whereas TSA’s prior staffing model was demand-driven based on flight and passenger data, the SAM analyzes not only demand data but also data on the flow of passengers and baggage through the airport and the availability of the workforce. In determining the appropriate TSO staffing levels, the SAM first considers the workload demands unique to each individual airport—including flight schedules, load factors and connecting flights, and the number of passenger bags. These demand inputs are then processed against certain assumptions about the processing of passengers and baggage—including expected passenger and baggage processing rates, required staffing for passenger lanes and baggage equipment, and equipment alarm rates. Using these and various other data, the SAM determines the daily workforce requirements and calculates a work schedule for each airport. 
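The kind of demand-driven calculation described above can be illustrated with a deliberately simplified sketch. The function, parameter names, and all rates below are hypothetical assumptions chosen for illustration; they are not TSA’s actual SAM parameters or formulas.

```python
# Hypothetical, simplified sketch of a demand-driven staffing calculation of
# the kind a model such as the SAM might perform for one operating hour.
# All rates here are illustrative assumptions, not TSA parameters.

def required_tso_ftes(passengers_per_hour, bags_per_hour,
                      pax_per_screener_hour=150, bags_per_screener_hour=100,
                      alarm_rate=0.10, alarm_resolution_factor=1.5):
    """Estimate screener staffing needed for one operating hour."""
    pax_screeners = passengers_per_hour / pax_per_screener_hour
    # Alarmed bags take extra handling time, modeled here as a multiplier.
    effective_bags = bags_per_hour * (1 + alarm_rate * (alarm_resolution_factor - 1))
    bag_screeners = effective_bags / bags_per_screener_hour
    return pax_screeners + bag_screeners

demand = required_tso_ftes(passengers_per_hour=1500, bags_per_hour=900)
print(round(demand, 1))
```

A real model would repeat a calculation like this across each hour of each airport’s flight schedule, then fit full-time and part-time shifts to the resulting demand curve.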
The schedule identifies a recommended mix of full-time and part-time staff and a total number of TSO full-time equivalents (FTE) needed to staff the airport, consistent with a goal of a maximum wait time of 10 minutes for processing passengers and baggage. For fiscal year 2006, the SAM estimated a requirement of 42,170 TSO FTEs for all airports nationwide. In order to stay within a 43,000 TSO FTE budgetary limit for fiscal year 2006, TSA officials reduced the number of FTEs allocated to airports to 42,056, a level that allowed the agency to fund the 615 TSO FTEs in the National Screener Force—a force composed of TSOs who provide screening support to all airports—and to maintain a contingency of 329 TSO FTEs in reserve to meet unanticipated demands, such as a new air carrier coming on line at an airport. As of January 2006, there were 37,501 full-time TSOs and 5,782 part-time TSOs on board nationwide, representing an annualized rate of 41,085 TSO FTEs. According to TSA headquarters officials, the SAM can be adjusted to account for the uniqueness of particular airport security checkpoints and airline traffic patterns. Further, it is up to the FSDs to ensure that all of the data elements and assumptions are accurate for their airports, and to bring to TSA’s attention any factors that should be reviewed to determine if changes to the SAM are appropriate. The President’s fiscal year 2007 budget requests a total of 45,121 FTEs under the Passenger and Baggage TSO personnel compensation and benefits categories. As part of our ongoing review of the SAM, we have identified several preliminary concerns about TSA’s efforts to address its staffing imbalances and ensure appropriate coverage at airport passenger and checked baggage screening checkpoints. At the five airports we visited, FSD staff raised concerns about the SAM assumptions as they related to their particular airports. 
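The fiscal year 2006 allocation figures reported above are internally consistent, as a simple arithmetic check shows (the figures are from the text; the variable names are ours):

```python
# FY2006 figures from the text: the airport allocation plus the National
# Screener Force and the contingency reserve must fit within the budget limit.
budget_limit_ftes = 43000
airport_allocation = 42056
national_screener_force = 615
contingency_reserve = 329

total = airport_allocation + national_screener_force + contingency_reserve
print(total == budget_limit_ftes)  # the three components sum to exactly 43,000
```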
Among other things, they noted that the recommended 20 percent part-time TSO workforce—measured in terms of FTEs—often could not be reached, the expected processing rates for passenger and baggage screening were not being realized, non-passenger screening at large airports was higher than assumed, and the number of TSO FTEs needed per checkpoint lane and per baggage screening machine was not sufficient for peak periods. Regarding the SAM assumption of a 20 percent part-time TSO FTE level across all airports, FSD staff we visited stated that the 20 percent goal has been difficult to achieve because of, among other things, economic conditions leading to competition for part-time workers, remote airport locations coupled with a lack of mass transit, TSO base pay that has not changed since fiscal year 2002, and part-time workers’ desire to convert to full-time status. According to TSA headquarters officials, while the nationwide annual TSO attrition rate is about 23 percent (compared to a rate of 14 percent reported in February 2004), it is over 50 percent for part-time TSOs. TSA has struggled with hiring part-time TSOs since it began actively recruiting them in the summer of 2003. In February 2004, we reported that FSDs at several of the airports we visited stated that they experienced difficulty in attracting needed part-time TSOs, which they believed to be due to many of the same factors, such as low pay and benefits, undesirable hours, the location of their airport, the lack of accessible and affordable parking or public transportation, and the high cost of living in the areas surrounding some airports. These FSDs stated that very few full-time TSOs were interested in converting to part-time status—a condition that still exists—and TSA officials stated that attrition rates for part-time TSOs were considerably higher than those for full-time TSOs. 
At two of the five airports we visited as part of our ongoing review of the SAM, FSD staff told us that they had not been able to hire up to their authorized staffing levels. In February 2004, we reported that many of the FSDs we interviewed expressed concern that TSA’s hiring process was not responsive to their needs and hindered their ability to reach their authorized staffing levels and adequately staff screening checkpoints. Specifically, FSDs expressed concern with the lack of a continuous hiring process to backfill screeners lost through attrition, and their lack of authority to conduct hiring on an as-needed basis. We reported that TSA was taking steps to make the hiring process more responsive to FSDs’ needs. Since then, TSA has provided FSDs with more input into the hiring process in an effort to streamline the process and enable FSDs to more quickly meet their staffing needs. During our five airport visits, some FSD staff we interviewed also cited another limitation of the SAM—specifically, that the model does not account for screeners who are performing administrative or other duties. The officials also noted that, because they are not authorized to hire a sufficient number of mission support staff, TSOs are being routinely used—in some cases full time—to carry out non-screening and administrative duties, including supporting payroll, scheduling, uniform supplies, legal support, logistics, and operations center activities. At the five airports we visited in January and February 2006, out of a total of 2,572 TSO FTEs on board at those airports, roughly 136 FTEs (just over 5 percent) were being used for administrative duties. FSD staff stated that some of these TSOs are being used on a part-time basis, while others are used on a full-time basis. The use of TSOs in these support functions could adversely affect the ability of FSDs to adequately staff their screening checkpoints. 
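The administrative share cited above follows directly from the reported figures:

```python
# Figures from the text for the five airports visited in January-February 2006.
total_tso_ftes = 2572
administrative_ftes = 136

share_percent = administrative_ftes / total_tso_ftes * 100
print(f"{share_percent:.1f}%")  # prints "5.3%", i.e., just over 5 percent
```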
To compensate for screener shortages and to enable operational flexibility to respond to changes in risk and threat, in October 2003, TSA established a National Screening Force (formerly known as the Mobile Screening Force established in November 2002) to provide screening support to all airports in times of emergency, seasonal demands, or under other special circumstances that require a greater number of screeners than regularly available to FSDs. In February 2004, we reported that the National Screening Force consisted of over 700 full-time passenger and baggage TSOs. TSA officials stated that while these screeners have a home airport to which they are assigned, they travel to airports in need of screening staff approximately 70 percent of the year. TSA budgeted from appropriations received in fiscal year 2006 for 615 FTEs for the National Screening Force. The President’s fiscal year 2007 budget request includes $35 million for operational expenses of the National Screening Force (not including salaries and benefits of force members). According to the budget request, in fiscal year 2007, the National Screening Force will generally be deployed only to those airports experiencing significant staffing shortfalls associated with increased seasonal traffic or when a special event, such as a Super Bowl or a large national conference, occurs requiring an immediate influx of additional TSO support. At one category X airport we recently visited, the FSD stated that because of challenges in hiring and retaining TSOs for this airport, he has had to rely on 59 members of the National Screening Force deployed to his airport, and had been relying on this force since 2004. The President’s fiscal year 2007 budget request states that TSA will continue to review methods for reducing costs associated with this force, including ensuring that each airport has a sufficient staffing program in place to address short-term needs. 
In the President’s fiscal year 2007 budget request, TSA identified several additional initiatives under way to address the management of the TSO workforce. These efforts include attempts to reduce attrition by creating a performance-based pay system and establishing retention incentives, including performance bonuses, retention allowances, college credit reimbursement, and flexible staffing. TSA also reported efforts to enhance opportunities for career advancement within the TSO job category, reduce on-the-job injuries by reengineering baggage screening areas, and deploy a national nurse care management program at 21 airports to assist TSOs in returning to work more quickly. Since we reported on TSO training in September 2003, TSA has taken a number of actions designed to strengthen training available to the TSO workforce as part of its efforts to enhance the performance of TSOs. In September 2003, we reported that TSA had not fully developed or deployed a recurrent training program for passenger TSOs. At that time, little training was available to TSOs once they completed their basic TSO training. Since then, TSA has expanded training available to the TSO workforce, such as introducing an Online Learning Center that makes self-guided courses available over TSA’s intranet and the Internet and expanding training available to supervisory TSOs. TSA also established a recurrent training requirement of 3 hours per week, averaged over a quarter, and provided FSDs with additional tools to facilitate and enhance TSO training, including at least one modular bomb set kit—containing components of an improvised explosive device (IED)—and at least one weapons training kit. TSA has also instituted a program called Threat in the Spotlight that, based on intelligence TSA receives, provides screeners with the latest threat information regarding terrorist attempts to get threat objects past screening checkpoints. 
Additionally, in December 2005, TSA reported completing enhanced explosives detection training for over 18,000 TSOs. This training included both classroom and hands-on experiences, and focused particularly on identifying X-ray images of IED component parts, not just a completely assembled bomb. TSA plans for the remaining TSO workforce to receive this training by June 2006 through the Online Learning Center or other delivery methods. TSA also has developed new training curriculums to support new screening approaches. For example, TSA recently developed a training curriculum for TSOs in behavior observation and analysis at the checkpoint to identify passengers exhibiting behaviors indicative of stress, fear, or deception. However, as we reported in May 2005, insufficient TSO staffing and a lack of high-speed Internet/intranet connectivity to access the Online Learning Center have made it difficult for all TSOs at many airports to receive required training and have limited TSO access to TSA training tools. As previously discussed, TSA is taking steps to address the TSO staffing challenges. However, it is too soon to determine whether these efforts will improve TSA’s ability to provide required training while maintaining adequate coverage for screening operations. In terms of access to the Online Learning Center, TSA plans to complete the deployment of high-speed Internet/intranet connectivity to airports during fiscal year 2007. TSA established its Online Learning Center to provide passenger and baggage screeners with online, high-speed access to training courses. However, effective use of the Online Learning Center requires high-speed Internet/intranet access, which TSA has not been able to provide to all airports. In May 2005, we reported that as of October 2004, about 45 percent of the TSO workforce did not have high-speed Internet/intranet access to the Online Learning Center. 
The President’s fiscal year 2007 budget request reports that approximately 220 of the more than 400 airport and field locations have full information technology infrastructure installation, including high-speed network connectivity, while the rest of the airports operate with dial-up access to TSA systems. According to the budget request, TSA will use $120 million in fiscal year 2006 to deploy high-speed connectivity to all category X and I airports and preliminary high-speed connectivity to all category II, III, and IV airports. The budget request includes a total of $90 million to support this effort in fiscal year 2007, of which $54 million is needed to complete the deployment of high-speed connectivity at category II, III, and IV airports. TSA has strengthened its efforts to measure the performance of the various components of the passenger and checked baggage screening systems—people, processes, and technology—but results of covert testing identified that weaknesses and vulnerabilities continue to exist. In November 2003, we reported on the need for TSA to strengthen its efforts to measure the performance of its screening functions. At that time, TSA had collected limited data on the effectiveness of its aviation security initiatives, including screening functions. Specifically, limited covert (undercover, unannounced) testing had been performed, the threat image projection (TIP) system used to aid TSOs in identifying threat objects within baggage was not fully operational at passenger screening checkpoints and was not available for checked baggage screening systems, and TSA had not fully implemented a congressionally mandated annual TSO proficiency review. Since then, TSA has implemented and strengthened efforts to collect performance data in each of these areas. 
In the area of covert testing, TSA headquarters increased the number of passenger and checked baggage screening covert tests it performs and recently changed its approach to covert testing to focus its resources on catastrophic threats—threats that could take down or blow up an airplane. TSA’s Office of Inspections (OI) (formerly the Office of Internal Affairs and Program Review, or OIAPR) conducts unannounced covert tests of TSOs to assess their ability to detect threat objects and to adhere to TSA-approved procedures. These tests, in which undercover OI inspectors attempt to pass threat objects through passenger screening checkpoints and in checked baggage, are designed to measure vulnerabilities in passenger and checked baggage screening systems and to identify systematic problems affecting the performance of TSOs in the areas of training, procedures, and technology. OI, which began covert testing in September 2002, conducted 836 tests in fiscal year 2003 and 2,369 tests in fiscal year 2004 using its staff of 183 full-time equivalents. In reporting its covert testing results, OI makes recommendations to TSA leadership that address deficiencies identified during testing and are intended to improve screening effectiveness. As of December 2005, OI had issued 29 reports to management on the results of its checkpoint and checked baggage covert testing. In total, the reports include 19 distinct recommendations related to passenger and checked baggage screening. Of these 19 recommendations, 11 relate to screener training. In September 2005, OI began implementing a revamped testing process that included a more risk-based approach and focused its resources on catastrophic threats. OI officials stated that they will continue testing. However, TSA leadership is reviewing the results of the revised testing, and final decisions regarding the structure, content, and frequency of future tests have not yet been made. 
Our analysis of TSA’s covert testing results for tests conducted between September 2002 and September 2005 found that, overall, weaknesses existed in the ability of screeners to detect threat objects on passengers, in their carry-on bags, and in checked baggage. Covert testing results in this analysis cannot be generalized either to the airports where the tests were conducted or to airports nationwide. In February 2004, TSA provided protocols to help FSDs conduct their own covert testing of local airport passenger screening activities—a practice that TSA had previously prohibited. Between May 2004 and April 2005, FSDs conducted a total of 17,954 local covert tests at 350 airports; as of February 2006, TSA reported that FSDs had conducted a total of 48,826 local covert tests. In February 2005, TSA released a general procedures document for local covert testing at checked baggage screening locations. Between March 2005 and September 2005, 1,370 local tests of EDS screening were conducted at 71 airports. TSA headquarters officials stated that a key challenge FSDs face in conducting local testing is the lack of available federal staff to conduct the testing, particularly at smaller airports. In May 2005, we reported that TSA officials stated that they had not yet begun to use data from local covert testing to identify training and performance needs because of difficulties in ensuring that local covert testing is implemented consistently nationwide. TSA officials stated in March 2006 that the data are available for FSDs to use to identify training needs and levels of TSO performance. Covert testing is one method TSA uses to measure the security effectiveness of passenger and checked baggage screening procedures and technologies in the operating environment, in addition to other TSA measures that assess the performance of passenger and checked baggage TSOs. 
One other source of information on TSO performance in detecting threat objects is the results from the TIP system. TIP is designed to test passenger screeners’ detection capabilities by projecting threat images, including images of guns, knives, and explosives, onto bags as they are screened during actual operations. TSOs are responsible for identifying the threat image and calling for the bag to be searched. Once prompted, TIP identifies to the screener whether the threat is real and then records the TSO’s performance in a database that could be analyzed for performance trends. TIP threat detection results in conjunction with OI covert test results and local testing are intended to assist TSA in identifying specific training and performance improvement efforts. In May 2005, we reported that in October 2003 TSA reactivated TIP as planned with an expanded library of 2,400 images at all but one of the more than 1,800 checkpoint lanes nationwide. In December 2005, TSA reported that it has further expanded the image library to include additional images of IEDs and IED components as part of its effort to improve TSOs’ detection of explosives. Additionally, the President’s fiscal year 2007 budget request states that TSA plans to maximize the training benefits of the TIP system by tailoring TIP sessions to address individual TSO weaknesses revealed in user performance data. For example, if a TSO has particular difficulty identifying IEDs, the TIP would trigger the projection of a higher proportion of simulated IEDs while that TSO was operating the machine under standard circumstances. Despite these improvements, TIP is not yet available for checked baggage screening. In April 2004, we reported that TSA officials stated that they were working to resolve technical challenges associated with using TIP for checked baggage screening on explosives detection system (EDS) machines and have started EDS TIP image development. 
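The tailored TIP sessions described above amount to weighted sampling of threat images based on a TSO’s performance data. The sketch below illustrates that idea only; the category names, the weighting rule, and the boost factor are our illustrative assumptions, not TSA’s actual TIP logic.

```python
import random

# Hypothetical sketch of adaptive TIP image selection: categories in which a
# TSO misses more often are projected more frequently. All parameters here
# are illustrative assumptions.
def choose_threat_category(miss_rates, boost=2.0, rng=random):
    """Pick a threat-image category, up-weighting the worst-performing one."""
    worst = max(miss_rates, key=miss_rates.get)
    weights = {cat: (boost if cat == worst else 1.0) for cat in miss_rates}
    cats = list(weights)
    return rng.choices(cats, weights=[weights[c] for c in cats])[0]

rng = random.Random(0)
miss_rates = {"guns": 0.05, "knives": 0.08, "ieds": 0.20}
sample = [choose_threat_category(miss_rates, rng=rng) for _ in range(1000)]
print(sample.count("ieds") / 1000)  # roughly half the projections target IEDs
```

With a boost factor of 2 over three categories, the weakest category receives about half of all projected images, matching the idea of triggering "a higher proportion of simulated IEDs" for a TSO who struggles with them.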
However, in December 2004, TSA officials stated that because of severe budget reductions, TSA will be unable to begin implementing a TIP program for checked baggage in fiscal year 2005. Officials did not specify when such a program might begin. Another measure of TSO performance is the results of annual recertification testing. ATSA requires that each TSO receive an annual proficiency review to ensure he or she continues to meet all qualifications and standards required to perform the screening function. To meet this requirement, TSA established a recertification program. The first recertification program—which was conducted during the period October 2003 through March 2004—was composed of two assessment components, one of TSOs’ performance and the other of TSOs’ knowledge and skills. During the performance assessment component of the recertification program, TSOs are rated on both organizational and individual goals, such as maintaining the nation’s air security, vigilantly carrying out duties with utmost attention to tasks that will prevent security threats, and demonstrating the highest levels of courtesy to travelers to maximize their levels of satisfaction with screening services. The knowledge and skills assessment component consists of three modules: (1) knowledge of standard operating procedures, (2) image recognition, and (3) practical demonstration of skills. Across all airports, TSOs performed well on the recertification testing for the first 2 years the program was in place, with about 1 percent of TSOs subject to recertification failing to complete this requirement. In both years, TSOs faced the greatest difficulty on their first attempt to pass the practical demonstration of skills module—a hands-on simulated work sample used to evaluate a screener’s knowledge, skill, and ability when performing specific screener tasks along with the ability to provide customer service. 
According to TSA officials, at the completion of recertification at an airport, TSA management has access to reports at both the individual TSO and airport level, which identify the specific areas that were missed during testing. National level reports are also available that isolate areas that need improvement and can be targeted in basic and recurrent training. In fiscal year 2004, TSA established a performance measure for the recertification program. During the first year of recertification testing, dual-function TSOs who were actively working as both passenger and checked baggage TSOs were required to take only the recertification test for passenger TSOs. They were therefore not required to take the recertification testing modules required for checked baggage, even though they worked in that capacity. TSA’s second annual recertification testing, which began in October 2004, included components for dual-function TSOs, but did not include an image recognition module for checked baggage TSOs—which would include dual-function screeners performing checked baggage screening. TSA officials stated that a decision was made to not include an image recognition module for checked baggage TSOs during this cycle because not all checked baggage TSOs would have completed training on the onscreen resolution protocol by the time recertification testing was conducted at their airports. In October 2005, TSA released guidance for screener recertification that included an image recognition module for checked baggage and dual-function screeners trained in the onscreen alarm resolution protocol. In addition to enhancing its efforts to measure the performance of TSOs, TSA also has developed two performance indexes to measure the effectiveness of the passenger and checked baggage screening systems. 
These indexes measure overall performance through a composite of indicators and are derived by combining specific performance measures relating to passenger and checked baggage screening, respectively. Such measures can be useful in identifying shortfalls that might be addressed by initiatives to enhance the workforce, such as providing special training. Specifically, these indexes measure the effectiveness of the screening systems through machine probability of detection and covert testing results; efficiency through a calculation of dollars spent per passenger or bag screened; and customer satisfaction through a national poll, customer surveys, and customer complaints at both airports and TSA’s national call center. We reported in May 2005 that the screening performance indexes developed by TSA can be a useful analysis tool, but without targets for each component of the index, TSA will have difficulty performing meaningful analyses of the parts that make up the index. For example, without performance targets for covert testing, TSA will not have identified a desired level of performance related to screener detection of threat objects. Performance targets for covert testing would enable TSA to focus its improvement efforts on areas determined to be most critical, as 100 percent detection capability may not be attainable. In January 2005, TSA officials stated that the agency planned to track the performance of individual index components and establish performance targets against which to measure these components. Since then, TSA has finalized targets for the indexes, including targets for passenger and checked baggage covert testing. TSA has taken steps to strengthen oversight for key areas of aviation security, including domestic air cargo security operations conducted by air carriers, and airport perimeter security operations and access controls carried out by airport operators. 
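A composite index of this kind, with per-component targets of the sort we recommended, can be sketched as follows. The component names, values, weights, and targets below are illustrative assumptions, not TSA’s actual index or data.

```python
# Hypothetical sketch of a composite screening performance index: normalized
# component measures are weighted and combined, then each component is
# compared against its own target to locate shortfalls. All numbers are
# illustrative assumptions.
def composite_index(components, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(components[name] * weights[name] for name in weights)

components = {"detection": 0.80, "efficiency": 0.70, "satisfaction": 0.90}
targets    = {"detection": 0.85, "efficiency": 0.65, "satisfaction": 0.85}
weights    = {"detection": 0.5,  "efficiency": 0.2,  "satisfaction": 0.3}

index = composite_index(components, weights)
shortfalls = [name for name in components if components[name] < targets[name]]
print(round(index, 3), shortfalls)
```

The point of the per-component targets is visible in the last line: the overall index can look acceptable while an individual component (here, detection) still falls short of its target.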
For air cargo, TSA has increased the number of inspectors used to assess air carrier and indirect air carrier compliance with security requirements, and has incorporated elements of risk-based decision making to guide air cargo security needs. As of October 2005, however, TSA had not developed performance measures to determine to what extent air carriers and indirect air carriers are complying with air cargo security requirements, limiting TSA’s ability to effectively target its workforce for future inspections and fulfill its oversight responsibilities. On airport premises, TSA had, at the time of our 2004 review, begun evaluating the security of airport perimeters and the controls that limit access into secured airport areas, but had not completed actions to ensure that all airport workers employed in these areas were vetted prior to hiring and then trained. We reported in October 2005 that TSA had significantly increased the number of domestic air cargo inspections conducted of air carrier and indirect air carrier compliance with security requirements. We noted, however, that TSA had not developed performance measures to determine to what extent air carriers and indirect air carriers were complying with security requirements, and had not analyzed the results of inspections to systematically target future inspections on those entities that pose a higher security risk to the domestic air cargo system. Without these performance measures and systematic analyses, TSA will be limited in its ability to effectively target its workforce for future inspections and fulfill its oversight responsibilities for this essential area of aviation security. We also reported on other actions that TSA had taken to focus limited resources on the most critical security needs. Our analysis of TSA’s inspection records showed that between January 1, 2003, and January 31, 2005, TSA conducted 36,635 cargo inspections of air carriers and indirect air carriers and found 4,343 violations. 
Although TSA had compiled this information, the agency had not determined what constitutes an acceptable level of performance or compared air carriers’ and indirect air carriers’ performance against this standard. Without measures to determine an acceptable level of compliance with air cargo security requirements, TSA cannot assess the performance of individual air carriers or indirect air carriers against national performance averages or goals that would allow TSA to target inspections and other actions on those that fall below acceptable levels of compliance. According to TSA officials, the agency was working on developing short-term and long-term outcome measures for air cargo security, but they did not provide a timetable for when this effort would be completed. In addition, TSA had taken initial steps to compile information on the results of its compliance inspections of air carriers and indirect air carriers and identify the most frequent types of violations found. For example, from January 1, 2003, to January 31, 2005, TSA identified violations committed by air carriers and indirect air carriers involving noncompliance with air cargo security requirements in several areas— such as cargo acceptance procedures, access control to cargo facilities, and physical cargo inspections—that TSA had determined to be high-risk because they would pose the greatest risk to the safety and security of air cargo operations. TSA identified indirect air carriers’ failure to comply with their own security programs as the area with the most violations, which according to TSA officials is due, in part, to indirect air carriers’ unfamiliarity with air cargo security requirements. While TSA had identified frequently occurring violations, it had not yet determined the specific area of violation for a large number of inspections. In addition, TSA could not identify how many of its 36,635 inspections covered each air cargo security requirement. 
As a result, TSA could not determine the compliance rate for each specific area inspected. Without complete information on the specific air cargo security requirements that air carriers and indirect air carriers violated, as well as the number of times each topic area was inspected, TSA was limited in its ability to determine the compliance rates for specific air cargo security requirements and effectively target future inspections for air cargo security requirements that were most frequently violated and the air carriers and indirect air carriers that violate them. In June 2005, TSA officials informed us that in the future they intended to compile information on the number of instances in which specific air cargo security requirements were inspected. In addition, while TSA compiled information on the results of its compliance inspections, the agency had not yet systematically analyzed these results to target future inspections on security requirements and entities that pose a higher risk. Analyzing inspection results would be consistent with our internal control standards calling for comparisons of data to identify relationships that could form the basis for corrective actions, if necessary. TSA officials and the agency’s fiscal year 2005 annual domestic inspection and assessment plan identified the need for such analyses. According to TSA officials, the agency had recently hired one staff person to begin analyzing inspection data. In June 2005, TSA officials also stated that the agency was working to revise its Performance and Results Information System database to allow for more accurate recording of inspection violations. However, the agency had not systematically analyzed the results of its inspections to target future inspections of those entities that pose an increased security risk. Without an analysis of the results of its inspections, TSA had a limited basis to determine how best to allocate its inspection resources. 
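The kind of analysis described above, computing a compliance rate for each security requirement area and using the results to target future inspections, can be illustrated with a brief sketch. The area names and counts are hypothetical, not TSA inspection data.

```python
# Minimal sketch of the compliance-rate analysis described above: compute
# a compliance rate per security requirement area, then rank areas from
# lowest to highest compliance to target future inspections.
# Area names and counts are hypothetical, for illustration only.

inspections = {
    # area: (times inspected, violations found)
    "cargo_acceptance": (1200, 310),
    "facility_access_control": (950, 95),
    "physical_cargo_inspection": (800, 240),
}

def compliance_rate(times_inspected, violations):
    """Share of inspections in an area that found no violation."""
    return (times_inspected - violations) / times_inspected

rates = {area: compliance_rate(n, v) for area, (n, v) in inspections.items()}

# Lowest-compliance areas first: candidates for targeted inspections.
targets = sorted(rates, key=rates.get)
for area in targets:
    print(f"{area}: {rates[area]:.1%}")
```

In this hypothetical data, physical cargo inspection ranks first for targeting, since only 70 percent of its inspections found no violation; this is the sort of comparison TSA could not make without knowing how many inspections covered each requirement.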
Further, analyzing key program performance data and using the results of this analysis to effectively allocate resources are consistent with elements of a risk management approach. Specifically, analyzing the results of compliance inspection data could help focus limited inspection resources on those entities posing a higher security risk. Such targeting is important because TSA may not have adequate resources to inspect all air carriers and indirect air carriers on a regular basis. For example, as we reported in October 2005, according to TSA inspection data for the period from January 1, 2003, to January 31, 2005, compliance inspections identified a greater incidence of violations by indirect air carriers than by air carriers. In addition, the percentage of inspections of air carriers that did not identify a violation of air cargo security requirements was significantly higher than that for indirect air carriers. According to TSA officials, the agency was taking steps to enhance its ability to conduct compliance inspections of indirect air carriers. To further target its inspections, TSA was conducting special emphasis assessments, which include testing to identify air cargo security weaknesses. On the basis of its review of compliance inspection results for the period of January 2003 to January 2005, TSA identified 25 indirect air carriers and 11 air carriers with a history of violations related to air cargo security requirements. TSA officials stated that the agency began conducting tests on these air carriers and indirect air carriers in April 2005. TSA officials stated that the agency planned to conduct additional tests. However, TSA officials stated that the agency had not yet determined how it will use the results of its testing program to help interpret the results from its other compliance inspection efforts. TSA had also not analyzed inspection results to identify additional targets for future testing. 
Such analysis could include focusing compliance testing efforts on air carriers and indirect air carriers with a history of air cargo security violations related to high-risk areas. TSA has made efforts to incorporate risk-based decision making into securing air cargo, but has not conducted assessments of air cargo vulnerabilities or critical assets (cargo facilities and aircraft)—two crucial elements of a risk-based management approach without which TSA may not be able to appropriately focus its resources on the most critical security needs. TSA also completed an Air Cargo Strategic Plan in November 2003 that outlined a threat-based risk management approach and identified strategic objectives and priority actions for enhancing air cargo security. Then, in November 2004, TSA issued a proposed air cargo security rule to enhance and improve the security of air cargo transportation. TSA intends for this rule, once finalized, to implement most of the objectives set forth in the strategic plan. TSA had also not completed a methodology for assessing the vulnerability and criticality of air cargo assets, or established a schedule for conducting such assessments, because of competing agency efforts to address other areas of aviation security. TSA had established a centralized Known Shipper database to streamline the process by which shippers (individuals and businesses) are made known to carriers with whom they conduct business. However, the information on the universe of shippers was incomplete because shipper participation was not mandatory and the data had not been thoroughly reviewed. TSA estimated that the database represented less than a third of the total population of known shippers. Further, TSA had not taken steps to identify shippers who may pose a security threat, in part because TSA had incomplete information on known shippers. 
TSA was attempting to address this limitation through its November 2004 proposed air cargo security rule, which would make participation in the Known Shipper database mandatory. This would require air carriers and indirect air carriers to submit information on their known shippers to TSA’s Known Shipper database. Finally, TSA plans to take further steps to identify those shippers who may pose a security risk. In addition, TSA established a requirement for random inspection of air cargo to address threats to the nation’s aviation transportation system and to reflect the agency’s position that inspecting 100 percent of air cargo was not technologically feasible and would be potentially disruptive to the flow of air commerce. However, this requirement, which was revised in 2005 to increase the percentage of inspections required, contained exemptions based on the nature and size of cargo that may leave the air cargo system vulnerable to terrorist attack. TSA’s plans for enhancing air cargo security included implementing a system for targeting elevated risk cargo for inspection. Although the agency acknowledged that the successful development of this system was contingent upon having complete, accurate, and current targeting information, the agency had not yet completed efforts to ensure that the information to be used by the system is reliable. Further, through its proposed air cargo security rule, TSA planned to require air carriers and indirect air carriers to secure air cargo facilities, screen all individuals boarding all-cargo aircraft, and conduct security checks on air cargo workers. In commenting on the proposed air cargo security rule, industry stakeholders representing air carriers, indirect air carriers, and airport authorities stated that several of the proposals, including those mentioned above, may be costly and difficult to implement, and that TSA may have underestimated the costs associated with implementing these proposed measures. 
Our analysis of TSA’s estimate also suggested that it may have been an underestimate. TSA stated that it plans to reassess its cost estimates before issuing its final air cargo security rule. In October 2005, we made several recommendations to assist TSA in strengthening the security of the domestic air cargo transportation system. These recommendations included (1) developing a methodology and schedule for completing assessments of air cargo vulnerabilities and critical assets; (2) reexamining the rationale for existing air cargo inspection exemptions; (3) developing measures to gauge air carrier and indirect air carrier compliance with air cargo security requirements; (4) developing a plan for systematically analyzing and using the results of air cargo compliance inspections to target future inspections and identify systemwide corrective actions; (5) assessing the effectiveness of enforcement actions in ensuring air carrier and indirect air carrier compliance with air cargo security requirements; and (6) ensuring that the data to be used in the Freight Assessment System are complete, accurate, and current. DHS agreed with our recommendations. We currently have an ongoing review assessing the security of air cargo entering the United States from foreign countries. As discussed previously, domestic commercial airport authorities have primary responsibility for securing airport perimeters and restricted areas, whereas TSA conducts regulatory inspections to help ensure that airport authorities are complying with TSA security requirements. We reported in June 2004 on TSA’s efforts to strengthen the security of airport perimeters (such as airfield fencing and access gates), the adequacy of controls restricting unauthorized access to secured areas (such as building entry ways leading to aircraft), and security measures pertaining to individuals who work at airports. 
At the time of our review, we found TSA had begun evaluating commercial airport security but needed a better approach for assessing results. In addition, TSA required criminal history records checks and security awareness training for most, but not all, the airport workers called for in ATSA. Further, TSA did not require airport vendors with direct access to the airfield and aircraft to develop security programs, which would include security measures for vendor employees and property, as required by ATSA. TSA is responsible for, and, at the time of our 2004 review, had begun evaluating the security of airport perimeters and the controls that limit access into secured airport areas, but had not yet determined how the results of these evaluations could be used to make improvements to the nation’s airport system as a whole. Specifically, we found that TSA had begun conducting regulatory compliance inspections, covert testing of selected security procedures, and vulnerability assessments at selected airports. These evaluations—though not yet completed at the time of our report—identified perimeter and access control security concerns. For example, TSA identified instances where airport operators failed to comply with existing security requirements, including requirements related to access control. In addition, TSA identified threats to perimeter and access control security at each of the airports where vulnerability assessments were conducted in 2003. TSA had plans to begin conducting joint vulnerability assessments with the FBI but had not yet determined how it would allocate existing resources between its own independent airport assessments and the new joint assessments, or developed a schedule for conducting future vulnerability assessments. 
In addition, TSA had not yet determined how to use the results of its inspections in conjunction with its efforts to conduct covert testing and vulnerability assessments to enhance the overall security of the nation’s commercial airport system. In June 2004, we also reported that background checks were not required for all airport workers. TSA requires most airport workers who perform duties in secured and sterile areas to undergo a fingerprint-based criminal history records check. TSA further requires airport operators to compare applicants’ names against TSA’s aviation security watch lists. Once workers undergo this review, they are granted access to airport areas in which they perform duties. For example, those workers who have been granted unescorted access to secured areas are authorized access to these areas without undergoing physical screening for prohibited items (which passengers undergo prior to boarding a flight). To meet TSA requirements, airport operators transmit applicants’ fingerprints to a TSA contractor, which forwards the fingerprints to TSA; TSA then submits them to the FBI to be checked for criminal histories that could disqualify an applicant for airport employment. In March 2006, the TSA contractor reported that its background clearinghouse system had processed over 2 million criminal history record checks of airport and airline employees. TSA also requires that airport operators verify that applicants’ names do not appear on TSA’s “no fly” and “selectee” watch lists to determine whether applicants are eligible for employment. According to TSA, by December 6, 2002, all airport workers who had unescorted access to secured airport areas—approximately 900,000 individuals nationwide—had undergone a fingerprint-based criminal history records check and verification that they did not appear on TSA’s watch lists, as required by regulation. 
In late 2002, TSA required airport operators to conduct fingerprint-based checks and watch list verifications for an additional approximately 100,000 airport workers who perform duties in sterile areas. As of April 2004, TSA said that airport operators had completed all of these checks. ATSA also mandates that TSA require airport operators and air carriers to develop security awareness training programs for airport workers such as ground crews, and gate, ticket, and curbside agents of air carriers. However, while TSA requires such training for these airport workers if they have unescorted access to secured areas, the agency did not require training for airport workers who perform duties in sterile airport areas. According to TSA, training requirements for these airport workers have not been established because additional training would result in increased costs for airport operators. Further, TSA had not addressed the act’s provision that calls for the agency to require that airport vendors (companies doing business in or with the airport) with direct access to the airfield and aircraft develop security programs to address security measures specific to vendor employees. TSA said that expanding requirements for background checks and security awareness training for additional workers and establishing requirements for vendor security programs would be costly to implement and would require time-consuming rule-making efforts to assess potential impacts and obtain and incorporate public comment on any proposed regulations. In June 2004, we recommended, and DHS generally agreed, that TSA better justify future decisions on how best to proceed with security evaluations and implement additional measures to reduce the potential security risks posed by airport workers. 
In July 2004, in response to our recommendations, TSA made several improvements in these areas, through the issuance of a series of security directives, including requiring enhanced background checks and improved access controls for airport employees who work in restricted airport areas. Since its inception, TSA has achieved significant progress in deploying its federal aviation security workforce to meet congressional mandates related to establishing passenger and checked baggage screening operations. With the initial congressional mandates now largely met, TSA has turned its attention to more systematically deploying its TSO workforce and assessing and enhancing its effectiveness in screening passengers and checked baggage. TSA has developed a staffing model intended to identify the necessary levels of TSOs to support airport screening operations. However, given the challenges TSA faces in determining appropriate staffing levels at airports, it is critical that TSA carefully consider how it strategically hires, deploys and manages its TSO workforce to help strengthen its passenger and checked baggage screening programs. In addition, as threats and technology evolve, it is vital that TSA continue to enhance training for the TSO workforce. Over the past several years, TSA has strengthened its TSO training program in an effort to ensure that TSOs have the knowledge and skills needed to successfully perform their screening functions. However, without addressing the challenges to delivering ongoing training, including installing high-speed connectivity at airport training facilities, TSA may have difficulty maintaining a screening workforce that possesses the critical skills needed to perform at a desired level. The importance of the nation’s air cargo security system and the limited resources available to protect it underscore the need for a risk management approach to prioritize security efforts so that a proper balance between costs and security can be achieved. 
TSA has taken important steps in establishing such a risk management approach, but more work remains to be done to fully address the risks posed to air cargo security, including assessments of systemwide vulnerabilities and critical assets. Without such assessments, TSA is limited in its ability to focus its resources on those air cargo vulnerabilities that represent the most critical security needs. In addition, without performance measures to gauge air carrier and indirect air carrier compliance with air cargo security requirements, and without analyzing the results of its compliance inspections, TSA cannot effectively focus its inspection resources on those entities posing the greatest risk. Further, TSA’s goal of developing a system to target elevated risk cargo for inspection without impeding the flow of air commerce will be difficult to achieve without ensuring that the information used to target such cargo is complete, accurate, and current. By addressing these areas, TSA would build a better basis for strengthening air cargo security as it moves forward in implementing risk-based security initiatives.

Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the Committee may have at this time.

For further information on this testimony, please contact Cathleen A. Berrick at (202) 512-3404 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, John Barkhamer, Amy Bernstein, Kristy Brown, Philip Caramia, Kevin Copping, Glenn Davis, Christine Fossett, Thomas Lombardi, Laina Poon, and Maria Strudwick made key contributions to this testimony.

Aviation Security: Significant Management Challenges May Adversely Affect Implementation of the Transportation Security Administration’s Secure Flight Program. GAO-06-374T. Washington, D.C.: February 9, 2006.
Aviation Security: Federal Air Marshal Service Could Benefit from Improved Planning and Controls. GAO-06-203. Washington, D.C.: November 28, 2005.

Aviation Security: Federal Action Needed to Strengthen Domestic Air Cargo Security. GAO-06-76. Washington, D.C.: October 17, 2005.

Transportation Security Administration: More Clarity on the Authority of Federal Security Directors Is Needed. GAO-05-935. Washington, D.C.: September 23, 2005.

Aviation Security: Flight and Cabin Crew Member Security Training Strengthened, but Better Planning and Internal Controls Needed. GAO-05-781. Washington, D.C.: September 6, 2005.

Aviation Security: Transportation Security Administration Did Not Fully Disclose Uses of Personal Information During Secure Flight Program Testing in Initial Privacy Notices, but Has Recently Taken Steps to More Fully Inform the Public. GAO-05-864R. Washington, D.C.: July 22, 2005.

Aviation Security: Better Planning Needed to Optimize Deployment of Checked Baggage Screening Systems. GAO-05-896T. Washington, D.C.: July 13, 2005.

Aviation Security: Screener Training and Performance Measurement Strengthened, but More Work Remains. GAO-05-457. Washington, D.C.: May 2, 2005.

Aviation Security: Secure Flight Development and Testing Under Way, but Risks Should Be Managed as System Is Further Developed. GAO-05-356. Washington, D.C.: March 28, 2005.

Aviation Security: Systematic Planning Needed to Optimize the Deployment of Checked Baggage Screening Systems. GAO-05-365. Washington, D.C.: March 15, 2005.

Aviation Security: Measures for Testing the Effect of Using Commercial Data for the Secure Flight Program. GAO-05-324. Washington, D.C.: February 23, 2005.

Transportation Security: Systematic Planning Needed to Optimize Resources. GAO-05-357T. Washington, D.C.: February 15, 2005.

Aviation Security: Preliminary Observations on TSA’s Progress to Allow Airports to Use Private Passenger and Baggage Screening Services. GAO-05-126. Washington, D.C.: November 19, 2004.
General Aviation Security: Increased Federal Oversight Is Needed, but Continued Partnership with the Private Sector Is Critical to Long-Term Success. GAO-05-144. Washington, D.C.: November 10, 2004.

Aviation Security: Further Steps Needed to Strengthen the Security of Commercial Airport Perimeters and Access Controls. GAO-04-728. Washington, D.C.: June 4, 2004.

Transportation Security Administration: High-Level Attention Needed to Strengthen Acquisition Function. GAO-04-544. Washington, D.C.: May 28, 2004.

Aviation Security: Challenges in Using Biometric Technologies. GAO-04-785T. Washington, D.C.: May 19, 2004.

Nonproliferation: Further Improvements Needed in U.S. Efforts to Counter Threats from Man-Portable Air Defense Systems. GAO-04-519. Washington, D.C.: May 13, 2004.

Aviation Security: Private Screening Contractors Have Little Flexibility to Implement Innovative Approaches. GAO-04-505T. Washington, D.C.: April 22, 2004.

Aviation Security: Improvement Still Needed in Federal Aviation Security Efforts. GAO-04-592T. Washington, D.C.: March 30, 2004.

Aviation Security: Challenges Delay Implementation of Computer-Assisted Passenger Prescreening System. GAO-04-504T. Washington, D.C.: March 17, 2004.

Aviation Security: Factors Could Limit the Effectiveness of the Transportation Security Administration’s Efforts to Secure Aerial Advertising Operations. GAO-04-499R. Washington, D.C.: March 5, 2004.

Aviation Security: Computer-Assisted Passenger Prescreening System Faces Significant Implementation Challenges. GAO-04-385. Washington, D.C.: February 13, 2004.

Aviation Security: Challenges Exist in Stabilizing and Enhancing Passenger and Baggage Screening Operations. GAO-04-440T. Washington, D.C.: February 12, 2004.

The Department of Homeland Security Needs to Fully Adopt a Knowledge-based Approach to Its Counter-MANPADS Development Program. GAO-04-341R. Washington, D.C.: January 30, 2004.
Aviation Security: Efforts to Measure Effectiveness and Strengthen Security Programs. GAO-04-285T. Washington, D.C.: November 20, 2003.

Aviation Security: Federal Air Marshal Service Is Addressing Challenges of Its Expanded Mission and Workforce, but Additional Actions Needed. GAO-04-242. Washington, D.C.: November 19, 2003.

Aviation Security: Efforts to Measure Effectiveness and Address Challenges. GAO-04-232T. Washington, D.C.: November 5, 2003.

Airport Passenger Screening: Preliminary Observations on Progress Made and Challenges Remaining. GAO-03-1173. Washington, D.C.: September 24, 2003.

Aviation Security: Progress Since September 11, 2001, and the Challenges Ahead. GAO-03-1150T. Washington, D.C.: September 9, 2003.

Transportation Security: Federal Action Needed to Enhance Security Efforts. GAO-03-1154T. Washington, D.C.: September 9, 2003.

Transportation Security: Federal Action Needed to Help Address Security Challenges. GAO-03-843. Washington, D.C.: June 30, 2003.

Federal Aviation Administration: Reauthorization Provides Opportunities to Address Key Agency Challenges. GAO-03-653T. Washington, D.C.: April 10, 2003.

Transportation Security: Post-September 11th Initiatives and Long-Term Challenges. GAO-03-616T. Washington, D.C.: April 1, 2003.

Airport Finance: Past Funding Levels May Not Be Sufficient to Cover Airports’ Planned Capital Development. GAO-03-497T. Washington, D.C.: February 25, 2003.

Transportation Security Administration: Actions and Plans to Build a Results-Oriented Culture. GAO-03-190. Washington, D.C.: January 17, 2003.

Aviation Safety: Undeclared Air Shipments of Dangerous Goods and DOT’s Enforcement Approach. GAO-03-22. Washington, D.C.: January 10, 2003.

Aviation Security: Vulnerabilities and Potential Improvements for the Air Cargo System. GAO-03-344. Washington, D.C.: December 20, 2002.

Aviation Security: Registered Traveler Program Policy and Implementation Issues. GAO-03-253. Washington, D.C.: November 22, 2002.
Airport Finance: Using Airport Grant Funds for Security Projects Has Affected Some Development Projects. GAO-03-27. Washington, D.C.: October 15, 2002.

Commercial Aviation: Financial Condition and Industry Responses Affect Competition. GAO-03-171T. Washington, D.C.: October 2, 2002.

Aviation Security: Transportation Security Administration Faces Immediate and Long-Term Challenges. GAO-02-971T. Washington, D.C.: July 25, 2002.

Aviation Security: Information Concerning the Arming of Commercial Pilots. GAO-02-822R. Washington, D.C.: June 28, 2002.

Aviation Security: Vulnerabilities in, and Alternatives for, Preboard Screening Security Operations. GAO-01-1171T. Washington, D.C.: September 25, 2001.

Aviation Security: Weaknesses in Airport Security and Options for Assigning Screening Responsibilities. GAO-01-1165T. Washington, D.C.: September 21, 2001.

Homeland Security: A Framework for Addressing the Nation’s Efforts. GAO-01-1158T. Washington, D.C.: September 21, 2001.

Aviation Security: Terrorist Acts Demonstrate Urgent Need to Improve Security at the Nation’s Airports. GAO-01-1162T. Washington, D.C.: September 20, 2001.

Aviation Security: Terrorist Acts Illustrate Severe Weaknesses in Aviation Security. GAO-01-1166T. Washington, D.C.: September 20, 2001.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
It has been over 3 years since the Transportation Security Administration (TSA) assumed responsibility for passenger and baggage screening at commercial airports. This testimony focuses on the progress TSA is making in strengthening aspects of aviation security and the challenges that remain. In particular, this testimony highlights (1) progress TSA has made, and challenges it faces, in managing a federalized security workforce—including federal security directors (FSD) and transportation security officers (TSO)—with operational responsibility for ensuring security of passengers and their baggage; and (2) actions TSA has taken, and the challenges it faces, to ensure appropriate regulatory oversight of other airport security activities. TSA has made progress in managing, deploying, and training a federalized aviation security workforce, including FSDs (the lead authority at U.S. airports) and TSOs (formerly known as screeners). FSDs have, for example, formed partnerships with key federal and private-sector stakeholders at airports engaged in security and operations. We reported, however, that the guidance on FSD authority is outdated and lacks clarity, particularly regarding security incidents when FSDs must coordinate with other stakeholders. Regarding TSOs, TSA has taken and planned actions to strengthen the management and deployment of the TSO workforce. TSA has, for instance, developed a screening allocation model to determine TSO staffing levels at airports. However, FSDs have reported concerns that despite such a model, attracting, hiring, and retaining an adequate part-time TSO workforce remains a challenge. We have reported that, while TSA has expanded training opportunities for TSOs, insufficient TSO staffing and other problems hinder the ability of TSOs to take training. To evaluate TSO performance, TSA has collected performance data by conducting covert (undercover, unannounced) tests at passenger screening checkpoints. 
TSA has taken steps to strengthen key areas of aviation security for which it has regulatory and oversight responsibility, including domestic air cargo security, but faces challenges related to oversight and performance measurement. We reported in October 2005, for example, that while TSA had significantly increased the number of domestic air cargo inspections conducted, it had not developed performance measures to determine to what extent air carriers and others are complying with air cargo security requirements. Without such performance measures and a systematic analysis of the results of air cargo security inspections, TSA's ability to target its workforce for future inspections and fulfill its oversight responsibilities will be limited. Further, while TSA has incorporated elements of risk-based decision making into securing air cargo, its efforts are not yet complete. To address these and other issues, TSA officials stated that they plan to compile additional information on air cargo inspections, enhance their ability to conduct compliance inspections of air carriers through covert testing, and require random inspection of air cargo.
This section discusses the Corps’ organizational structure; the Olmsted Locks and Dam project; the project’s timeline, maximum project cost, funding, and construction method; and the economic benefits and costs of navigation projects. Located within the Department of Defense, the Corps has both military and civilian responsibilities. Through its Civil Works program, the Corps plans, designs, constructs, operates, and maintains a wide range of water resources projects for purposes such as navigation, flood control, and environmental restoration. The Civil Works program is organized into three tiers: headquarters in Washington, D.C.; eight regional divisions that were established generally according to watershed boundaries; and 38 districts nationwide. The eight divisions, commanded by military officers, coordinate civil works projects in the districts within their respective geographic areas. Corps districts, also commanded by military officers, are responsible for planning, engineering, constructing, and managing projects in their districts. Each project has a project delivery team of civilian employees that manages the project over its life cycle. Each team is led by a project manager and comprises members from the planning, engineering, construction, operations, and real estate functions. The Louisville District, located within the Great Lakes and Ohio River Division, is responsible for managing the Olmsted project. In addition, the Civil Works program maintains a number of centers of expertise to assist Corps division and district offices. One of these centers is the Cost Engineering and Agency Technical Review Mandatory Center of Expertise located in Walla Walla, Washington. This center provides technical support and assistance to the districts on cost engineering issues, such as developing cost estimates and performing agency technical reviews of cost estimates included in all decision documents. 
The Olmsted Locks and Dam project is located at Ohio River Mile 964.4 between Ballard County, Kentucky, and Pulaski County, Illinois (see fig. 1). The project replaces Locks and Dams 52 and 53, which were completed in 1928 and 1929, respectively. Temporary 1,200-foot-long lock chambers were added in 1969 at Locks and Dam 52, and in 1979 at Locks and Dam 53. Because of their antiquated design and age, these structures are unable to meet current traffic demands without significant delays, according to Corps documents. Corps documents also stated that the existing structures have deteriorated and are overstressed during normal operating conditions. The temporary locks at Locks and Dams 52 and 53 have significantly exceeded their 15-year design life. The Olmsted project consists of two 110-by-1,200-foot locks adjacent to the Illinois bank, and a dam composed of five 110-foot-wide tainter gates, a 1,400-foot-wide navigable pass controlled by 140 boat-operated wickets, and a fixed weir extending to the Kentucky bank (see fig. 2). A lock and dam enable vessels to navigate through a shallow or steep section of river. A lock is an enclosed chamber in a waterway with watertight gates at each end, for raising or lowering vessels from one water level to another by admitting or releasing water. A dam is a barrier that is built across a stream or river to obstruct the flow of water, creating a pool of water deep enough to allow boats and barges to move upstream or downstream. Once the Olmsted dam is completed, the wickets will be raised during periods when the river is low to maintain the upper pool and lowered at other times to form a navigable pass, allowing river traffic to pass through without going through a lock. The tainter gates can be raised or lowered to adjust water flow without adjusting the dam. Figure 3, an interactive graphic, shows a timeline of key events in the Olmsted Locks and Dam project.
WRDA 1988 authorized construction of the Olmsted project at a cost of $775 million (in October 1987 price levels) based on the report of the Chief of Engineers, dated August 20, 1986. The authorized cost was based on the detailed baseline cost estimate for the recommended plan presented in the 1985 Lower Ohio River Navigation Feasibility Report. At the time of authorization, the Corps estimated that construction would take 7 years. As with all civil works projects, the authorized cost does not include inflation and is based on the assumption that the project will receive the maximum amount of appropriations that can be efficiently and effectively used each year. The Corps received its first appropriation for construction in fiscal year 1991, and awarded the first major construction contract in 1993 for the construction of the lock cofferdam. When Congress authorizes a specific amount of money for a project, this authorized project cost provides the basis for the project’s maximum cost. Section 902 of WRDA 1986, as amended, defines the maximum project cost as the sum of (1) the authorized cost, with the costs of unconstructed project features adjusted for inflation; (2) the costs of modifications that do not materially alter the scope of the project, up to 20 percent of the authorized cost (without adjustment for inflation); and (3) the cost of additional studies, modifications, and actions authorized by WRDA 1986 or any later law or required by changes in federal law. The maximum cost is known as the 902 limit. Each district with an ongoing construction project is to update the 902 limit established for the project to account for inflation every time the district calculates a new cost estimate or benefit-to-cost ratio. If the project’s estimated costs are approaching the 902 limit, the project delivery team may start preparing a PACR to seek an increase in the project’s authorized cost. 
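The three-part Section 902 sum described above lends itself to a short illustration. The sketch below is a simplified reading of the statute as summarized here, assuming a single inflation factor applies to all unconstructed features; the function name and every dollar figure are hypothetical, not actual Olmsted values.

```python
# Hypothetical sketch of the Section 902 maximum project cost ("902 limit"):
# (1) authorized cost, with unconstructed features adjusted for inflation;
# (2) modifications that do not materially alter scope, capped at 20 percent
#     of the unadjusted authorized cost;
# (3) costs required by WRDA 1986 or later law (uncapped).
# All inputs are illustrative, in millions of dollars.

def section_902_limit(authorized_cost,
                      unconstructed_share,
                      inflation_factor,
                      modification_costs,
                      mandated_costs):
    """Return the 902 limit in the same dollar units as the inputs."""
    constructed = authorized_cost * (1 - unconstructed_share)
    # (1) inflate only the unconstructed portion of the authorized cost
    inflated_authorized = constructed + (
        authorized_cost * unconstructed_share * inflation_factor)
    # (2) cap modifications at 20 percent of the unadjusted authorized cost
    capped_mods = min(modification_costs, 0.20 * authorized_cost)
    # (3) legally mandated additions are not capped
    return inflated_authorized + capped_mods + mandated_costs

# Example: $775M authorized, half the features unconstructed, 30 percent
# cumulative inflation on those features, $200M of proposed modifications
# (capped at $155M), and $25M of mandated costs.
limit = section_902_limit(775.0, 0.5, 1.30, 200.0, 25.0)
```

Under these made-up inputs the modifications term binds at the 20 percent cap, which is the mechanism that can push a long-running project toward its 902 limit even when the inflation adjustment keeps pace.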
If the project’s actual costs reach its 902 limit before congressional action, construction must stop until the project receives a new authorization that increases its authorized cost and therefore its 902 limit. The Corps’ Civil Works program typically receives an appropriation annually through the Energy and Water Development Appropriations Act or an omnibus appropriations act. These acts have typically appropriated a sum to each civil works appropriation account, including investigations, construction, and operation and maintenance, to fund projects related to the nation’s water resources. Accompanying congressional reports often specifically list individual projects and the amount directed to each project. When the Olmsted project was first authorized in WRDA 1988, its construction costs were to be shared equally between funds appropriated to the Corps and funds from the Inland Waterways Trust Fund. The trust fund receives a portion of the revenue from a fuel tax levied on commercial towing companies using the inland and intracoastal waterways. The trust fund is administered by the U.S. Department of the Treasury. However, after congressional appropriation of revenues from the fuel tax and Office of Management and Budget apportionment, the Corps is responsible for determining the timing and amount of trust fund expenditures. By 2009, however, the Olmsted project was using the majority of trust fund appropriations, which constrained the amount available for other projects on the inland navigation system. In 2014, two laws were enacted that reduced the trust fund’s contribution for Olmsted construction costs from 50 to 25 percent in fiscal year 2014 and then to 15 percent in subsequent years. The Olmsted dam is being constructed using a construction method called in-the-wet, in which concrete sections of the dam, known as shells, are built on shore and then carried out into the river and set in place in the riverbed.
At Olmsted, the shells are lifted by a wheel-mounted super gantry crane—the largest crane of its kind in the world and capable of lifting 5,100 tons—along rails and taken to the shore. The shells are then floated out onto the river by a catamaran barge that has a capacity of 4,500 tons and lowered onto foundations installed in the riverbed. This method differs from traditional in-the-dry construction, which uses cofferdams to drain the riverbed so that work can proceed in the dry, as was done to build the Olmsted locks. A cofferdam is a temporary, watertight structure that surrounds a construction site to prevent water from flooding the area. Cofferdams can vary in design from simple earthen dikes heaped up around a construction site to more complicated and costly structures constructed of steel sheet piling. Federal guidance serves as the key source for the Corps’ analyses of the benefits and costs associated with alternative plans for achieving water and related land resource objectives. Based on this guidance, the Corps is to identify the project plan that would provide the greatest net benefit to society. Moreover, the Corps is to identify and clearly describe areas of risk and uncertainty so that it can make decisions knowing the reliability of the estimated benefits and costs and of the effectiveness of alternative plans. To estimate benefits and costs, the Corps compares the economic conditions expected under the proposed alternatives with those expected without the project (i.e., business as usual) during the period of analysis (e.g., 50 years). Potential benefits include any reduction in the transportation cost for barge traffic expected to use the waterway over the analysis period. Potential costs include the outlays made to construct the project (e.g., for labor and materials) and interest during construction, which represents the hypothetical return or “benefit” that could have been earned by investing the money in some other use.
To calculate the interest cost, compound interest is added to the construction costs incurred during the construction period, at the applicable project discount rate, from the date the expenditures are incurred to the beginning of the period of analysis (i.e., the date the project begins to generate benefits). Federal policy establishes the discount rate for this purpose. According to Corps planning guidance for civil works projects, the total investment cost of the project equals construction cost plus interest during construction. The Corps has conducted several analyses of the Olmsted project’s benefits and costs, beginning with a feasibility study in 1985. The Corps later updated its estimates in 1990 (Benefit Update) and in 2012 (PACR). According to Corps economists, the PACR analysis of benefits and costs was thoroughly reviewed, within the Corps and by an independent peer review panel. Also in 2012, the Corps used the PACR analysis to examine the benefits and costs associated with changing the construction method for the dam from the in-the-wet method to the more traditional in-the-dry method. Reports by the Corps and others identified the in-the-wet construction method, the contract type, and other factors as primary contributors to cost increases and schedule delays in the Olmsted project, most of which were associated with constructing the dam. The PACR and the 2012 consultant report identified the selection of the in-the-wet method to construct the dam as contributing to cost increases and schedule delays. In addition, the Corps’ decision to use a cost-reimbursement contract contributed to increased management costs, according to the PACR and the 2008 consultant report. The reports by the Corps and others also identified other key factors that contributed to cost increases and schedule delays, including limited funding, changes in market conditions, and design changes.
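The interest-during-construction calculation described earlier in this section can be sketched in simplified form. The sketch assumes end-of-year outlays compounded to the end of the construction period, since this excerpt does not state the Corps' exact compounding convention; the function name, outlays, and discount rate are all illustrative, not Corps figures.

```python
# Hedged sketch of the interest-during-construction (IDC) calculation:
# each year's construction outlay is compounded at the project discount
# rate from the year it is incurred to the beginning of the period of
# analysis (here assumed to be the end of the construction period).

def interest_during_construction(annual_outlays, discount_rate):
    """Compound each outlay forward to the start of the benefits period.

    annual_outlays: outlays in chronological order, one per year.
    Returns the total interest accrued, in the same units as the outlays.
    """
    n = len(annual_outlays)
    idc = 0.0
    for year, outlay in enumerate(annual_outlays):
        years_remaining = n - 1 - year  # years until benefits begin
        idc += outlay * ((1 + discount_rate) ** years_remaining - 1)
    return idc

# Three years of $100 million outlays at a 5 percent discount rate.
outlays = [100.0, 100.0, 100.0]
idc = interest_during_construction(outlays, 0.05)
total_investment = sum(outlays) + idc  # construction cost plus IDC
```

Consistent with the Corps planning guidance quoted above, the total investment cost is the construction cost plus the accrued interest, which is why a longer construction period raises total investment cost even if the outlays themselves do not change.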
The Corps’ 1997 decision to construct the Olmsted dam using the in-the-wet method was based on projections that this method would cost less and would allow the project to be completed more rapidly than the traditional in-the-dry method. The Corps had originally planned to construct the Olmsted project in the dry, using four cofferdams. However, the Olmsted project was the subject of many studies and reviews seeking to improve on the authorized plan by incorporating innovative design and construction methods, according to the PACR. One of these methods was in-the-wet construction, which had been used to construct tunnels and bridges in a marine environment, but which had not been used to construct a project such as Olmsted in a river environment. In the early to mid-1990s, the Corps commissioned several studies to look at different ways to construct the dam, including using the in-the-wet method. One study examined using a mobile cofferdam instead of a conventional fixed cofferdam. Another study looked at alternate methods for constructing the tainter gate section of the dam. A third study performed a life cycle cost analysis of five different alternatives of dam types and construction methods, including in-the-dry, in-the-wet, and a combination of the two methods. A fourth study, issued in December 1997, evaluated and compared the in-the-wet and in-the-dry construction methods, as well as a combination of both, to provide a basis for deciding between them. This study found that using the in-the-wet method under two different construction schedule scenarios would cost either $54.9 million less and allow the project to be completed 2 years earlier or about $63.2 million less and be completed 5-1/2 years earlier. Prior to the issuance of the 1997 study, the Corps established a team of Corps engineers, program managers, and others to review the study and recommend a construction method.
The team members evaluated specific project components, including structural engineering, cost estimating, and design. The team said in a July 1997 document that it would be feasible to construct the dam with either the in-the-wet or in-the-dry method. However, using the in-the-wet method would more likely allow the project to be completed 1-1/2 to 2 years earlier than using the in-the-dry method, with estimated cost savings of approximately $40 million. Some team members expressed concerns with the in-the-wet method, including three engineers: one stated that the in-the-wet method’s foundation would be more expensive than the foundation required for the in-the-dry method, another expressed doubts over whether the project would be finished according to schedule, and a third noted that the Corps’ Louisville District had little or no experience using the in-the-wet method. The Corps district decided to use the in-the-wet method, citing four reasons—lower cost, shorter construction schedule, less impact on navigation during construction, and the potential for fewer negative environmental impacts. At the time, the Corps’ decision to select in-the-wet as the method of construction was not required to undergo an agency technical review or an independent external peer review. The PACR stated that the independent government estimate for the in-the-wet dam construction was low and that cost increases resulted from several factors that were not known at the time of the contract award. These include certain river conditions that slowed construction, the effect of the site’s seismic conditions on fabricating the shells, and funding constraints. Also, the 2012 consultant report stated that the independent government estimate, prepared in 2003, inadequately characterized the uncertainty and risk in pursuing an innovative in-the-wet construction method and set expectations of project cost and duration far too low.
The Corps agreed with the consultant’s findings and recommendation that the agency undertake research and development to generate more robust cost and schedule estimates when using novel technology such as in-the-wet construction. According to the PACR, the construction challenges associated with the in-the-wet construction method were overcome but required “a lot more effort than ever could have been envisioned.” Also, according to a Corps official, there was a learning curve associated with the in-the-wet method and one-of-a-kind infrastructure that cost more than the Corps thought. For example, according to the PACR, as the project design continued following the 1989 General Design Memorandum, the Corps planned to construct a hydraulic wicket dam. In May 1994, the Corps awarded a contract to construct a full-sized prototype of the dam to test how the gate would operate and to test maintenance procedures, and this contract was completed in December 1995. This modeling revealed the complexity of the design, and the Corps revised the design to construct tainter gates and boat-operated wickets instead. In addition, the PACR stated that the in-the-wet method required specialized equipment that increased costs, such as the super gantry crane and the catamaran barge, which have minimal salvage value. In January 2012, the Corps’ Deputy Commanding General for Civil and Emergency Operations directed the Great Lakes and Ohio River Division to explore alternative construction methods and to present recommendations to Corps headquarters by June 1, 2012. In providing this direction, the Deputy Commanding General stated that the in-the-wet construction method had proven more expensive and time-consuming than originally envisioned. Among other things, the division was to develop concept-level designs for in-the-dry construction that could be used to develop a reliable cost estimate, and to compare that estimate to the in-the-wet estimate.
The Corps completed its review of the in-the-wet versus in-the-dry methods in a May 2012 study, which underwent agency technical review and was certified by its Cost Engineering and Agency Technical Review Mandatory Center of Expertise. The study concluded that constructing the dam components using the in-the-dry method was a technically feasible alternative. The study found that continuing to use the in-the-wet method would cost more than switching to the in-the-dry method, but it would allow the project to be operational sooner. Specifically, the study estimated that the in-the-dry method would cost $2.810 billion compared to the PACR’s $2.918 billion estimate of performing the work with the in-the-wet method. However, the study found that using the in-the-dry method would result in the project not being operational until 2022, which is 2 years later than the PACR’s estimated operational date of 2020. A June 2012 Corps internal memorandum stated that based on the findings of the in-the-dry study, the Great Lakes and Ohio River Division recommended continuing to use the in-the-wet construction method for Olmsted. The memorandum stated that if the Corps changed course and used the in-the-dry method, it would require that a new contract be awarded. As a result, potentially two contracts would be ongoing for a period of time, which would likely exceed available funds and cause a delay. The memorandum also stated that because the Corps does not have the authority to use incremental funding or a continuing contracts clause, it would need to award another cost-reimbursement contract for the in-the-dry construction. The Deputy Commanding General for Civil and Emergency Operations directed the division to explore the possibility of soliciting opinions of industry rather than prescribing the construction method.
In response, a Corps official presented the study’s findings in an August 2012 meeting of the Inland Waterways Users Board, which is composed of members of industry. This official said that the division recommended using the in-the-wet construction method, in part based on the Corps having learned from its experience with the construction and having become more efficient at setting shells. This official also stated that the contractor was about to begin setting shells for the navigable pass and, compared to the shells for tainter gates, these shells were smaller, lighter, and uniform in size, which would allow the contractor to set them more quickly. Board members stated that they deferred to the Corps as the engineering experts to decide on the method of construction. A Corps official said that the Corps decided to continue using the in-the-wet method in November 2012. The Corps’ decision to use a cost-reimbursement contract for the dam construction after not receiving offers for a firm fixed-price contract contributed to increased administrative and overhead costs, according to the PACR and the 2008 consultant report. In September 2002, the Corps requested proposals for the dam construction contract as a firm fixed-price contract—the contract type the agency typically uses for civil works projects—but received no offers. According to the 2008 Corps report and the 2012 consultant report, the agency received no offers because the construction method was innovative, the river conditions were too risky, and the contractor could not get bonding. The Corps amended the request for proposals to include, among other things, a provision that the government would pay a stipend for satisfactory and reasonable contractor proposals, but received no offers. After considering different options, the Corps decided to request proposals for a cost-reimbursement contract rather than a firm fixed-price contract. 
According to a district official, the construction of Olmsted dam was not practical for a firm fixed-price contract because of the risks to the contractor in undertaking a complex project and the unknowns associated with the in-the-wet construction method. Specifically, the Corps requested proposals for a cost-plus-award-fee contract, rather than a cost-plus-incentive-fee or a cost-plus-fixed-fee contract, because according to a Corps official, it was the best fit for the project. According to the Federal Acquisition Regulation, an award fee contract is suitable for use when the work to be performed is such that it is neither feasible nor effective to devise predetermined objective incentive targets applicable to cost, technical performance, and schedule. Alternatively, an incentive fee contract should be used when cost and performance targets are objective and can be predetermined, allowing a formula to adjust the negotiated fee based on variations relative to the targets. A district official stated that a cost-plus-incentive-fee contract was not appropriate because targets could not have been reasonably determined since the in-the-wet construction method had never been attempted before. Difficult river conditions provided additional risks to the contractor. According to the Corps official, a cost-plus-fixed-fee contract would not have provided sufficient incentive for the contractor because the fee would not change. In May 2003, the Corps requested proposals for the dam construction as a cost-plus-award-fee contract and received two offers, and awarded the contract in January 2004 to a joint venture. According to a Corps cost analysis of the proposals, the winning proposal included a lower maximum award fee of 5 percent, capped overhead costs, and had more overall budgeted cost savings than the other proposal. The winning proposal was $564 million, which was more than 25 percent higher than the independent government estimate.
However, the Corps’ Office of the Chief Counsel said that the statutory prohibition on the Corps awarding a contract for river and harbor improvements with a price that exceeds 125 percent of the independent government estimate did not apply to the Olmsted dam contract because it was a cost-reimbursement contract. The PACR and the 2008 consultant report noted that the effort to manage a cost-reimbursement contract is more cost- and time-intensive than managing a firm fixed-price contract. For example, the PACR stated that there are additional activities associated with a cost-reimbursement contract, such as audit services, voucher reviews, and award fee evaluation boards. The PACR estimated that the Corps’ cost of construction management for these additional activities increased by more than $74 million (in October 2011 price levels), in part because the change in completion date had extended the construction schedule. The 2008 consultant report stated that the cost-reimbursement contract necessitated a substantial amount of administrative effort to track, record, and evaluate the contractor’s performance, and that doing so increased the Corps’ staff needs by approximately 40 percent. A district official said that the Corps hired 3 additional staff and the contractor hired 10 to 15 additional staff to perform these administrative tasks. In 2009, we reviewed federal agencies’ use of cost-reimbursement contracts and found that they involve significantly more government oversight than do fixed-price contracts, which means the government incurs additional administrative costs on top of what it is paying the contractor. For example, we found that the government must determine that the contractor’s accounting system is adequate for determining costs related to the contract and update this determination periodically. 
In addition, we found that contractor costs need to be monitored—known as cost surveillance—to provide reasonable assurance that efficient methods and effective cost controls are used. Another cost associated with the cost-reimbursement contract is evaluating the contractor’s award fee. For each evaluation period, the Corps is to assess the contractor’s performance against explicit criteria relating to cost, schedule, quality, and safety and environmental compliance, as set forth in the award fee plan. The 2012 consultant report found that the Olmsted project team did not have the experience to manage a cost-reimbursement contract, but that the team had instituted management methods and techniques to control project costs, many of which were industry best practices and consistent with Corps and Department of Defense guidance. The Corps agreed with the report’s recommendation that if the Corps plans to use a cost-reimbursement contract for other civil works projects, the agency needs to identify training required for project members when it develops the acquisition strategy. The report also concluded that the Corps’ management of the cost-reimbursement contract was not a significant factor in explaining the project’s cost and schedule overruns, and Corps officials we interviewed agreed. Within the last few years, the Corps has taken actions to help improve its management of civil works projects, including Olmsted. In 2012, the Corps designated Olmsted as a mega-project because of its cost, importance, and complexity, among other things. The Corps issued guidance in 2012 on managing mega-projects. According to the 2012 guidance, the Great Lakes and Ohio River Division is to provide progress reports to Corps headquarters and an integrated project schedule and cost estimate that the project team updates monthly. Corps officials said that the Corps created its Integrated Project Office in 2012 to help increase its management focus on Olmsted. 
In 2016, the Corps updated its mega-project guidance to require quarterly reports on such things as analysis of risk. The Corps also has daily, weekly, and monthly meetings to discuss how the dam contractor is staying on schedule, controlling cost, and managing risks. In 2014, the Corps adopted a recommendation from a 2010 report prepared by navigation industry representatives and Corps navigation experts to prioritize new construction and rehabilitation projects based on an examination of factors such as economic return, risk-based analysis, and the estimated cost and construction schedule. As a result, the Corps made Olmsted its top priority construction project. In the Corps’ March 2016 capital investment plan, prepared in response to WRRDA 2014, Olmsted remained its top priority construction project. The reports by the Corps and others also identified other key factors that contributed to cost increases and schedule delays, including limited funding, changes in market conditions, and design changes. The Olmsted project’s authorized cost was based on the Corps’ assumption that each year the agency would receive the maximum amount of funding that it could efficiently and effectively spend. However, according to the reports by the Corps and others, the Olmsted project was significantly underfunded in some years, which contributed to cost increases and schedule delays. Specifically, according to these reports, the amount the Corps allocated for the Olmsted project from its annual appropriation, together with the amount appropriated from the Inland Waterways Trust Fund, was less than optimal for construction, and in 2004 and 2005, the Corps reprogrammed appropriations from Olmsted to another project. Incremental funding from the Inland Waterways Trust Fund also contributed to delays and increased costs, according to the 2012 consultant report. 
According to the Corps reports, limited funding resulted in delayed contract awards and increased contract durations to conform to the funding received. For example, according to the PACR, the approach wall contract was awarded 2 years later than originally planned because of limited funding, which delayed the award of the dam contract by 2 years. About 2 months before the award of the Olmsted dam construction contract, the Corps told the offerors to develop revised estimates based on the assumption that $17.5 million would be available the first year, with $80 million available each year thereafter, which increased proposal costs by $18.2 million and added 1 year to the completion date, according to the PACR. However, according to the reports by the Corps and others, during the first 2 years of the dam contract, the project had less funding than assumed. Specifically, according to the 2012 consultant report, the dam contract received approximately $5 million of the anticipated $17.5 million in 2004. The other funds were reprogrammed to the McAlpine locks, which the Corps viewed as urgent because their failure would cause the Ohio River navigation system to fail. In 2005, funds were again reprogrammed, with the dam contract receiving approximately $47 million of the anticipated $80 million for the year. However, according to the 2008 Corps and 2012 consultant reports, reprogramming was curtailed significantly in fiscal year 2006 in accordance with the Energy and Water Development Appropriations Act and accompanying congressional committee reports. Also, according to a Corps headquarters official, in fiscal year 2003, the balance of the Inland Waterways Trust Fund, which generally pays half of the construction costs of navigation and rehabilitation projects, started to decline because so many projects were under construction. 
The official said that from fiscal years 2005 to 2009, there was a sharp decrease in the balance of the trust fund as fuel tax revenues started to decline, and that by fiscal year 2009, the fund was nearly depleted. As a result, expenditures from the fund were limited to the amount of annual fuel tax revenues collected for that particular year. According to the 2012 consultant report and the headquarters official, the Olmsted project was funded on a monthly basis, and this incremental funding also contributed to delays and increased costs. For example, incremental funding caused the 2009 shell fabrication season to be split between 2009 and 2010, according to the 2012 consultant report. According to the reports by the Corps and others, changes in construction market conditions contributed to increases in the cost of the dam. After the Corps awarded the dam contract in January 2004, unexpected and significant increases in the price of construction equipment and materials occurred. According to the PACR and the consultant reports, the 2005 hurricane season, which included Hurricanes Katrina and Rita, created a scarcity of barges and cranes at the time when the contractor was trying to mobilize the necessary equipment to construct the dam. Specifically, according to the 2012 consultant report, most of the barges scheduled for use in building the dam were under construction in shipyards along the Gulf Coast when the hurricanes struck. As a result, barge production slowed tremendously and prices doubled as the demand for existing barges increased because of the hurricane restoration efforts. Also, according to the reports by the Corps and others, domestic and international construction booms created a high demand for construction materials after the award of the construction contract. The Corps reports presented data from the U.S. Department of Labor’s Bureau of Labor Statistics, which showed that the price of construction materials increased significantly after 2004. 
According to the 2008 consultant report, from 2002 to 2007, the price of fabricated steel increased about 300 percent, the price of cement increased about 90 percent, the price of riprap increased by 100 to 200 percent, and the price of fuel increased about 300 percent. In addition, insurance and bonding costs increased about 230 percent. Because the dam construction contract was awarded in January 2004, before most of these increases occurred, the contractor’s proposal did not reflect the higher cost of materials. The reports by the Corps and others identified design changes during the dam construction as contributing to increased costs. However, the reports do not provide the amount by which the changes increased costs. Examples of design changes included the following:

The consultant reports cited the use of a super gantry crane instead of sleds to move the precast shells into the river as a design change that contributed to increased cost. The Corps’ 2016 Lessons Learned Report stated that the change was made because design issues related to sled deflection could not be overcome.

The PACR and the 2012 consultant report cited the need to reinforce the site for the shell precast yard and the marine skidway as contributing to increased cost. According to the PACR, after awarding the construction contract, it was determined that the soil conditions at the site for the precast yard and the marine skidway were inadequate to support the foundation loads and that an extensive amount of piling was required to support their weight.

The Corps reports and the 2012 consultant report cited the need to address slope stability issues on the shore as contributing to increased cost. The Corps reports stated that an active slide was observed during monitoring of the Illinois bank at the site of the locks, and a district official confirmed the observation. Defining the extent of the slide problem and determining the best solution required additional effort, and the Corps reports stated that these problems also added to the effort required to design and build the precast yard and marine launching facility.

The PACR cited the need to increase the length of the foundation piles for the tainter gate portion of the dam and to conduct additional excavation because of sand waves as contributing to increased cost. According to a district official, sand waves constantly migrate downriver to the construction site, and as sand collects on the footprint of the foundation, the riverbed has to be excavated so that shells can be set correctly, which increases cost.

The total cost of benefits foregone from project delays that have occurred at Olmsted is uncertain, primarily because the estimates that the Corps developed for the project are no longer relevant or are of limited use for estimating the benefits that might have been generated had the project become operational as planned in 2006. The extent to which the project incurs another type of benefit foregone—the additional interest during construction incurred because of the longer construction period—depends on economic factors, such as the project discount rate. The benefits that the Olmsted project would have generated had it become operational as planned in 2006 are uncertain, primarily because the estimates that the Corps has made are no longer relevant or are of limited use for this purpose. The Corps analyzed the benefits and costs associated with the project several times, including in a 1990 study. In that study, for example, the Corps estimated that the project would begin generating average annual benefits of about $920 million in 2006. According to the PACR, the Olmsted locks and dam project, once operational, would reduce the cost of shipping products on the Ohio River by processing barge shipments more efficiently than the two existing locks and dams.
Corps officials said, however, that this estimate is no longer relevant for estimating the benefits foregone from past project delays. In particular, as noted in the PACR, the 1990 study did not anticipate the regulatory and market factors that reduced the demand for coal and coal shipments on the Ohio River, beginning in the 1990s. In addition, because the 1990 study did not assess the uncertainty associated with key assumptions, such as the barge traffic forecast, it cannot be used to assess what the benefits might have been, beginning in 2006, under lower barge traffic forecast assumptions. In general, fewer barge shipments mean less congestion and delay and lower benefits from replacing the existing locks and dams, if all else remains the same. In 2012, the Corps updated its analysis of the benefits and costs associated with the Olmsted project, based on a revised operational date of 2020. The Corps estimated, for example, that the project would generate average annual benefits of about $875 million per year over 50 years, beginning in 2020. The Corps used the analysis to estimate the benefits foregone from potential delays in the future, should the project opening be delayed again. In a June 2012 presentation before the Inland Waterways Users Board, for example, the Corps indicated that a pause in construction at Olmsted (e.g., to shift funding to other Corps projects) could delay its opening 4 years to 2024, which could result in benefits foregone of about $3.5 billion ($875 million each year). The updated estimates from the PACR could be viewed as rough estimates of the benefits foregone since the delayed 2006 opening, but the estimates are of limited use for this purpose for several reasons. First, the PACR economic analysis assesses whether the potential benefits of the Olmsted project would outweigh its remaining costs. 
Corps economists said that the analysis was not designed to estimate the benefits foregone from project delays that occurred in the past, and as a result, the benefit estimates would be less reliable when used for that purpose. Second, the PACR estimates were based on assumptions about economic conditions expected in the future and may not represent the economic conditions that existed when past project delays occurred. For the PACR analysis, the Corps assumed that the existing locks and dams would need to be closed for repairs several times over the period of analysis (i.e., beginning in 2020) and that this would reduce the volume of shipments that could transit the locks during those closures. As a result, transportation cost savings could be generated by replacing the existing facilities with the Olmsted project, which is expected to be closed less often. These assumptions, however, may not align with the actual performance of the existing locks and dams in the past. For example, Corps economists said that the existing facilities have performed more reliably than expected, in part because funds were expended to maintain them in an operating condition. Moreover, changes in the PACR assumptions about the reliability of the existing locks and dams can significantly affect the benefit estimates. As a result, the PACR benefit estimates would be less reliable as a measure of benefits foregone if the assumptions about the expected performance of the existing facilities do not align with their actual performance in the past. Third, the benefit estimates, which are based on forecasts of barge shipments through the locks beginning in 2020, may not represent the actual traffic that transited the locks and dams in the past. For example, the PACR assumed that barge shipments through Locks and Dam 52 would reach about 113 million tons in 2020 and grow thereafter. 
This tonnage is greater than the roughly 94 million tons that the PACR indicates were shipped through the same locks in 2006—the year that the Olmsted project was projected to open. In addition, according to Corps documentation, barge shipments through the existing locks have generally fallen since 2006. Barge traffic is a key input in the benefit analysis because it is used in estimating the effect of congestion and delay at the locks and the transportation cost savings expected to be generated by replacing the existing structures with the Olmsted project. The PACR indicated that the benefit estimates are extremely sensitive to changes in barge traffic assumptions, but did not present the benefits associated with alternative traffic forecasts. Finally, the barge traffic forecasts on which the PACR benefit estimates are based were developed in the early 2000s. However, the forecasts do not incorporate factors that have reduced the demand for barge shipments, particularly for coal, since the forecasts were developed. According to the PACR, for example, coal is the dominant commodity in terms of volume on the Ohio River System. In 2015, we found that coal’s share of electricity generation had declined from 2001 through 2013, partly because of plant retirements brought about by comparatively low natural gas prices and the potential need to invest in new equipment to comply with environmental regulations. In addition, in 2014, we found that power companies plan to retire an even greater percentage of coal-fired generating capacity than expected earlier. The panel that conducted the peer review of the PACR in 2010 found that the traffic forecasts should be updated to include more recent actual barge traffic (i.e., for years 2006 through 2009) and that additional sensitivity testing should be conducted to analyze uncertainty associated with coal-related environmental issues.
In addition, Corps officials said that barge shipments containing coal are expected to continue to decline over the short and medium terms, but that shipments for some other commodities, such as those related to natural gas production, have increased. According to Corps economists, another type of benefit foregone is the additional interest during construction incurred as a result of project delays. Corps guidance states that costs incurred during the construction period should be increased by adding compound interest at the applicable project discount rate from the date that expenditures are incurred up to the year the project begins operation. The interest represents the hypothetical return or “benefit” that could have been earned by investing the money in some other use. Delays that increase the construction period can also increase the interest because interest is compounded over a longer construction period. The Corps’ 1990 study assumed that the construction period would last from 1991 to 2006, and the PACR extended the time frame for the construction to 2024. To illustrate the potential effect of past delays on the interest cost during construction, we compared the Corps’ estimate of interest during construction from the PACR with its estimate of the interest during construction from the 1990 study. The interest estimate in the PACR represents the interest cost expected over the entire construction period estimated by the Corps, including delays, from 1991 through 2024. The interest estimate from the 1990 study represents the interest expected over a shorter construction period, from 1991 through 2006 (i.e., updated in terms of price level and present value using a 4 percent project discount rate). 
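The compounding mechanics described in the Corps guidance above can be sketched numerically. The following is a minimal illustration, assuming a hypothetical expenditure stream and a 4 percent project discount rate; the dollar figures and function are illustrative only, not the Corps' actual data or model:

```python
# Illustrative sketch of interest during construction (IDC). The
# expenditure stream below is hypothetical, not the Corps' figures.

def interest_during_construction(expenditures, rate, operation_year):
    """Compound each year's expenditure forward to the year the project
    begins operation; the excess over the raw outlays is the IDC."""
    compounded = sum(
        amount * (1 + rate) ** (operation_year - year)
        for year, amount in expenditures.items()
    )
    return compounded - sum(expenditures.values())

# Hypothetical: $100 million spent each year, 1991 through 1995.
spending = {year: 100.0 for year in range(1991, 1996)}

# Compare a 2006 opening with a delayed 2024 opening: the same outlays
# accrue interest over a longer period when the project is delayed.
idc_short = interest_during_construction(spending, 0.04, 2006)
idc_long = interest_during_construction(spending, 0.04, 2024)
assert idc_long > idc_short  # delay alone increases the interest cost
```

Because the interest compounds over the years between each expenditure and the opening date, pushing the operational date out—or raising the discount rate—increases the computed interest, which is why GAO's estimate of the additional interest changes with the rate used.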
We found the difference in interest to be about $400 million, which represents the additional interest associated with factors such as changes in the project design, spending levels, and market conditions that led to the construction delays and increased construction costs. For the PACR analysis, for example, the Corps estimated that the Olmsted project would incur about $1.3 billion in interest during construction by the time construction was completed in 2024. Based on the 1990 study, the Olmsted project was expected to incur about $900 million in interest during construction. Nonetheless, the estimate of additional interest would change if factors such as the project discount rate were changed. For example, the additional interest cost would be about $300 million, based on the 7 percent discount rate that Office of Management and Budget economic guidance indicates should be used for evaluating proposed federal investments.

We provided a draft of this report to the Department of Defense for review and comment. We received a written response from the department, reprinted in appendix II. The department said that it appreciates the opportunity to review the report and that it has no comments to add. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III.

The following information appears as interactive content in figure 3 when viewed electronically.

1985: Lower Ohio River Navigation Feasibility Report
The Louisville District of the U.S. Army Corps of Engineers (Corps) completed the Lower Ohio River Navigation Feasibility Report. The report recommended replacing Locks and Dams 52 and 53 with a single project consisting of a new set of locks and a new dam. Construction was estimated to take 7 years.

1986: Chief of Engineers Report
The Chief of Engineers completed a report recommending that Congress authorize the construction of the Olmsted project. The report provided a detailed baseline cost estimate for the recommended plan presented in the 1985 feasibility report.

1988: Water Resources Development Act of 1988
The Water Resources Development Act of 1988 authorized construction of the Olmsted project at a cost of $775 million based on the Chief of Engineers Report, with the costs of construction shared equally between funds appropriated to the Corps and from the Inland Waterways Trust Fund. At the time of authorization, the Corps estimated that construction would take 7 years.

1989: General Design Memorandum
The Louisville District issued its design plan for the Olmsted project. The plan estimated the total project cost to be $801 million (October 1988 price levels) and construction to take about 12 years.

1990: General Design Memorandum Supplement
The Louisville District issued a modified project design plan resulting from comments on the General Design Memorandum and from changes in the dam configuration and project scope as presented in the General Design Memorandum.

1990: First appropriation for construction
The Corps received its first appropriation for construction of the Olmsted project.

1993: Award of lock cofferdam construction contract
The first major contract was awarded for the construction of the lock cofferdam.

1997: Method of Construction Study
A consultant study compared the in-the-wet and in-the-dry construction methods and found that the in-the-wet method would cost less, provide the greatest schedule flexibility, and be just as reliable as in-the-dry construction. For these reasons, the study recommended that the Corps select the in-the-wet method to construct the dam.

1997: Decision to construct the dam using the in-the-wet method
The Louisville District decided to construct the dam using the in-the-wet method because of lower cost, shorter construction schedule, reduced impact on navigation during construction, and potential for fewer negative environmental impacts. It was estimated that it would take 6 years to construct the dam.

1999: Design Memorandum No. 8, Dam
The Louisville District issued its proposed design for the Olmsted dam, which incorporated changes made after the completion of the General Design Memorandum and Supplement.

2002: Request for proposals for the dam construction as a firm fixed-price contract
The Corps requested proposals for the dam construction contract as a firm fixed-price contract, but received no offers.

2003: Request for proposals for the dam construction as a cost-reimbursement contract
The Corps requested proposals for the dam construction as a cost-reimbursement contract and received two offers.

2004: Award of dam contract
The Corps awarded the dam construction contract to a joint venture contractor. The winning proposal was $564 million. The dam was estimated to be completed in 8 years.

2006: Dam rebaseline estimate
A rebaseline estimate increased the total estimated cost of the dam construction contract by approximately $81.6 million.

2011: Dam rebaseline estimate
A rebaseline estimate extended the schedule by 4 to 5 years and increased the cost of the dam construction contract by approximately $551.1 million.

2012: In-the-Dry Study
The Corps conducted a study to determine whether to complete the dam using the in-the-wet construction method or the in-the-dry method. The study estimated that continuing to use in-the-wet construction would cost more, but would allow the project to be completed sooner. For this reason, the Corps decided to complete the dam using the in-the-wet method.

2012: Post-authorization change report
Because the project would exceed its maximum authorized cost, the Corps submitted a post-authorization change report to Congress in 2012, seeking an increase in the Olmsted project’s authorized cost to $2.918 billion, with an estimated completion date of 2024.

2013: Continuing Appropriations Act, 2014
The Continuing Appropriations Act, 2014, increased the Olmsted project’s authorized cost to $2.918 billion.

2014: Consolidated Appropriations Act, 2014
The Consolidated Appropriations Act, 2014, provided that for fiscal year 2014, 25 percent of the funding proposed for the Olmsted project would be derived from the Inland Waterways Trust Fund.

2014: Water Resources Reform and Development Act of 2014
The Water Resources Reform and Development Act of 2014 specified that beginning with fiscal year 2015, only 15 percent of the Olmsted project’s construction costs are to be paid from the Inland Waterways Trust Fund.

Anne-Marie Fennell, (202) 512-3841 or [email protected]. In addition to the contact named above, Vondalee R. Hunt (Assistant Director), Marie Bancroft, Timothy Guinane, and Susan Malone made key contributions to this report. Important contributions were also made by Michael Armes, Martin (Greg) Campbell, Patricia Farrell Donahue, Jason Lee, Oliver Richard, Dan Royer, Jeanette Soares, Kiki Theodoropoulos, and William T. Woods.
The Corps is responsible for planning and constructing the Olmsted Locks and Dam project on the Ohio River, 17 miles upstream from the Mississippi River. The project will replace two locks and dams, which are beyond their design lives, with new locks and a new dam. According to the Corps, more tonnage passes through Olmsted annually than any other place in the nation's inland navigation system. The Water Resources Development Act of 1988 authorized the Olmsted project at a cost of $775 million. The Corps estimated construction would take 7 years. In 2012, the Corps submitted a PACR to Congress, seeking to increase the Olmsted project's authorized cost to $2.918 billion, with an estimated completion date of 2024. The Water Resources Reform and Development Act of 2014 included a provision for GAO to report on why the Olmsted project exceeded its budget and was not completed as scheduled, among other things. This report examines (1) the factors that the Corps and others have identified as contributing to cost increases and schedule delays and (2) what is known about the costs of benefits foregone because of project delays. GAO compared the factors cited in the PACR and three relevant Corps and consultant reports, examined the Corps' economic analyses and developed an estimate of construction interest incurred because of project delays, and interviewed Corps officials and industry representatives. GAO is not making recommendations in this report. The Department of Defense had no comments to add to the report. Reports by the U.S. Army Corps of Engineers (Corps) and consultants it hired identified the construction method, contract type, and other factors as primary contributors to cost increases and schedule delays in the Olmsted Locks and Dam project. Specifically, the 2012 Corps' post-authorization change report (PACR) and a 2012 consultant report identified the Corps' 1997 selection of an innovative in-the-wet method to construct the dam as a contributing factor. 
With this method, concrete sections of the dam, or shells, are built on shore, carried out into the river, and set in place in the riverbed. The Corps decided to use this method based on projections that it would cost less and allow the project to be completed sooner than the traditional in-the-dry method using temporary, watertight structures, or cofferdams, to drain the riverbed to allow work. However, the Corps' initial cost estimate was low and did not adequately consider such things as river conditions that slowed construction. A 2012 Corps study compared the in-the-wet and in-the-dry methods and found that continuing to use the in-the-wet method would cost more but would allow the project to be completed sooner. Based on this study, the Corps continued to use the in-the-wet method. In addition, the PACR and a 2008 consultant report found that the Corps' decision to use a cost-reimbursement contract for the dam construction after receiving no offers for a firm fixed-price contract contributed to increased administrative and overhead costs. The reports noted that managing a cost-reimbursement contract was more cost- and time-intensive than managing a firm fixed-price contract, which the Corps typically uses. The Corps and consultant reports also identified other contributing factors, including limited funding; market condition changes, such as unexpected and significant increases in the price of construction materials; and design changes during the dam construction in response to soil conditions and other issues. The benefits foregone because of delays at Olmsted are uncertain, primarily because the Corps' estimates for the project are no longer relevant or are of limited use for estimating the benefits that might have been generated had the project opened as planned in 2006. The Corps estimated the benefits associated with the project several times, including in a 1990 study.
Corps officials said, however, that the benefit estimates from this study are no longer relevant for estimating benefits foregone because of past project delays. In particular, the 1990 study did not anticipate the regulatory and market factors that reduced the demand for coal shipments on the Ohio River, beginning in the 1990s. In the 2012 PACR, the Corps updated its benefit estimates based on a revised opening date of 2020, but they are of limited use for estimating benefits foregone for several reasons. For example, the analysis was based on assumptions about barge traffic forecasts that may not represent the actual traffic that transited the locks and dams during past delays. According to Corps economists, the additional interest incurred during construction because of project delays is another type of benefit foregone because it represents the hypothetical return or “benefit” that could have been earned by investing the money in some other use. GAO found the difference between the interest estimated in the 1990 study and the interest estimated in the PACR to be about $400 million, which represents an estimate of the additional interest associated with such factors as changes in the project design that led to the construction delays and increased construction costs.
Employers sponsor two broad categories of pension plans: (1) defined benefit (DB) plans—in which employers generally maintain a fund to provide a specified level of monthly retirement income based on a formula specified in the plan—or (2) defined contribution (DC) plans—in which retirement income is based on employer and employee contributions and the performance of investments in individual employee accounts. Historically, DB benefits have typically been paid as a lifetime annuity (although lump sum options have increased in prevalence). Properly funded DB plans can shield participants from numerous risks that participants face in DC plans, including eligible employees not enrolling in the plan; employees enrolling but contributing amounts likely to be insufficient, together with other sources of retirement income, to provide adequate overall retirement income; “leakage” of plan assets through withdrawals for purposes other than retirement; investment risks; and the “longevity risk” of outliving one’s savings. Participants in DC plans must save a sufficient amount through contributions and investment returns to meet future retirement needs, and must adequately manage both the “accumulation phase” of building up assets prior to retirement and the “decumulation phase” of spending down assets during retirement. On the other hand, while DB plans can shield participants from numerous risks, they can sometimes be less advantageous than DC plans for workers who change employers one or more times before retirement.
There are several major DB-plan sectors in the United States: (1) “public plans,” which cover state and local government employees; (2) private sector single-employer plans; (3) private sector multiemployer plans, which generally cover union employees who work for participating employers in a particular trade or industry; and (4) nonqualified plans, which do not meet the applicable requirements for tax-qualification under the Internal Revenue Code and are typically maintained by employers primarily for the purpose of providing deferred compensation for select groups of management or highly-compensated employees. We will not discuss nonqualified plans in this report because sponsors of such plans typically do not have to satisfy laws and regulations requiring a minimum level of benefits or contributions. For most private sector single-employer and multiemployer pension plans, the Pension Benefit Guaranty Corporation (PBGC) insures plan benefits, up to certain statutory limits, under separate insurance programs for these two types of plans. PBGC was established under ERISA to insure the pension benefits of participants in qualified DB plans and pay participants up to the statutory limits, should their plans be terminated with insufficient funds or become insolvent. The statutory limits on insured benefits are much lower for multiemployer plans than for single-employer plans. In recent years, PBGC has faced large net accumulated deficits coupled with future risks posed by plan sponsors and their plans that have threatened its solvency. PBGC recently reported that while its single-employer program is likely to remain in net deficit over the next 10 years, some improvement is projected. However, there is significant variation in projected results under PBGC’s single-employer Pension Insurance Modeling System, with a worsening of the financial position of the single-employer program also possible.
In contrast, the financial status of some multiemployer plans is deteriorating. PBGC reports that the insurance fund for its multiemployer program is more likely than not to be exhausted within the next 8 years, and 90 percent likely to be exhausted by 2025, which would result in benefits for participants in insolvent plans being cut to a small fraction of current guarantees. PBGC uses a discount rate assumption, discussed later, to determine the present value of projected future pension benefits to be paid to the participants of single-employer plans it has taken over, and the present value of projected financial assistance payments to multiemployer plans.

In the public sector, DB plans still provide primary pension benefits for most state and local government workers; a few states offer DC or other types of plans as the primary retirement plan. In contrast, DB plan coverage in the private sector has declined as these employers continued to shift away from sponsoring DB plans toward sponsoring DC plans. About 78 percent of state and local government employees participated in DB plans in 2013, compared with only 16 percent of private sector employees. For the same period, there were about 19 million state and local government employees and over 106 million private sector employees. See U.S. Department of Labor, U.S. Bureau of Labor Statistics, National Compensation Survey: Employee Benefits in the United States, March 2013 (Washington, D.C.: Sept. 2013). Some state and local government employees are not covered by Social Security, and their employers do not pay Social Security taxes on these earnings. As a result, employer-provided pension benefits for such noncovered employees are generally higher than for employees covered by Social Security, and employee and employer contributions are generally higher as well. Also, unlike private sector employees with DB plans, state and local government employees generally contribute to their DB plans.

ERISA established minimum standards for pension plans in the private sector and, through the Internal Revenue Code, provides extensive rules on the federal tax effects of transactions associated with employee benefit plans. ERISA protects the interests of employee benefit plan participants and their beneficiaries by requiring the disclosure of financial and other information concerning the plan, establishing standards of conduct for plan fiduciaries, and providing for appropriate remedies and access to the federal courts, among other things. Since its enactment in 1974, ERISA has been amended many times, including by the Pension Protection Act of 2006 (PPA), which changed minimum funding standards for private sector single-employer defined benefit pension plans by, among other things, changing the measurement of a plan’s funding target (including the discount rate used) and shortening the period of time over which the funding target should be attained. Minimum funding standard provisions have since been further revised by subsequent legislation. PPA also included provisions requiring private sector multiemployer plans in poor financial shape to take action to improve their financial condition over the long term. The federal government has not imposed the same funding and reporting requirements on state and local government pension plans as it has on private sector pension plans. State and local government plans are specifically exempted from ERISA funding requirements, in part, because of the presumption that state and local governments can rely on their taxing power to pay for DB plan benefits. These plans are also not insured by the PBGC as private DB plans are. However, in order for participants to receive preferential tax treatment (that is, for contributions and investment earnings to be tax-deferred), state and local government pension plans must comply with certain requirements of the Internal Revenue Code.
State and local governments also follow different standards than the private sector for financial reporting. The accounting standards for financial reporting by public and private sector pension plan sponsors are promulgated by two independent organizations. For the public sector, the Governmental Accounting Standards Board (GASB) has been designated by the American Institute of Certified Public Accountants as the accounting standard-setter to establish generally accepted accounting principles for U.S. state and local governmental entities. GASB’s standards are not federal laws or regulations and GASB does not have enforcement authority. However, compliance with its standards is required through laws of some individual states and is integrated into the audit process, whereby auditors render opinions on the fair presentation of state and local governments’ financial statements in accordance with generally accepted accounting principles. For the private sector, the Financial Accounting Standards Board (FASB) has been designated by the American Institute of Certified Public Accountants as the accounting standard-setter to establish generally accepted accounting principles for nongovernmental entities. Those standards are officially recognized as “generally accepted” for the purposes of federal securities laws by the Securities and Exchange Commission (SEC), and companies registered with the SEC are required to comply with those standards in preparing financial statements filed with the SEC. In addition to the standards above, actuarial standards of practice are promulgated by the Actuarial Standards Board, whose mission is to identify what an actuary should consider, document, and disclose when performing an actuarial assignment. Actuaries work with plans to develop economic and demographic assumptions. For DB pension plans, the discount rate is used in converting projected future benefits into their “present value” and is an integral part of estimating a plan’s liabilities.
A pension liability generally includes two pieces: (1) the present value of all projected future benefits for current retirees, as well as for former employees not yet retired but who have a vested right to a future pension, plus (2) the present value of a portion of the projected future benefits for current employees, based on their service to date (with each additional year of service adding to the liability, such that approximately the full cost of benefits is accrued when employees reach retirement). The increase in the liability that arises from an additional year of employee service is called the “normal cost,” which can also be thought of as the pension cost attributable to employees’ work in a single year. Both the liability and the normal cost depend on the discount rate, as they both represent the present value of some portion of future benefits. The higher the discount rate, the lower the plan’s estimate of its liability and normal cost (see fig. 1). In addition, the further into the future that the projected benefit payments occur, the more pronounced is the effect of the discount rate, because it is applied over a greater number of years. As a result, a pension liability for current workers is typically more sensitive to changes in the discount rate than is a pension liability for retirees. Methods for determining a plan’s discount rate can be categorized into two primary approaches—the assumed-return and bond-based approaches. The first approach—the “assumed-return approach”—bases the discount rate on a long-term assumed average rate of return on the pension plan’s assets (which includes expected long-term stock market returns to the extent plan assets are so invested, and which, in recent years, and as employed by U.S. public plan sponsors, often would produce discount rates between 7 and 8 percent). Under this approach, the discount rate depends on the allocation of plan assets.
For example, a reallocation of plan assets into fewer bonds and more stocks can increase the discount rate and reduce the measurement of plan liabilities. Under this approach, the discount rate also depends on estimates of what future investment returns the plan will earn on its assets; more optimistic estimates produce higher discount rates and lower plan liabilities. The assumed-return approach is based in part on the premise that pension plans are long-term enterprises that can weather fluctuations in financial markets, and that the estimated long-term average cost of financing plan benefits, based on the plan’s asset allocation, provides the most relevant measure of plan costs. The second approach—the “bond-based approach”—uses a discount rate based on market prices for bonds, annuities, or other alternatives that are deemed to have certain characteristics similar to pension promises, instead of estimates of future returns. The bond-based approach is premised on the theory that pension benefits are “bond-like,” in that they constitute promises to make specific payments in the future, and should be similarly valued. Under this approach, the discount rate is independent of the allocation of plan assets. The relevant bond “quality” (e.g., AAA-rated, AA-rated, etc.) can depend on the specific purpose of the liability measurement, which can result in rates that vary considerably. There are at least five variations of bond-based approaches that are in use or have been proposed.

Interest Rates on High-Quality Corporate Bonds: This method is typically used by private sector single-employer plan sponsors for financial reporting under FASB standards.

Historical Averages of High-Quality Corporate Bond Interest Rates: This “smoothing” approach is allowed for funding purposes for private sector single-employer plan sponsors under amendments to ERISA and PPA, which allowed discount rates based on a 2-year average of high-quality corporate bond rates.
This 2-year smoothing was lengthened to 25-year smoothing by the Moving Ahead for Progress in the 21st Century Act (MAP-21), tying discount rates to a 25-year historical average (see table 1 in the next section). The use of a 25-year historical average results in current discount rates that are significantly in excess of current or recent interest rates on high quality bonds.

Risk-Free Interest Rates: Another variation is to use risk-free interest rates (e.g., Treasury rates). A panel commissioned by the Society of Actuaries recommended that public plans disclose an additional liability measurement using this method, and at least one public plan currently discloses such a supplemental measure. A liability based on risk-free interest rates can be thought of as approximating the amount of money that would be needed to come close to protecting the payment of future benefits from investment risk. Demographic risk would still remain, such as the risk of life expectancy improving faster than expected.

Matching Bond Credit Quality with Estimated Riskiness of the Pension Promise: Under this variation, as advocated by some financial economists for certain purposes, as discussed later, the bond credit quality could be chosen to match the estimated riskiness of the pension promise.

Annuity Settlement Rates: This fifth variation is the method used by PBGC for its financial reporting of its deficit. This method can also be considered a bond-based approach, as it is based on estimated market prices for annuities, which are influenced by, and will vary with, market interest rates. A liability based on an annuity settlement rate is the estimated market value of the amount of money that is required to fully insure the payment of future benefits against both economic and demographic risks. As a result, a settlement liability can be significantly greater than a liability calculated using high-quality bond rates.
PBGC officials stated that this often leads to unpleasant surprises when a plan terminates, whereby a plan that was thought by plan participants to be overfunded turns out to be underfunded. The remainder of this report focuses mainly on the discount rates used by plan sponsors and trustees. Because bond interest rates are currently at historic lows (see fig. 2), and because plans’ assumed returns have not declined commensurately, bond-based approaches today that use little or no smoothing are likely to produce discount rates that are much lower than current assumed returns. The discount rate approaches and regulatory structure governing pension plans in Canada, the Netherlands, and the United Kingdom differ in various ways from those in the United States. As in the United States, most Canadian defined benefit plans—both public and private—are prefunded, according to Canadian experts with whom we spoke. Additionally, they noted that most plans are regulated at the provincial level, although some plans, such as those of federally regulated employers (banks, telecommunications companies, and inter-provincial transportation companies), are regulated by a separate federal regulator. Nonetheless, the regulatory principles are generally similar across all regulators, according to experts. There is no national pension insurance program in Canada. In the Netherlands, De Nederlandsche Bank (DNB) regulates pension discount rates. An official told us that there are no regulatory distinctions among public, private, or multiemployer defined benefit plans in the Netherlands. They also noted that pension plans in the Netherlands are separate legal entities from plan sponsors, and there is no pension insurance program in place. Benefit amounts can vary with plan investment performance and plan funded status. In the United Kingdom, private sector defined benefit plans are prefunded and public sector plans generally are not.
The Pensions Regulator is the regulating entity for private pension plans and a national pension insurance program is administered by the Pension Protection Fund. Plans have trustees who are autonomous from the sponsoring employers. The trustees and employers negotiate in setting plan policies, with assumptions and approaches subject to a risk-based process of review by the Pensions Regulator. According to experts, the Pensions Regulator uses what it calls a Scheme Specific Funding framework for evaluating funding requirements. Discounting practices for DB pension plans in Canada, the Netherlands, and the United Kingdom are discussed later in this report, and a summary of these countries’ DB regulatory requirements and discounting approaches can be found in appendix IV. For financial reporting purposes, private sector plan sponsors in these countries often follow the accounting standards promulgated by the IASB. Plan sponsors in the United Kingdom will often follow the local U.K. accounting standards promulgated by the Financial Reporting Council (FRC) or the IASB standards. IASB and FRC standards take an approach to the discount rate that is broadly similar to that in FASB standards. Public and private sector DB pension plans are subject to different rules and guidance regarding discount rates. For purposes of both funding and financial reporting, public plan sponsors generally use an assumed-return approach, while private sector single-employer plan sponsors use a bond-based approach for financial reporting purposes, but currently are allowed a 25-year smoothing option that is generally in use for funding purposes. Private sector multiemployer plans generally use an assumed-return approach for funding purposes, but also calculate an additional liability measure under ERISA based on an average of Treasury bond rates, while standards related to discounting for accounting purposes are typically not applicable to participating employers in these plans.
These various rules and guidance result in considerable variation in the discount rates currently in use: rates are generally highest for public plans and for funding private sector multiemployer plans, followed by discount rates for funding private sector single-employer plans (under the interest rate path of the past 25 years). The lowest discount rates among U.S. plans are for financial reporting by sponsors of private sector single-employer plans and for the additional liability calculated by multiemployer plans. Table 1 summarizes these laws, standards, and rules for different plan types. Different laws and standards also specify different actuarial cost methods and give different names to the resulting liability measures. See appendix II for more details. In addition, both FASB and GASB have differences in their requirements applicable to financial reporting by pension plan sponsors (and participating employers in the case of multiemployer plans) and financial reporting by the pension plans themselves. Under GASB standards, the discount rate requirements are the same for both plan sponsor and plan financial reporting. Under FASB standards, plan sponsors are required to discount using “settlement rates,” which can be based on the discount rates implicit in the current prices of annuity contracts, such as PBGC’s rates, but can also be based on current high-quality bond rates (the option plan sponsors generally use), while plans are required to discount using best-estimate assumed rates of return. With regard to U.S. financial reporting requirements, the focus of this report is on requirements applicable to plan sponsors and participating employers, not financial reporting by the plans themselves.
Public plans and private sector multiemployer plans generally report higher funded ratios, and their liabilities generally appear lower, than those of comparable private sector single-employer plans because these plans currently use very different discount rate approaches. Public plan sponsors’ and multiemployer plans’ discount rates are determined largely using an assumed-return approach, which generally produces higher discount rates, and thus lower liabilities, than the variations of bond-based approaches with little or no smoothing (which often produce lower discount rates) used by private sector single-employer plan sponsors for financial reporting purposes. For example, Mercer, a retirement industry consultant, estimated that at the end of 2013 an average private sector single-employer plan sponsor would have a discount rate of 4.88 percent for FASB reporting. According to the National Association of State Retirement Administrators, however, public plan sponsors assumed a return of 7.72 percent on average as of December 2013. At this difference in discount rates, the present value of a benefit payment due in 15 years for a private sector single-employer plan sponsor for financial reporting would be almost 50 percent higher than for a comparable public sector plan sponsor. Some experts (including those on the GASB) view differences between public sector and private sector single-employer discounting approaches as appropriate because they see public plans as going concerns that can best estimate their pension costs using very long-term assumed returns as their discount rate. There are other experts, however, who disagree with this viewpoint or see value in both types of measures. See the next section for a discussion of various considerations underlying different views on discount rate policy.
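The “almost 50 percent” comparison can be reproduced directly from the two rates cited above; a quick arithmetic check:

```python
# PV of $1 due in 15 years at the two average December 2013 rates cited in
# the text: 4.88% (FASB financial reporting, per Mercer) vs. 7.72%
# (public plan assumed return, per NASRA).
pv_single_employer = 1 / 1.0488 ** 15
pv_public = 1 / 1.0772 ** 15
excess = pv_single_employer / pv_public - 1
# The single-employer present value is roughly 49% higher -- i.e.,
# "almost 50 percent higher," as stated.
assert 0.48 < excess < 0.50
```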
Bond-based discount rates can vary considerably, and may not always result in significantly lower discount rates than assumed returns. In practice, there are variations of the bond-based approach that can result in discount rates that do not, to varying degrees, reflect current or recent market rates. These approaches have been implemented or proposed in order to provide stability for funding or financial reporting purposes but can have the effect of obscuring any measure of a market value of the liability (i.e., a connection to current market prices). For funding purposes under ERISA as amended by the PPA, but prior to the MAP-21 amendments, private sector single-employer plan sponsors who elected to use 2-year smoothing of interest rates based on high-quality corporate bonds would have used, in December 2013, Treasury-prescribed discount rates of 1.28 percent for benefit payments due in less than 5 years, 4.05 percent for payments due between 5 and 20 years, and 5.07 percent for payments due in 20 years or more. This simplified three-segment yield curve was adjusted up by MAP-21, with its boundaries tied to 25-year smoothing, to minimum rates of 4.94 percent, 6.15 percent, and 6.76 percent respectively, for the same month (see app. II for more details). In contrast, PBGC interest rate factors at December 30, 2013, were 3.00 percent for benefit payments within the first 20 years and 3.31 percent for payments beyond 20 years. At these discount rates, the present value of a benefit payment due in 15 years for a private sector single-employer plan under ERISA (MAP-21) segment rates would be closer to the value determined under the average 7.72 percent assumed return used by public plans than to the annuity settlement rate used by PBGC. Table 2 summarizes the preceding findings with regard to public and private sector discount rates.
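That closing comparison can be checked with the rates quoted above: for a payment due in 15 years, the applicable MAP-21 rate is the 6.15 percent second-segment rate (payments due between 5 and 20 years), and the applicable PBGC factor is the 3.00 percent rate for payments within 20 years.

```python
# PV of $1 due in 15 years under three of the December 2013 rates in the text:
pv_map21 = 1 / 1.0615 ** 15    # MAP-21 second-segment rate (5-20 years)
pv_assumed = 1 / 1.0772 ** 15  # average public plan assumed return
pv_pbgc = 1 / 1.0300 ** 15     # PBGC factor for payments within 20 years
# The MAP-21 value sits much closer to the assumed-return value than to
# the PBGC annuity-settlement value, as the text states:
assert abs(pv_map21 - pv_assumed) < abs(pv_map21 - pv_pbgc)
```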
In addition to the discount rate, the actuarial cost method used to allocate retirement costs among employees’ work years can affect the size of a pension plan’s liability. Public sector plan sponsors typically use actuarial cost methods that assign higher liabilities to younger workers as compared to the cost methods private sector single-employer plan sponsors use. As discussed in appendix II, the cost methods typically used by public plan sponsors tend to somewhat increase the liability relative to the cost methods used by private sector single-employer plan sponsors. However, this effect is often greatly offset by the effect of the differences in discount rates determined between the bond-based and assumed-return approaches. These different funding and financial reporting requirements for setting discount rates for different types of plans also result in differing amounts of discretion that plan sponsors can use in setting their discount rates. Of the GASB, ERISA, and FASB requirements with respect to discount rates, GASB standards and ERISA’s multiemployer funding standards leave the most room for judgment, because, for example, estimated long-term average rates of return on pension plan investments in equities are judgments rather than observable data, and such estimates can vary significantly even among experts. This stands in contrast to ERISA’s single-employer standards and FASB standards (for plan sponsors) which allow less discretion. Some experts said that the assumed-return approach could incentivize public plan sponsors to invest in riskier assets because doing so can increase the assumed-return discount rate, thereby lowering reported liabilities and reducing funding requirements.
In addition, some experts said that some public plan sponsors have sometimes inverted the recommended practice of first determining plan asset allocation—based on an assessment of investment goals and the amount of risk that can be taken on—and then deriving a discount rate based on an assumed long-term average return for that mix of assets. Instead, these experts said that some plan sponsors have set a target discount rate and then asked the plan’s investment team to develop an asset allocation to support it. Other experts stated that this practice does not occur. In a related way, some experts said that the assumed-return approach has led some public plan sponsors to issue pension obligation bonds. While issuing such bonds could help state and local governments improve plan funding, the increased capital into the pension fund is derived from the apparent arbitrage opportunity created for the plan sponsor by taking on more debt outside the plan. The use of an assumed-return discount rate allows the plan sponsor to capitalize on the difference between the assumed return on the invested assets and the interest rate on the pension obligation bonds, essentially taking credit for the assumed returns before actually achieving them. The use of pension obligation bonds effectively allows plan sponsors to invest on “margin,” or borrow money to invest in risky assets. This strategy comes with increased risk and is only successful if the sponsor’s pension assets actually do appreciate at a higher rate than the rate at which the plan sponsor borrowed. Some experts told us that it is also possible that a plan’s discount rate approach could influence future benefit levels. At the most basic level, the cost of benefits typically will appear lower using an assumed-return discount rate than using a bond-based discount rate, perhaps leading to compensation packages that are weighted toward more retirement benefits or to larger overall compensation packages.
Further, some experts expressed concern that sponsors of plans that have earned more than the assumed return, such as in a bull market, have given this extra return to participants as a benefit increase, but that benefits would not be cut at the same rate during periods of low returns. To the extent this occurs, it would mean that an assumed-return discount rate would need to be lowered, or the plan liability increased in some other manner, to reflect the fact that future bull-market gains would not be fully available to offset future bear market losses. On the other hand, many public plans have reduced some aspect of their benefit structure in recent years in response to low returns on assets. In contrast to the investment incentives that public plan sponsors (and multiemployer plans) may face, the use of a bond-based discount rate for private sector single-employer plan sponsors can create an incentive to invest in bonds to make pension contributions more predictable or financial reporting results less volatile. For plans using bond-based discount rates (with little or no smoothing), liability values will fluctuate with changes in market interest rates. A bond-based investment policy can be used so that plan asset values will move in tandem with liability values as interest rates fluctuate. The greater the match between a plan’s investment assets and the amount and timing of its projected benefit payments, the more stable the plan’s funded status will be. However, holding bonds means forgoing potentially higher returns from equities. Thus, the more that a plan matches assets to liabilities by purchasing similar-duration low-risk bonds, the more expensive the plan may become to fund, which may provide a countervailing disincentive to invest more in bonds. Additional incentive effects are discussed in the next section. For many of the experts we interviewed, the appropriate discount rate to use depends on the purpose of the measurement. 
Regardless of whether they believed the appropriate discount rate to use depends on the purpose of the measurement, all experts we interviewed pointed to various considerations that influenced their views on discount rate policy. Many of these experts supported reporting multiple liability measures and some said assumed-return rates may be too high. The discount rate used can vary depending on the purpose of the measurement. There are at least five key purposes for which one might determine a discounted value of future benefits: (1) determining the required or recommended amount that the plan sponsor should contribute into the plan; (2) reporting plan liabilities to shareholders, taxpayers, plan participants, or other stakeholders, such as for financial reporting; (3) determining the amount needed to terminate a plan, settle a portion of plan liabilities, or to guarantee or minimize risk on pensions earned to date; (4) expressing the value of participants’ benefits (for example, in putting a value on their total compensation); and (5) determining optional lump sum amounts payable to participants in lieu of an annuity. Several experts with whom we spoke also indicated that their views on the appropriateness of different rates for different purposes of the measurement vary between public and private plans. The discussions with these experts were focused on setting future policy and not necessarily related to laws, standards, and practices that currently apply to plans in the United States. As a plan will ultimately pay benefits out of contributions into the plan and investment earnings on those contributions, some experts said a measure of a plan’s liability based on an assumed return can be thought of as a best estimate of the assets a plan believes it needs to have on hand to fulfill its promises.
Experts told us that an assumed-return approach can be useful in determining this amount, as well as for estimating a plan sponsor’s most likely stream of future contributions into the plan. Some experts referred to this measurement purpose as “funding” or “budgeting,” as distinct from “accounting” or “financial reporting.” For funding purposes, public plan sponsors typically calculate a liability using an assumed-return discount rate, but there are no federal laws that require them to do so. In contrast, for funding purposes, private sector single-employer plan sponsors must follow ERISA standards for discounting to determine their minimum required contribution. Under ERISA, private sector single-employer plan sponsors use a bond-based discount rate to determine a minimum required contribution, while private sector multiemployer plans generally employ an assumed-return approach to determine this required contribution. Private sector single-employer plans include most private sector plans and about three-quarters of private sector plan participants. Another purpose for using a discount rate is in calculating and then reporting liabilities to shareholders, taxpayers, plan participants, regulators, or other stakeholders, such as in annual funding notices, or financial or actuarial reports. For example, participants in private sector plans receive information on the health of their plan through the Annual Funding Notice, which reports plan funded status based on funding measures under ERISA. For single-employer plans, MAP-21 requires that this Annual Funding Notice report the plan’s funded status both before and after MAP-21’s 25-year smoothing of interest rates. The Annual Funding Notice can show a funded status that is higher than it would be on a PBGC basis under current market conditions.
All publicly-traded companies follow FASB accounting standards for reporting pension liabilities to shareholders and other users, which allows investors to compare different companies’ pension liabilities along with other financial data. For this purpose, most private sector sponsors of single-employer plans use bond-based discount rates based on high-quality (AA-rated) bonds. In contrast, for public plans, the discount rate approach prescribed in GASB standards requires discounting that is closer to an assumed-return basis in most cases. Some proponents of bond-based approaches for financial reporting suggested that the bond quality should vary with the riskiness of the benefit promise. For example, a pension benefit promise that was deemed to be at risk—perhaps because of some combination of an underfunded plan and a weak plan sponsor—might be discounted at a B-rated bond rate, to reflect the risk of non-payment of the benefit promise, whereas a strong, well funded pension promise by a financially strong sponsor might be discounted at a AAA-rated bond rate. This would result in a weaker sponsor reporting a lower liability than a strong sponsor with a comparable plan. To determine the amount needed to terminate a plan or to guarantee pensions to date—a “solvency measure”—the discount rate, such as the interest rate factors used by PBGC, would typically be based on the price an insurance company would charge to take over the obligation. In a standard ERISA plan termination, the plan would purchase annuities from an insurance company and transfer the liability to it. A solvency measure could also be used to determine how much it would cost to guarantee pensions at any given moment, even if the plan was not terminated. Solvency measures typically exceed the liability measure disclosed under financial reporting standards. As a result, a plan could be insolvent if it needed to terminate, even if it appeared fully funded on a financial reporting basis (or on an ERISA basis).
For an ongoing plan, a liability could also be calculated using Treasury bond rates, as a measure of the plan assets that would be needed to minimize investment risk in the ongoing plan, while retaining demographic risk, without transferring the obligations to an insurance company. Plans can also offer participants lump sums that are based on IRS-published rates for high-quality corporate bonds. A plan sponsor, or both management and labor in a collective bargaining process, that wants to assess the value of retirement benefits as part of employees’ total compensation must decide how to discount future benefits to today’s dollars, among other assumptions. All proponents of a bond-based approach with whom we spoke advocated that approach for this purpose, so that pension benefits would be valued in a manner consistent with similar future financial promises (i.e., based on bonds with a similar level of risk of nonpayment). In contrast, most proponents of an assumed-return approach with whom we spoke advocate that approach for this purpose so that pension benefits would be valued in a manner consistent with a plan sponsor’s long-term budgeting estimates. Some plans offer a lump sum as an optional form of payment at retirement or termination of employment, as an ongoing plan feature. Some sponsors of plans that did not previously provide for a lump sum option have recently amended their plans to offer one-time lump sum payout options to retirees and other former employees as a settlement of the plan’s remaining pension obligation to those plan participants. Converting monthly annuity or lifetime benefit streams into a lump sum amount requires a discount rate, among other assumptions. The Internal Revenue Code requires that a lump sum offer be at least as large as that determined using bond-based discount rates (in particular, prescribed high-quality corporate bond yields, along with other prescribed assumptions). 
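The mechanics of the annuity-to-lump-sum conversion just described can be sketched as follows. The benefit amount, flat discount rate, and fixed payment horizon below are hypothetical simplifications; actual ERISA minimum lump sums use IRS-prescribed corporate bond segment rates and mortality assumptions rather than a single flat rate.

```python
# Sketch: convert an annual benefit stream into a lump sum by summing each
# year's payment discounted back to today. Flat rate and fixed horizon are
# hypothetical simplifications for illustration only.
def lump_sum(annual_benefit, rate, years_of_payments):
    return sum(annual_benefit / (1 + rate) ** t
               for t in range(1, years_of_payments + 1))

# A $20,000/year benefit paid for 20 years, at illustrative 5% vs. 8% rates:
at_lower_rate = lump_sum(20_000, 0.05, 20)   # larger lump sum
at_higher_rate = lump_sum(20_000, 0.08, 20)  # smaller lump sum
assert at_lower_rate > at_higher_rate  # lower discount rate -> larger lump sum
```

The direction of the effect is why the Internal Revenue Code floor matters: prescribing bond-based (lower) rates sets a minimum on the lump sum a plan may offer.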
Regardless of whether they believed the appropriate discount rate to use depends on the purpose of the measurement, all experts we interviewed pointed to at least one among six considerations that influenced their views on discount rate policy. These considerations can present trade-offs in setting discount rate policy and can be grouped into issues related to cost and risks, fairness and sustainability, and transparency. (See table 3 for a summary of these considerations). In terms of costs and risks, some experts identify trade-offs between two competing goals: having level and predictable costs versus being certain that plans will ultimately have sufficient funds to ensure benefit security for plan participants and minimizing risks to other stakeholders, including the entity sponsoring the plan, shareholders and PBGC in the case of a private sector plan, and taxpayers and beneficiaries of public services in the case of a public plan. Some experts also said that it could be useful to account for plan and sponsor characteristics in setting discount rates for funding purposes. Plan and sponsor characteristics could include the size of the plan relative to the size of the plan sponsor, the maturity of the plan, and the strength of the plan sponsor. In terms of issues of fairness and sustainability, experts disagreed on whether an assumed-return or bond-based approach to discounting would best ensure intergenerational equity for bearing the cost of these plans, and would best promote system sustainability. Additionally, experts who support the use of only the bond-based approach or both approaches identified transparency and comparability as important considerations for setting discount rate policy, but they disagreed as to whether these considerations suggested using an assumed-return or bond-based discount rate. Lastly, many experts cited financial economic theory as an important consideration in setting discount rate policy based on market valuation principles.
Other experts argued that this theory is not relevant to public plans because, as “going concerns” with very long time horizons, they do not have significant risk of plan termination; according to these experts, discounting based on long-term assumed-return expectations is a best estimate of long-term plan costs for public plans. Level and predictability of costs refers to the level of certainty a sponsor has that its pension costs will be affordable and stable from year to year. Reported costs based on bond rates will typically be higher than reported costs based on assumed rates of return, though the size of the difference depends on asset allocation and amortization periods. Experts also noted that liabilities based on point-in-time bond-market rates will fluctuate as interest rates rise and fall, causing costs to be unpredictable compared to costs based on assumed long-term returns, which tend to be more stable than bond interest rates because they are based on very long-term expectations. Some of these experts suggested smoothing discount rates by averaging bond rates over a number of years in order to make costs more predictable, as used by most private single-employer plan sponsors under ERISA provisions. Other experts preferred that, if smoothing were to be done, costs be smoothed directly rather than smoothing discount rates. For funding purposes, private sector single-employer plan sponsors generally use a smoothing approach to discount rates, but for financial reporting, they do not. Benefit security and risks to stakeholders are the risks that a plan will be unable to pay promised benefits to plan participants or will present serious financial challenges to other stakeholders, including the entity sponsoring the plan, shareholders, PBGC, and PBGC premium payers in the case of a private sector plan, and taxpayers and beneficiaries of public services in the case of a public plan. In the private sector, some companies fail and sometimes entire industries decline.
While participants in single-employer plans have PBGC protection, it is limited, and participants sometimes lose a portion of their benefits. Participants in multiemployer plans face greater risks: as noted earlier, their PBGC benefit limits are much lower, and PBGC projects that its multiemployer insurance program is itself likely to become insolvent within the next decade without further action. While states cannot go out of business and local governments usually do not, and both have the option to raise tax revenue or reduce services to pay for underfunded benefits, some local governments have entered into bankruptcy and some participants in public plans have lost some current benefits or anticipated future growth in benefits. Benefit losses can be particularly challenging for those public sector participants who are not covered by Social Security. Some experts told us that using an assumed-return discount rate could obscure the risk that a plan could ultimately be unable to pay for benefits, and/or a sponsor may be unable or unwilling to make necessary additional contributions, even though the plan might appear fully funded on a given date using an assumed-return discount rate. For example, a plan could be insolvent if it needed to terminate, even if it was fully funded using an assumed-return discount rate or a bond-based rate with significant smoothing, because the cost to actually buy out the pension benefits or transfer them to another party could be much higher than the liability using an assumed return. Another risk to using the assumed-return approach cited by some experts is that a plan’s assets could fail to grow at the assumed return, which would require higher-than-expected contributions or future reductions in benefits. The associated risks to participants would depend on how well the sponsor could use other financial resources to make up funding shortfalls and pay benefits.
Due to these risks, according to some experts, using a discount rate that is lower than an assumed-return rate—whether a bond-based rate (with little or no smoothing) or something in between an assumed-return rate and a bond-based rate—can be viewed as a lower-risk approach than a pure assumed-return approach. Specifically, because the discount rates would be more conservative, sponsors would have to put more money into the plan to be fully funded, which would provide a cushion against the possibility of actual returns falling short of those assumed and being inadequate to pay for future benefits. Plan and plan sponsor characteristics, such as the size of the plan relative to the size of the plan sponsor, the maturity of the plan, and the strength of the plan sponsor, may be key factors in determining an appropriate discount rate, particularly for funding purposes. Two supporters of the assumed-return approach for some purposes said that weak plan sponsors with uncertain futures might need to be more conservative in setting a discount rate because the sponsor might not be able to make up the difference (through higher future contributions) if plan investments perform poorly. Based on interviews with experts, we identified the following key plan and sponsor characteristics to consider in setting the discount rate: 1. The size of the plan relative to the size of the plan sponsor, since a small sponsor with a large plan may be less able to cope with assumed returns that fail to materialize. The size of a plan sponsor could be measured by metrics such as revenue or market capitalization for a corporation or revenue or tax base for a state or local government. 2. The maturity of the plan, since an aging plan with few new participants will wind down over a shorter time horizon. Such a plan will have less time to recover if it does not meet investment expectations. 3. 
The strength of the plan sponsor, since a sponsor with strong revenue projections is better positioned to take risks with funding or investment policy or with its discount rate approach. These characteristics can change over time. Indeed, it is not uncommon for a plan’s demographics to mature over time, for a plan to grow in size relative to the size of the plan sponsor over time, or for once-healthy plan sponsors to become financially strained. Related to this is that risks to plans and plan sponsors are “correlated,” meaning that a market downturn may both decrease the value of plan assets and weaken the financial health of the plan sponsor at the same time. These risks are also considerations in setting a discount rate. Intergenerational equity is the issue of whether current and future generations bear fair amounts of cost and risk. In general, a principle of public finance is that each generation should pay for the services it receives, and that borrowing should be for capital projects that benefit people over a long period of time. Experts disagreed on how to best design the discount rate to achieve the goal of intergenerational equity. Some experts stated that using an assumed-return approach passes uncompensated risk to future generations. Others had an opposing view that using a bond-based approach charges current generations in excess of a best estimate of the funds that would ultimately be needed for future pension benefits, which would pass surplus assets to future generations. System sustainability refers to whether public or private sponsors will want to continue to provide DB pension plans under one or the other discount rate regime. Several experts attributed historical declines in private sector DB coverage to bond-based discount rate policies that created too much volatility in reported DB liabilities, along with increases in reported costs. 
These experts noted that DB plans are often replaced by DC plans that shift risks onto participants, who, in the view of two experts, are less equipped to bear them than are plan sponsors. Another expert noted an incongruity between the fact that bond-based discount rates create an incentive for DB plans to move out of the stock market and into bonds, whereas the standard recommendation for DC participants is to invest in a mix of stocks and bonds (with the particular mix varying by age). Some experts argued that DB plans, particularly public plans, can and should take on some amount of investment risk, which could reduce long-term costs. Others said that the discount rate should be an assumed return to be consistent with plan investment practices. Other experts argued the opposite, that assumed-return discount rates lead to poor risk management practices—such as taking on too much investment risk or increasing benefits when plans appear overfunded. In this view, such practices could lead to funding shortfalls and crises that undermine system sustainability. One such expert argued that DB pension plans should be operated more like insurance companies in their risk management practices. Transparency and comparability refers to providing sufficient information for users of financial data to understand a pension plan’s financial position and to make comparisons across plans. A number of experts emphasized transparency or comparability considerations in setting the discount rate and many supported the reporting of multiple measures of liability using different discount rates. While some proponents of an assumed-return approach stated that multiple measures of liability would be confusing for stakeholders in the public plan environment, other experts were often concerned that one measure of liability reported at a single discount rate could not provide enough information for pension plan stakeholders to make informed decisions. 
For example, one expert compared using a single discount rate to driving across the country with only a single gauge—fuel, speed, or temperature. Nearly half of the experts we interviewed supported the use of multiple measures for valuing pension plan obligations. Some experts saw value in reporting a bond-based liability in addition to an assumed-return liability because of various concerns about asset allocation. To the extent the same actuarial cost method is used, the difference between the two liabilities would represent: (1) a measure of the long-term reduction in cost that a plan thinks it can achieve through investments that outperform a low-risk rate and (2) the amount of investment risk a plan takes on relative to a low-risk funding target. Additionally, many experts stated that reporting multiple measures of liabilities would be useful in providing transparency. Some experts felt that more complete information for all key stakeholders would be an improvement over currently available information, while others said that reporting liabilities based on multiple discount rates would provide fuller transparency into a plan’s finances than using a single rate. Some experts also took the view that public plans providing liabilities at both a bond-based and assumed-return discount rate could provide a broader range of information to plans and employers to guide plan policies, and could potentially provide a useful check on the assumed-return measurement. At least one large public plan voluntarily provides multiple measures of liability using different discounting approaches (as well as multiple actuarial cost methods). The plan discloses a number of estimates of liability based on low-risk bond rates as well as estimates of liability using assumed returns. It also provides a narrative explaining what the different numbers represent. 
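The two-measure comparison described above can be sketched numerically. All cash flows and rates below are hypothetical; the point is that, holding the actuarial cost method fixed, the gap between the bond-based and assumed-return liabilities is a rough measure of the value the plan places on expected investment outperformance.

```python
# Hedged sketch (all figures hypothetical): the same projected benefit cash
# flows valued at an assumed-return rate and at a low-risk bond-based rate.

def pv_of_cash_flows(cash_flows, rate):
    """Discount a list of year-end benefit payments back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# A flat hypothetical stream: $1 million of benefits per year for 30 years.
benefits = [1_000_000] * 30

liab_assumed = pv_of_cash_flows(benefits, 0.075)  # assumed-return rate
liab_bond = pv_of_cash_flows(benefits, 0.040)     # bond-based rate

# The difference: (1) the long-term cost reduction the plan expects from
# outperforming a low-risk rate, and (2) the investment risk taken on
# relative to a low-risk funding target.
risk_premium_value = liab_bond - liab_assumed
```

A plan that looked fully funded against `liab_assumed` would still fall well short of `liab_bond`, which is the concern some experts raised about relying on a single assumed-return measure.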
As noted earlier, while multiemployer plans generally use an assumed-return approach for funding purposes, they also calculate an additional liability measure under ERISA based on a 4-year weighted average of Treasury bond rates. Experts had differing views on the significance of this “current liability” calculation. In contrast to experts favoring multiple measures, nearly a quarter of the experts we interviewed argued that only a bond-based approach should be used to value plan obligations while nearly a third of the experts we interviewed favored use of only the assumed-return approach. For example, some advocates of each of the assumed-return and bond-based approaches did not see value in the other approach, and as noted earlier, some even saw potential damage. Some experts who saw the bond-based approach as the only correct approach for all purposes argued that including a liability based on an assumed-return approach is incorrect based on economic theory and could result in lower contributions, higher benefits, or riskier investment strategies. Some advocates of using only the assumed-return approach argued that including a liability based on a bond-based approach is irrelevant for public plans. One expert noted that requiring public plans to report a bond-based measure could result in pressure to fund to this much higher measure, and two experts said requiring state and local governments to fund their plans using a bond-based measure could put pressure on them to change their pension plans from DB to DC. Some of these experts felt that more extensive risk analysis and disclosure, using techniques such as stochastic modeling and stress testing, would provide more useful and relevant information than the addition of a bond-based liability measure. Nevertheless, some of the experts who principally advocate for one particular approach also said that they could see value in multiple measures. 
Some experts who principally support a bond-based approach thought that if a plan were trying to earn returns in excess of low-risk bonds, reporting a funding target based on the assumed-return measure could be worthwhile. Some experts did not think that plans should attempt to earn a risk premium, and therefore, their assumed rate of return would be the same as the bond-based rate, since the plan would only invest in low-risk bonds. Some advocates of the assumed-return approach for at least some purposes said that reporting multiple measures could provide informational value. Figure 3 illustrates some of these lines of argument. According to some experts, even within the assumed-return discount rate framework, the returns assumed by public plans have been too high. More specifically, the assumed return among most public plans surveyed in 2013 was between 7.5 and 8 percent. Some experts said that these assumed-return discount rates are currently too optimistic, and a few said it would be difficult to achieve such returns given current market conditions. Further, some experts cited current interest rates, which are historically very low, as indicative of lower expectations for future returns. In contrast, two experts were more optimistic about future returns, including one expert who cited an analysis of price-to-earnings ratios on stocks as indicating potential for strong long-term future returns. Two experts noted that public plans’ assumed returns have been declining. One of these experts said this decline indicates that the system is making necessary self-corrections. The other expert viewed the discount rate reductions as too small and too gradual. Some experts we spoke to cited the historical returns of assets in a typical pension plan portfolio as evidence for the appropriateness of assuming a rate of return of around 8 percent. 
Some experts have cited, for example, the average level of historical returns over particular periods or the distribution of returns over rolling long-term historical periods, such as all possible 30-year periods for which there is good return data. However, by themselves, historical returns have limited usefulness in resolving disagreements over the appropriate discount rate. We modeled returns on typical pension portfolios over past periods, but identified numerous challenges with using historical data to generate or support an assumed-return assumption. First, analysis of returns on overlapping rolling historical periods has significant statistical limitations. Second, historical returns vary with the time period used in the analysis. Furthermore, future return expectations will depend in part on current economic variables that may not be consistent with any particular historical time period. Third, actual returns for any particular plan would also depend on plan characteristics and cash flows. Lastly, investment returns and plan benefit levels are not independent variables. Details of our analysis, and its limitations, can be found in appendix III. As discussed, a plan has an unfunded liability or is underfunded when its liabilities are greater than assets, so that the funded ratio is less than 100 percent. The amount of the unfunded liability is equal to the excess of liabilities over assets. In the United Kingdom, for example, a more conservative (i.e., lower) discount rate at or close to government bond yields is often used for benefits of retired workers as compared to some assumed return in excess of high-quality bond yields used for current workers. The precise discount rate that can be reasonably justified by a plan depends on the strength of its sponsor. The regulator uses a risk-based approach that considers plan and sponsor characteristics to determine the reasonability of the discount rate and other plan assumptions. 
Canadian experts said that private sector Canadian plans use two liability measurements to determine minimum required contributions: (1) a solvency-liability measurement based on an assumption of plan termination, using bond-based discount rates, and (2) a going-concern liability measurement generally based on an assumed return (and typically with projections of future salary increases). The minimum contribution requirement is based on the larger of two different “amortization” calculations, one to pay down the unfunded solvency liability, the other to pay down the unfunded going-concern liability. The two measurements reflect the dual goals of solvency and long-term returns. The required solvency measure reflects, in part, the absence of a pension insurance program. The solvency measure generally consists of two parts: an amount for plan participants who would be assumed to take a lump sum upon plan termination, and an amount for plan participants who would be assumed to take an annuity upon plan termination. Lump sum values are calculated in accordance with Canadian Institute of Actuaries standards, which specify discount rates based on a formula tied to Canadian government bond rates plus a spread. For participants assumed to take an annuity, the solvency measure reflects the market prices insurers charge for immediate and deferred annuities. As noted earlier, using annuity prices can be considered a bond-based approach since such prices are influenced by, and will vary with, market interest rates. These annuity discount rates ranged from about 3.6 to 4.0 percent as of December 2013. Given the recent low interest-rate environment in Canada, private plans have generally had to fund to the more conservative solvency calculation. 
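The “larger of two amortizations” mechanism described above can be sketched as follows. The dollar amounts, discount rates, and amortization periods (5 and 15 years) are hypothetical assumptions for illustration, not the statutory Canadian values.

```python
# Illustrative sketch of the Canadian dual-measure contribution idea:
# amortize the unfunded solvency liability and the unfunded going-concern
# liability separately, then require the larger payment. All inputs are
# hypothetical assumptions.

def level_amortization(unfunded, rate, years):
    """Level annual payment that pays down `unfunded` over `years` at `rate`."""
    if unfunded <= 0:
        return 0.0
    return unfunded * rate / (1 - (1 + rate) ** -years)

assets = 80_000_000
solvency_liability = 110_000_000       # bond-based, assumes plan termination
going_concern_liability = 95_000_000   # assumed-return basis

# Hypothetical amortization bases: a short period at a bond-like rate for
# the solvency deficit, a longer period at an assumed return otherwise.
pay_solvency = level_amortization(solvency_liability - assets, 0.04, 5)
pay_going = level_amortization(going_concern_liability - assets, 0.06, 15)

# Minimum required special payment: the larger of the two amortizations.
min_special_payment = max(pay_solvency, pay_going)
```

With low interest rates, the solvency deficit is both larger and amortized faster, so the solvency calculation governs, mirroring the observation that Canadian private plans have generally had to fund to the more conservative solvency measure.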
Experts also told us that a number of Canadian regulators have extended temporary solvency funding relief to some private sector single-employer plans following low valuations of their asset portfolios resulting from the 2008 market decline. In contrast, experts told us that most Canadian public plans and some multiemployer plans are generally exempted from the solvency assessment for funding or have been granted temporary solvency funding relief. Experts told us that because these plans are considered going-concerns, they are allowed to make contributions based solely on an assumed-return discount rate. This is similar in concept to practices for such plans in the United States, though the actual levels of assumed-return assumptions differ between the two countries, as discussed later in this section. However, Canadian Institute of Actuaries standards require that a (bond-based) solvency liability be calculated and provided by all plans, including public and multiemployer plans. An expert told us that this Canadian Institute of Actuaries requirement is not a public disclosure requirement; rather the information is provided only to plan sponsors, plan members, and regulators. Nonetheless, it stands in contrast to U.S. practice, where such a bond-based measure of liability is generally not provided by public plans. In the Netherlands, plan liabilities are measured using a bond-based approach. Benefits projected to be paid within the next 20 years are discounted using a 3-month average of a market interest rate curve. For benefits projected to be paid beyond 20 years, rates are extrapolated from the market interest rate curve to approach a predetermined rate, set at a fixed rate of 4.2 percent by an independent commission and introduced by De Nederlandsche Bank (DNB) in September 2012. A Dutch official told us that an independent commission has recently issued an advisory on the determination of this rate. 
In the future, the fixed level of 4.2 percent will be replaced by a 10-year moving average of the 20-year forward rate. An official noted that the Netherlands bases a plan’s funding target on the riskiness of the plan’s asset allocation. Plans are subject to a base funding target of 105 percent of the plan’s liability, which protects nominal accrued benefits, and a risk-adjusted target based on the riskiness of a plan’s asset allocation. Plans must fund to these risk-adjusted funding targets, which increase as a plan’s asset allocation gets riskier in order to provide a financial cushion to protect against investment risk. This is in contrast to the dynamic in the United States where, under the assumed-return approach used by U.S. public plan sponsors and private sector multiemployer plans, the funding target (which is the liability) decreases as a plan’s asset allocation gets riskier. For determining minimum required contributions, plans may use either market interest rates, a 10-year moving average of market interest rates, or assumed returns. The option to use an average of market interest rates or assumed returns provides plans with some ability to avoid sharp fluctuations in minimum required contributions. However, the funding target would still be the risk-adjusted target based on the bond-based liability. For future projections of assets and liabilities, plans may use assumed returns. Similarly, plans that become underfunded must submit a recovery plan to the regulator but are allowed to use an assumed return to project their ability to close the funding deficit. However, in developing assumed returns, the maximum expectations that can be used are regulated. Currently, the maximum acceptable assumed return on the equity portion of the portfolio, as established by an independent commission as of December 2013, is 7 percent (the overall assumed return would also reflect the other asset classes in the portfolio). 
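The Dutch approach of using market rates out to 20 years and then extrapolating toward a predetermined ultimate rate can be sketched in simplified form. The actual DNB methodology is more elaborate; the flat market curve, the hypothetical 2.8 percent 20-year rate, and the linear 40-year convergence used here are all simplifying assumptions for illustration only.

```python
# Greatly simplified sketch of the Dutch discounting idea: market rates up
# to year 20, then extrapolation converging toward a predetermined ultimate
# rate (4.2 percent per the text). Market rate and convergence path are
# hypothetical assumptions, not the DNB method.

ULTIMATE_RATE = 0.042
market_rate_20y = 0.028  # hypothetical 20-year market rate

def discount_rate(maturity_years):
    """Discount rate for a benefit due `maturity_years` from now."""
    if maturity_years <= 20:
        return market_rate_20y  # stand-in for the observed market curve
    # Converge linearly from the 20-year market rate toward the ultimate
    # rate over the following 40 years (the horizon is an assumption).
    weight = min((maturity_years - 20) / 40, 1.0)
    return market_rate_20y + weight * (ULTIMATE_RATE - market_rate_20y)

rate_10 = discount_rate(10)   # pure market rate
rate_40 = discount_rate(40)   # halfway toward the ultimate rate
rate_80 = discount_rate(80)   # fully converged to the ultimate rate
```

The design point is that very long-dated benefits, for which market rates are unreliable or unobservable, are anchored to a stable predetermined rate rather than to thin market data.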
The recovery plan details specific measures that will enable a plan to return to fully-funded status within the allotted time. U.K. experts told us that under the U.K.’s Scheme Specific Funding framework, discount rates used by private plans for funding purposes are plan-specific and may incorporate elements of either or both of the bond-based and assumed-return approaches. In setting their discount rate or rates, plans can choose to apply the bond-based approach, the assumed-return approach, or a combination of the two, which is then subject to a risk-based review by the regulator. The regulator urges plans to consider the ability of the sponsor to assume risks of plan underfunding resulting from their discount rate and other plan assumptions. The weaker the sponsor relative to the plan, the more prudent should be the plan’s strategy and approach to the discount rate (and other assumptions). A weak sponsor may find it prudent to take less risk than a strong sponsor and use a discount rate that assumes lower returns. The relevant regulation states that “the rates of interest used to discount future payments of benefits must be chosen prudently, taking into account either or both–(i) the yield on assets held by the scheme to fund future benefits and the anticipated future investment returns, and (ii) the market redemption yields on government or other high-quality bonds.” The regulator reviews the discount rate and other plan assumptions to determine if any appear to be too high or inappropriate given plan risks and sponsor strength. The regulator cautions plan trustees in published guidance to regularly assess sponsor strength because it may fluctuate significantly over relatively short periods of time. As part of its evaluation, the regulator also compares the size of the plan relative to the size of the plan sponsor. The regulator then conducts risk-based assessments to determine which plans may require additional scrutiny. 
Plans are required to be fully funded or they must set up a recovery plan, which guides funding decisions until the deficit is eliminated and is overseen by the regulator. According to an official from the U.K. Pensions Regulator, plans operating under a recovery plan may assume a higher return over the recovery period than the discount rate used to calculate the plan’s liability, provided that the recovery plan return assumption is justified by the investment strategy. The same official also told us that about 75 percent of plans were in recovery status as of June 2013. U.K. discount rates for funding purposes frequently differ between the retired and current worker portions of the plan populations. The projected benefits of retired plan participants are frequently discounted largely with reference to U.K. government bond rates, known as “gilts,” and to corporate bond rates. The projected benefits of current workers (and deferred members) are frequently discounted at gilt rates plus 2 to 3 percent for the period up to retirement. Generally, this practice acknowledges that the benefits of retirees should be discounted at a more conservative rate than the benefits of current workers, for whom more time is available to make up for any adverse plan experience, according to officials. As discussed, the precise discount rate used—be it based on government bond yields or varying levels of assumed returns in excess of bond yields—is plan-specific and depends on the strength of the sponsor, subject to a risk-based review by the regulator. Currently, the net result of this plan-specific approach is discount rates of gilt rates plus 0.8 to 1.3 percent. For plans in recovery, the average overall discount rate has ranged in recent years from 4.3 to 5.7 percent. As noted earlier, public plans in the United Kingdom are generally financed on a pay-as-you-go basis, with plan benefits paid out of tax revenue. 
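The U.K. practice described above of discounting retirees’ benefits near gilt yields while discounting current workers’ benefits at gilts plus a spread for the pre-retirement period can be sketched as follows. The gilt rate, spread, benefit amounts, and timings below are hypothetical illustrations, not actual U.K. figures.

```python
# Hedged sketch (hypothetical rates and amounts) of the U.K. dual-rate
# practice: retirees discounted at a gilt-like rate throughout; active
# members discounted at gilts plus a spread until retirement.

GILT_RATE = 0.033
PRE_RETIREMENT_SPREAD = 0.025  # within the 2 to 3 percent range noted above

def pv_retiree(payment, years_until_paid):
    """Retiree benefit: discounted at the gilt rate for the whole period."""
    return payment / (1 + GILT_RATE) ** years_until_paid

def pv_active(payment, years_to_retirement, years_in_payment):
    """Active member's benefit: gilts plus spread before retirement,
    gilt rate from retirement until the benefit is paid."""
    pre = (1 + GILT_RATE + PRE_RETIREMENT_SPREAD) ** years_to_retirement
    post = (1 + GILT_RATE) ** years_in_payment
    return payment / (pre * post)

# The higher pre-retirement rate makes a benefit owed far in the future to
# an active member cheaper to fund today than the same benefit owed on the
# same date to someone already retired.
same_benefit_retiree = pv_retiree(10_000, 25)
same_benefit_active = pv_active(10_000, 20, 5)
```

This mirrors the rationale the officials gave: more time remains to correct adverse experience for active members, so a less conservative rate is tolerated for the pre-retirement period.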
Public plan sponsors make contributions to a notional pension account that are calculated based on a discounted measure of the plan’s liabilities. The discount rate used for this purpose is 3 percent above the U.K. Consumer Prices Index. When determining their liabilities, U.S. public plans generally use higher discount rates than plans use in Canada, the Netherlands, and the United Kingdom. In the United States, it is common for public plans to use a 7.5 to 8 percent long-term assumed rate of return. Experts told us that Canadian public plans generally use funding discount rates, using the assumed-return approach, of about 6 percent or lower, and that Canadian private plans use similar assumed-return rates for their going-concern valuations, and even lower rates—under current market conditions—for their solvency valuations. According to a Dutch official, the funding discount rates used in the Netherlands, using a bond-based approach, depend on the duration of plan liabilities and can fluctuate with the market but cannot currently exceed 4.2 percent, unless amended by the Dutch independent commission. In the United Kingdom, funding discount rates used by private plans in recovery have most recently been about 4.3 percent, with the average excess return assumed over conventional 20-year gilts at about 1 percent. In addition, discount rates for financial reporting purposes under International Accounting Standards Board (IASB) and Financial Reporting Council (FRC) standards (and FASB in the U.S.) are all bond-based and lower than U.S. public plan discount rates. Some of the differences in discount rates between the United States and these countries are accounted for by differing approaches to determining these rates. Bond-based discount rates generally will be lower than assumed-return discount rates under current and most market conditions. U.S. public plans use an assumed-return approach for funding and accounting purposes. 
In contrast, the Dutch discount rate, one of the two Canadian funding measures, and the IASB, FRC, and FASB discount rates are all bond-based. However, in those cases where these other countries use assumed returns, or some allowance for assumed returns—for example, one of the two Canadian measures, the rate for Dutch recovery plans, and the U.K.’s plan-specific approach—these assumed returns tend to be lower than assumed returns currently used by U.S. public plans. One potential explanation for differences in the discount rates is the greater degree of government oversight in Canada, the Netherlands, and the United Kingdom, where experts said regulators routinely scrutinize discount rates. Unlike for public plans in the United States, Canadian pension regulators’ authority to reject an actuarial report allows them to implicitly set the boundaries for reasonable assumptions. One expert stated that Canadian provincial regulators’ scrutiny “sets the tone” even for plans that are not subject to solvency measurements. Another expert said that in some jurisdictions, the regulator explicitly tells plans the acceptable range of discount rates to use. In addition, another expert told us that while Canadian Institute of Actuaries standards state that assumed returns should be best estimates unless otherwise required by the circumstances of the calculation, many regulators have sent notices to plans under their jurisdiction that margins for adverse deviation are needed. The overall acceptable net assumption tends to vary across provincial and federal regulators but, as discussed, is generally not higher than about 6 percent, under recent and current conditions. In the Netherlands, De Nederlandsche Bank’s use of prescribed bond-based discount rates obviates the need for explicit scrutiny of the discount rate assumption. 
However, plans are allowed to make assumed-return assumptions for recovery plans, and for this purpose an independent commission sets a ceiling on the maximum acceptable assumed return on the equity portion of plan assets. In the United Kingdom, the Pensions Regulator has legal powers to ensure that the discount rate and other plan assumptions are prudent given plan risks and sponsor strength. Differences in discount rates also arise from variations in the regulatory framework of each country, which reflect different views among governments and regulators on the most appropriate way to protect DB pension benefits for plans under their jurisdiction. Experts told us that under the Canadian two-measurement funding standard, private plans have less incentive to be overaggressive with their assumed-return assumption used for the going-concern measurement because they must generally fund their plan using a bond-based solvency measurement, which is currently the higher of the two measures. The Netherlands’ adherence to market interest valuation of accrued benefits through use of a bond-based approach to discounting for all plans is the most conservative among the countries we studied and, consequently, results in generally the lowest discount rates. As for the United Kingdom, the discretion to determine a discount rate approach under the Scheme Specific Funding framework necessitates negotiation among plan sponsors, trustees, and advisors, and may involve the regulator. This process facilitates a system of checks and balances that help to ensure that reasonable plan assumptions, including the discount rate, are used, experts said. Although our report illustrates the differences of opinion over pension discount rates, we found one significant area where there is some, but not universal, room for agreement. Specifically, many experts supported providing multiple measures of liabilities for different purposes to provide a more complete picture of pension plan finances. 
The practices of selected foreign countries—notably, Canada, the Netherlands, and the United Kingdom—may provide insight into ways that other pension systems discount liabilities, applying a variety of approaches to discounting, with significant government oversight, and generally using lower discount rates than U.S. assumed returns. In general, as in many aspects of pension plan finances, additional transparency and information about discount rates and their impact can be useful. There may be value in providing multiple measures of liability and cost, using both assumed-return and bond-based discount rates— carefully labeled to describe their purpose (e.g., with some measures, such as funding targets, not even necessarily labeled “liabilities”)—and with explanations of what these measures do and do not represent. The measurements resulting from these different discount rate approaches can ultimately improve the understanding, management, and governance of the finances of pension plans. In short, there may be value in having multiple liability measures to arrive at funding, benefit, and investment policies that will better balance risks and rewards to plan participants and all other stakeholders. Despite the challenges that many plans currently face, traditional DB plans in the public and private sector continue to play an important role in American retirement security. This is especially true in the public sector where many current workers and retirees do not participate in Social Security and may rely on these pensions as their primary source of retirement income. Policy options to address these plans’ challenges may be addressed by fostering the use of appropriate liability measurements and discount rate assumptions and increased transparency concerning their financial health. However, any such options should also be sensitive to the crucial need to ensure that benefits remain adequate to current and future retirees and their families. 
We provided officials from the Department of the Treasury and the Pension Benefit Guaranty Corporation with a draft of this report. They provided technical comments that we incorporated, as appropriate. In addition, we provided officials from the Financial Accounting Standards Board and the Governmental Accounting Standards Board with a draft of this report. They provided technical comments that we incorporated, as appropriate. We also provided select experts and officials from the countries we reviewed with portions of the draft report that addressed aspects of the pension funds in their jurisdictions. We incorporated their technical comments, as appropriate, as well. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to appropriate congressional committees, the Secretary of the Treasury, the Director of the Pension Benefit Guaranty Corporation, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you have any questions about this report, please contact Charles Jeszeck at (202) 512-7215 or [email protected] or Frank Todisco at (202) 512-2700 or [email protected]. Mr. Todisco meets the qualification standards of the American Academy of Actuaries to address the actuarial issues contained in this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are found in appendix V. 
To analyze differences of opinion concerning discount rates for pension plan valuations and funding, GAO examined (1) the significance of the differences in discounting approaches used by public versus private sector pension plans; (2) the purposes for measuring the value of a plan’s future benefits and key considerations for determining plan discount rate policy; (3) the approaches select countries have taken to choose discount rates. This appendix provides an account of the information and methodology we used to answer these questions. To address our objectives, we spoke with experts, including actuaries, economists, and other pension experts, from a variety of organizations and constituencies who represent diverse points of views regarding discount rates. These experts’ opinions cover a wide range of views on the appropriate way to set discount rates. We examined relevant literature on pension discount rates. We also reviewed relevant provisions in the Internal Revenue Code and the Employee Retirement Income Security Act, as amended; relevant federal regulations; relevant pension accounting standards issued by the Governmental Accounting Standards Board, Financial Accounting Standards Board, and the International Accounting Standards Board; and relevant actuarial standards of practice issued by the Actuarial Standards Board. For our analysis of historical returns and their implications, we spoke to experts and reviewed historical data on bond and stock returns as tracked using historical data from 1926 to 2012 in the Ibbotson Stocks, Bonds, Bills, and Inflation Historical Yearbook. We calculated average annual time-weighted geometric and arithmetic returns for various asset allocations and over various time periods within the 1926 to 2012 period. We also developed two stylized pension plans—a growing plan and a maturing plan—for which we calculated dollar-weighted returns over each of the three consecutive 29-year periods from 1926 to 2012. 
A growing plan is characterized by contributions into the plan exceeding benefit payments out of the plan. The ratio of cash in to cash out is set at 10 to 1 for the entire 29-year period. The ratio of contributions relative to total plan assets starts at about 40 percent in year 1 and declines to about 2 percent in year 29. A maturing plan is characterized by decreasing contributions relative to benefit payments, with benefit payments beginning to outpace contributions during the middle years of the 29-year period and continuing to increase relative to contributions for the remainder of the analysis period. The ratio of cash in to cash out starts at 3 to 1 in year 1 and ends at 0.65 in year 29. The cash in to cash out ratios for the intervening years (from 2 through 28) are determined through linear interpolation. The ratio of contributions relative to total plan assets start at about 43 percent in year 1 and declines to between 6 and 14 percent in year 29, depending on the period. These plans were provided as illustrative examples of how the dollar-weighted return of a particular plan can differ from a time-weighted return. To examine other countries’ approaches, we asked experts to identify countries with significant defined benefit systems and active controversies with regard to discount rates. Ultimately, we chose to examine Canada, the Netherlands, and the United Kingdom. These countries are not meant to be a representative sample of international practice; rather, they represent countries with contrasting approaches to discounting and ongoing discussions about the appropriate rate of discount. We spoke to experts in these countries and reviewed publicly-available documents. We did not conduct an independent legal analysis to verify the information provided about the laws, regulations, or policies of the foreign countries selected for this study. 
Instead, we relied on appropriate secondary sources, interviews with relevant officials, and other sources to summarize each country’s approach to discounting pension liabilities. We also provided select experts and officials from the countries we reviewed with portions of the draft report that addressed aspects of the pension funds in their jurisdictions. We incorporated their technical comments, as appropriate, as well. As discussed in the background, discount rate guidelines and practices for U.S. DB pension plans and plan sponsors differ for funding and financial reporting purposes, and for different plan types: public sector plans, private sector single-employer plans, and private sector multiemployer plans. As discussed, for both funding and financial reporting, sponsors of private sector single-employer plans generally use a bond-based approach, while sponsors of public plans generally use an assumed-return approach; private sector multiemployer plans generally use an assumed-return approach for funding, but also calculate an additional liability measure under ERISA based on an average of Treasury bond rates, while participating employers usually do not have to report a liability for financial reporting. However, the different plan sectors have unique guidelines and practices for arriving at a final discount rate for different purposes. Multiemployer plans have a choice of actuarial cost methods under ERISA for determining the accrued liability on which minimum required contributions are primarily based. Multiemployer plans also disclose, on Schedule MB of Form 5500, two liability measures based on the unit credit actuarial cost method: an accrued liability using the plan’s assumed return discount rate, and a measure called the current liability using a discount rate based on a 4-year weighted average of 30-year Treasury rates. use the entry age actuarial cost method. 
Using this method, a worker’s service and salary are both projected to retirement to estimate a projected benefit. The cost of this benefit is allocated over the worker’s entire service (both past and projected future) as a level percentage of his or her salary (for plans whose benefit formula is tied to salary levels). The accrued liability is the value of these allocated costs accumulated up to the point of the worker’s service to date. The resulting liability measure is simply called the total pension liability. For active workers, holding actuarial assumptions constant, an entry age normal accrued liability (GASB method) will typically be somewhat higher than a projected unit credit accrued liability (FASB method), which in turn will typically be somewhat higher (for benefit formulas tied to salary levels) than a unit credit accrued liability (ERISA method). For plan participants who are already retired or terminated employment, these three methods produce the same liability. In addition, the different funding and financial reporting requirements for setting discount rates for different types of plans also result in differing amounts of discretion that plan sponsors can use in setting their discount rates. Of the GASB, ERISA, and FASB requirements with respect to discount rates, GASB standards and ERISA’s multiemployer funding standards leave the most room for judgment, because, for example, estimated long-term average rates of return on pension plan investments in equities are judgments rather than observable data, and such estimates can vary significantly even among experts. This stands in contrast to ERISA’s single-employer standards and FASB standards (for plan sponsors) which allow less discretion. Table 5 summarizes differences in the laws, standards, and practices that govern discount rate approaches across plan types in the United States. 
Recently revised GASB standards prescribe a “blended approach” to determining pension discount rates for financial reporting by public plan sponsors, with implementation required by fiscal years beginning after June 15, 2014. Under the new GASB standard, plan sponsors would use an assumed-return approach to the extent they project that current assets, assumed returns, and future contributions for current members will be sufficient to provide for benefits; for any projected shortfalls, public plan sponsors would use 20-year, tax-exempt general obligation municipal bond interest rates with an average rating of AA/Aa or higher. Thus, for some plans the composite discount rate will be a hybrid of the assumed-return approach and bond-based approach. Industry experts have indicated that, on average, the composite discount rate is likely to be closer to the assumed-return rate for two reasons: many plans have contribution policies which, combined with current plan assets, are likely to be projected to cover projected benefit payments, so that the blended discount rate will be the same as the assumed-return discount rate; and for those plans where there is a projected insufficiency, the bond-based approach would only apply to a portion of plan liabilities. With regard to the rates they use to discount benefits, single-employer sponsors are generally required to use a bond-based approach to determine the minimum required contribution. However, within this framework, these sponsors have options which can result in measurements of plan liabilities that may not be closely tied to current market conditions. Plan sponsors are given the option of using a full yield curve approach, which matches projected benefit payments to high- quality corporate bond interest rates averaged over a current month, so that under this option, the measurement of plan liabilities would be tied to current or recent market conditions. 
A plan choosing this approach would discount a benefit payment due in 10 years at the yield curve rate as published by Treasury for year 10. However, single-employer plan sponsors may also elect to discount using a simplified three-segment yield curve published by Treasury, with the three different segment interest rates applicable to benefit payments due in less than 5 years, 5 to 20 years, and 20 years or more. These segment rates are based on a 2- year average of bond rates and also cannot be higher or lower than maximum and minimum segment rates as set in law in 2012 as part of the Moving Ahead for Progress in the 21st Century Act (MAP-21). The MAP- 21 maximum and minimum apply to plans that use the segment rate approach, and are based on long-term (25-year) bond averages. MAP- 21’s effect on discount rates is designed to be temporary. Because average interest rates over the past 25 years are significantly higher than more recent market rates, the MAP-21 changes had the effect of significantly increasing ERISA discount rates over what they would otherwise have been, thereby lowering measurements of plan liabilities and reducing minimum funding requirements. Under the bond-based approach used by private sector single-employer pension plans under ERISA (as well as common practice under FASB), a single pension plan will use different discount rates to calculate the present value of benefits that will be paid out at different points in the future. This means that different plans can end up with very different average rates of discount depending on the age of the plan’s participants. In contrast, a public plan using an assumed rate of return would typically discount all future benefit payments at the same assumed return. Because the assumed return is based on asset allocation, the rates will vary depending on how a plan allocates its assets. 
This is an example of how the bond-based approach determines discount rates based on characteristics of a plan’s liabilities, whereas the assumed-return approach determines discount rates based on characteristics of the assets being used to finance the liabilities. As discussed, under ERISA, private sector multiemployer plans generally discount using an assumed rate of return for funding purposes. However, these plans also calculate a liability based on a bond-based discount rate under ERISA, which can sometimes affect the timing of minimum required contributions and is also used in the calculation of the maximum deductible contribution. Experts had differing views of the significance of the reporting of this measure. FASB financial reporting standards are separate from ERISA funding standards but plan sponsors also typically use high-quality corporate bond rates to compute liabilities, with some key differences. The corporate bond rates that plan sponsors use to satisfy FASB standards are snapshots of market interest rates on the measurement date, producing liabilities based on current market interest rates. This approach is different from discount rates based on ERISA segment rates, which are averages of past and present rates. Also, FASB standards allow companies to select a hypothetical matching bond portfolio, or a yield curve (extrapolated for projected benefits with long durations), rather than rely on one particular set of published rates (such as those produced by Treasury for ERISA). Also, unlike the ERISA funding target, which is based on worker’s service and salary to date, FASB requires companies to report a projected benefit obligation, which for pension benefit formulas based on compensation includes an assumption of future benefit growth due to salary increases. 
Even though some experts we spoke to cited the historical returns of assets in a typical pension plan portfolio as evidence for the appropriateness of assuming a rate of return of around 8 percent, we found several challenges with using historical data to generate or support an assumed-return assumption. First, reliance on returns during overlapping rolling historical periods has significant statistical limitations. Second, historical returns vary with the time period used in the analysis. Furthermore, future return expectations will depend in part on current economic variables that may not be consistent with any particular historical time period. Third, actual returns for any particular plan would also depend on plan characteristics and cash flows. Fourth, investment returns and plan benefit levels are not independent variables. Also, the potential use of historical returns is only relevant within the context of an assumed-return approach to discounting because a bond-based approach relies on observable market prices for bonds, annuities, or other alternatives. Other experts argue that, on the contrary, investment risk in risky assets such as equities increases in magnitude the longer the time horizon. the 1926 to 2012 period.conclusions about investment risk over 30-year periods solely from this historical record. In analyzing the 1926-2012 historical period, modeled returns vary with particular subsets of years during this historical period and with the assumed allocation of plan assets. For the entirety of this 87-year historical period, we found that a static portfolio allocation of 60 percent in equities and 40 percent in corporate bonds (“60/40 portfolio”), rebalanced annually and with no intervening net cash flows, would have achieved an However, when “trailing returns” 8.9 percent annualized nominal return. 
(i.e., returns for the years leading up to the current year) are used to examine historical capital market performance, the historical time period chosen can greatly affect return expectations. For example, as figure 4 shows, for the same 60/40 portfolio, the annualized nominal returns are 6.5 percent over the past 15 years, 10.9 percent over the past 30 years, 9.3 percent over the past 50 years, 8.7 percent over the past 85 years, and again, 8.9 percent over the past 87 years. Even to the extent that such information can be informative as to future expectations, it is not clear how to assign relative credibility between the more recent and the more distant past. Unless otherwise noted, the returns in this section are all geometric means rather than arithmetic means. See discussion at end of this section. Another challenge to drawing conclusions from historical returns is that any approach may not reflect the cash flow patterns of an actual pension plan. Historical return statistics are often “time-weighted” averages, meaning that they reflect average returns over some time period that are independent of the order in which those historical returns occurred. Time- weighted returns do not vary across plans. Of more relevance to an actual pension plan is its “dollar-weighted” average return, which reflects the plan’s cash flow pattern. For example, consider a 10-year period in which returns average 10 percent annually for the first 5 years and 2 percent annually for the second 5 years, for a 10-year average of 6 percent annually, which is the time-weighted average return. However, for a growing pension plan that has net cash inflows (contributions paid in exceeding benefits paid out) during this period, the returns in the second half of the period may be more important than the returns in the first half of the period, because there may be more money at stake in the second half of the period. 
Consequently, if a growing plan experiences decreasing rates of return, the plan’s dollar-weighted average return may be less than the time-weighted average. To apply this concept to our historical return analysis, we developed two hypothetical pension plans—a growing plan and a maturing plan. Each of these hypothetical plans generated a unique cash flow pattern that broadly reflected its plan characteristics and certain assumptions about the plans. We divided the 87-year period from 1926 to 2012 into three discrete 29-year periods.dollar-weighted returns for each period based on plan assets invested in various investment portfolio allocations using historical return data. For each hypothetical plan, we calculated Our analysis shows that for a 60/40 investment portfolio allocation to stocks and corporate bonds, the dollar-weighted returns of our hypothetical plans can differ significantly from time-weighted returns. The hypothetical growing plan outperformed the time-weighted average in two of the three 29-year historical periods, while the maturing plan underperformed the time-weighted average in all three periods. As figure 5 shows, the hypothetical growing plan would return nearly one percentage point above the time-weighted average return for the period from 1926 to 1954 and almost a quarter of a percentage point above the time-weighted return for the period from 1955 to 1983, but the maturing plan would return nearly 1.25 percent below the time-weighted average return for the period from 1984 to 2012. Many experts cited examples of pension plans for which benefit formulas were increased following periods of robust investment returns. We have also seen examples in more recent years of benefit formulas being decreased in financially distressed plans. These examples indicate that investment returns and benefit levels have not been independent variables. 
If plan benefits have been more flexible in this way on the upside than the downside—an empirical question—it would mean that some historical investment returns effectively went towards net benefit increases rather than supporting previously existing benefit promises. This is another reason for caution in looking to historical returns to support a particular discount rate. In calculating average annual historical returns, either of two types of time-weighted (i.e., plan-independent) average annual returns can be measured—geometric average return or arithmetic average return. For returns that vary from year to year, the geometric average will always be less than an arithmetic average. Figure 6 shows the differences in these two types of average returns for a 60/40 investment portfolio allocated to stocks and corporate bonds calculated for various trailing periods. As a simplified but illustrative example, consider a two-year historical period where the return is positive 100 percent in year one and negative 50 percent in year two. One dollar invested at the start of this period will grow to 2 dollars at the end of year one and then fall back to 1 dollar at the end of year two, for a total net return of zero over the 2-year period. The geometric average return is zero. The arithmetic average return is positive 25 percent (100 percent minus 50 percent, divided by 2). Experts we spoke with disagreed about whether a forward-looking assumed-return assumption should reflect a geometric average expectation or an arithmetic average expectation. Conceptually, a geometric assumption reflects a median expectation (with a 50 percent chance that actual investment performance will be above or below the assumption) while an arithmetic assumption reflects a mean (average) expectation (with a greater than 50 percent chance that actual investment performance will fall short of the assumption). Canadian employees were covered under a registered pension plan in Canada. 
Of those, 74 percent or nearly 4.5 million Canadians were participants in defined benefit plans, with an aggregate market value of assets of about 1.1 trillion Canadian dollars. Defined benefit plans are generally regulated at the provincial level, with some regulated by a separate federal regulator, so policies can vary by province and the federal level. Most defined benefit plans, both public and private, are prefunded. With the exception of Ontario province, which has pension insurance that insures a nominal benefit of up to one thousand Canadian dollars per month, there is no pension insurance program in Canada. been exempted from the solvency assessment for funding purposes, or have been granted temporary solvency funding relief, as they are considered going-concerns. These plans make contributions based on an assumed-return discount rate. However, these plans must also provide a solvency-based liability measure to plan sponsors, plans members, and regulators. The bond-based rates used in the solvency assessment reflect a combination of (i) a formula tied to Canadian government bond rates plus a spread intended to approximate the results that would be obtained from discounting using a full yield curve based on highly rated provincial bonds (for participants who would be assumed to take a lump sum), and (ii) the market prices insurers charge for immediate and deferred annuities (for participants who would be assumed to take an annuity). Funding requirements and discounting approaches (cont.) Although the assumed-return rate used in the going-concern assessment is similar in concept to the approach applied by public sector plans and private sector multiemployer plans in the U.S., the return assumptions differ between the two countries, with assumed returns in Canada typically at 6 percent or lower, reflecting both lower best-estimates of assumed returns and, in some cases, the subtraction of a margin for adverse deviation. 
For financial reporting purposes, private sector plan sponsors in Canada often follow the accounting standards promulgated by the International Accounting Standards Board (IASB). Regulator and regulatory principles At the federal level, the Office of the Superintendent of Financial Institutions (OSFI) regulates and supervises private pension plans in federally regulated areas of employment, such as banking, telecommunications and inter-provincial transportation. Each province has its own regulatory body for pension plans under its jurisdiction. The majority of registered defined benefit pension plans are under the jurisdiction of either the regulator in Ontario, Quebec, or with the OSFI. Regulatory principles are generally similar across all regulators, whether provincial or federal. Generally, provincially-regulated plans are assessed once every three years, while federally-regulated plans and plans registered with the Québec regulator are assessed annually. Canadian regulators have the authority to reject an actuarial report which allows them to implicitly set boundaries for reasonable assumptions. Funding requirements and discounting approaches All plans discount their liabilities using a bond-based approach. In 2013, defined benefit plans accounted for 78 percent of all retirement plans in the Netherlands. Participants in those plans represented nearly 93 percent of all active pension plan participants. With regard to the discount rate, the regulator makes no regulatory distinctions between public, private, or multiemployer defined benefit pension plans. Pension plans are separate legal entities from plan sponsors and there is no pension insurance program. Plan benefits can vary with investment performance and funded status. base funding target of 105 percent using prescribed market interest rates. Plans can attempt to provide inflation indexed benefits by investing in riskier asset portfolios. 
Inflation indexed benefits are granted only to the extent they are supported by actual investment returns. However, base funding targets are also “risk-adjusted,” meaning the base funding target is increased the riskier the plan’s asset allocation, in order to provide a buffer against investment risk. An official told us that a common asset allocation of 50 percent equity, 40 percent bond, and 10 percent real estate would require a plan to be 120 percent funded, based on a 2.5 percent probability of shortfall in a 1-year horizon. For determining minimum required contributions, plans may use either market interest rates, a 10-year moving average of market interest rates, or assumed returns. However, the funding target would still be the risk-adjusted target based on the bond-based liability. For future projections of assets and liabilities, plans may use assumed returns. Plans in recovery are allowed to assume investment returns based on plan asset allocation to project reaching funding targets within the recovery period. Funding requirements and discounting approaches (cont.) Plans with funded ratios less than the 105 percent base funding target must submit a recovery plan to return to full funding within 3 years. Nominal accrued benefits can be reduced for plans that do not achieve the base funding target within this allotted time. Plans with funded ratios less than the risk-adjusted funding target (specific to the plan’s asset allocation) must reduce or eliminate inflation-indexation and submit a recovery plan to return to this funding target within 15 years. For financial reporting purposes, private sector plan sponsors in the Netherlands often follow the accounting standards promulgated by the International Accounting Standards Board (IASB). Regulator and regulatory principles De Nederlandsche Bank (DNB) examines the financial position of pension funds and regulates discount rates. 
The Netherlands Authority for the Financial Markets monitors market conduct relating to pension funds’ obligations to provide information to members. The DNB publishes discount rates on a monthly basis. Pension plans must submit quarterly and annual reports to the DNB. When using assumed returns, the maximum expectations that can be used are regulated. Currently, the maximum acceptable assumed return on the equity portion of the portfolio, as established by an independent commission, is 7 percent (the overall assumed return would also reflect the other asset classes in the portfolio). In 2012, participation in workplace pension plans was at 46 percent. 91 percent of public sector employees with workplace pensions had a defined benefit plan while 26 percent of private sector employees with workplace pensions were in such plans. Overall, the proportion of employees with defined benefit pension plans continued to fall, with 28 percent of employees participating in such plans in 2012, compared with 46 percent in 1997. benefit pension plans must use beyond the requirements that actuarial valuations must use an accrued benefit method, assets must be at market value, and economic and actuarial assumptions must be chosen prudently based on circumstances specific to the plan. Regulations specifically allow plans to use either a bond-based, assumed-return, or a combination of both approaches to determine its discount rate for funding purposes. Under U.K.’s Scheme Specific Funding framework, discount rates used by private plans for funding purposes are plan-specific and may incorporate elements of both the bond-based and assumed-return approaches. employer sponsor to support the plan, known as the “employer covenant,” in plan assumptions. 
A strong sponsor can have some justification for using a somewhat higher discount rate, but the regulator cautions plan trustees to regularly assess sponsor strength because it may be subject to significant variation over relatively short periods of time. Conversely, a weak sponsor may find it prudent to take less risk and use a discount rate that assumes lower returns above safe bond yields. autonomous from the sponsoring employer. Trustees and employers negotiate in setting plan policies, with assumptions, including the discount rate, and methods subject to a risk-based review by the Pensions Regulator. Discount rates for funding purposes frequently differ between the retired and current worker portions of the plan populations. The projected benefits of retired plan participants are frequently discounted largely with reference to U.K. government bond rates, known as “gilts,” and to corporate bond rates. The projected benefits of current workers (and deferred members) are frequently discounted at gilt rates plus 2 to 3 percent for the period up to retirement. A national pension insurance program administered by the Pension Protection Fund (PPF) provides compensation to members of eligible, largely private sector defined benefit pension plans when there is a qualifying insolvency event in relation to the employer, and where there are insufficient assets in the pension plan to cover the PPF level of compensation. Plans under recovery are allowed to assume a higher return, over the recovery period, than the discount rate used to calculate the plan’s liability. For financial reporting purposes, private-sector plan sponsors in the United Kingdom often follow accounting standards promulgated by the local Financial Reporting Council (FRC) or the International Accounting Standards Board (IASB). FRC and IASB standards take an approach to the discount rate that is broadly similar to FASB in the United States. 
The regulation states that “the rates of interest used to discount future payments of benefits must be chosen prudently, taking into account either or both–(i) the yield on assets held by the scheme to fund future benefits and the anticipated future investment returns, and (ii) the market redemption yields on government or other high-quality bonds.” Accounting standards developed by the FRC are contained in Financial Reporting Standards, referred to as FRS. Regulator and regulatory principles The Pensions Regulator is responsible for regulating work-based pension plans, which includes occupational defined benefit and defined contribution plans as well as certain aspects of work-based personal pensions. It has the authority to oversee the administration of these plans and contributions made to them based on its objective to protect the benefits under occupational pension plans of, or in respect to, members of such plans. Private plan sponsors must prepare actuarial valuations on at least a triennial basis (provided they also produce annual updates-–otherwise they have to do annual valuations). Plans in deficit, and which have therefore prepared a recovery plan, must submit details of the recovery plan and valuation to the regulator. Plans in surplus must submit details of their valuation along with their regular plan data updates. The regulator conducts a risk-based assessment to determine if additional scrutiny or actions are necessary. In addition to the contact named above, Kimberly Granger (Assistant Director), David Lin (Analyst-in-Charge), Amy Buck, and Aron Szapiro made key contributions to this report. Also contributing to this report were James Bennett, Susan Bernstein, Kenneth Bombara, David Chrisinger, Robert Dacey, Michael Hoffman, Gene Kuehneman Jr., Kathy Leslie, Ashley McCall, Sheila McCoy, Nhi Nguyen, Susan Offutt, Max Sawicky, Margie Shields, Roger Thomas, Kate van Gelder, and Amber Yancey- Carroll. 
Private Pensions: Timely Action Needed to Address Impending Multiemployer Plan Insolvencies. GAO-13-240. Washington, D.C.: March 28, 2013. State and Local Government Pension Plans: Economic Downturn Spurs Efforts to Address Costs and Sustainability. GAO-12-322. Washington, D.C.: March 2, 2012. State and Local Government Pension Plans: Governance Practices and Long-term Investment Strategies Have Evolved Gradually as Plans Take On Increased Investment Risk. GAO-10-754. Washington, D.C.: August 24, 2010. State and Local Government Retiree Benefits: Current Status of Benefit Structures, Protections, and Fiscal Outlook for Funding Future Costs. GAO-07-1156. Washington, D.C.: September 24, 2007.
Defined benefit plans use interest rates to "discount," or determine the current value of, estimated future benefits. Experts in the United States have disagreed on both the approach that should be taken by plans to determine a discount rate and the appropriate rate to be used. Different discount rates can create large differences in the valuation of a plan's obligations, which in turn can lead various stakeholders to draw different conclusions about a plan's health, the value of a plan's benefits, and the contributions required to fund them. As requested, GAO examined different approaches used to determine the discount rate. This report addresses (1) the significance of differences in approaches used to determine discount rates among public and private plans; (2) purposes for measuring the value of a plan's future benefits and key considerations for determining discount rate policy; and (3) approaches selected countries have taken to choose discount rates. For this review, GAO analyzed provisions in relevant federal laws and regulations, as well as financial reporting and actuarial standards. GAO also reviewed relevant literature and interviewed experts, including experts in Canada, the Netherlands, and the United Kingdom—countries with significant defined benefit systems. In addition, GAO modeled hypothetical pension investment portfolios and cash flows to calculate average investment returns using available historical data. Public and private sector defined benefit pension plans are subject to different rules and guidance regarding discount rates—interest rates used to determine the current value of estimated future benefit payments. These differences can result in significant implications: Sponsors of public sector plans generally base discount rates on a long-term assumed average rate of return on plan assets. This approach results in reported obligations that generally appear lower than those of comparable private sector single-employer plans. 
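The sensitivity described above can be illustrated with a simple present-value calculation. In this sketch, the same hypothetical stream of benefit payments is valued once at a 7.5 percent assumed return (as many U.S. public plans use) and once at a 4 percent bond-based rate; both rates and the cash flows are illustrative assumptions, not figures from this report.

```python
# Hypothetical illustration: identical promised benefits valued at an
# assumed-return rate versus a bond-based rate. Both rates and the
# cash-flow stream are illustrative assumptions.

def pv(payment, years, rate):
    """Present value of a level annual payment made for a number of years."""
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

benefits = 1_000_000   # hypothetical annual benefit payout
horizon = 30           # years of payments

assumed_return_value = pv(benefits, horizon, 0.075)  # public-plan-style rate
bond_rate_value = pv(benefits, horizon, 0.04)        # bond-based rate

print(f"Obligation at 7.5%: {assumed_return_value:,.0f}")
print(f"Obligation at 4.0%: {bond_rate_value:,.0f}")
```

The higher discount rate shrinks each future payment's contribution to the total, so the bond-based measure reports a substantially larger obligation for the same promises.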
Some experts believe this approach may encourage public plans to invest in riskier assets, which can increase the assumed return and thereby lower estimated obligations and plan contributions. Other experts believe this approach helps to maintain more predictable and lower costs. Private sector multiemployer plans generally use an assumed rate of return for funding purposes. Sponsors of private sector single-employer pension plans use bond-based discount rates, which are generally lower than assumed rates of return, for financial reporting of their plans' liabilities. Experts believe this approach may encourage plans to invest in less risky assets, particularly high-quality bonds, to make pension costs less volatile, but it may increase current reported costs. Funding requirements for these plans are tied to historical interest rates, which can reduce funding compared to measures based on more recent interest rates. Experts identified at least five purposes for measuring the value of future benefits where discount rates are used, including determining sponsor contributions, reporting plan liabilities to stakeholders, determining the amount needed to secure benefits, measuring the value of employee benefits, and determining lump sum settlement amounts. They also identified a variety of considerations in setting discount rate policy, including cost, risk, fairness, sustainability, transparency, and comparability. To address trade-offs among these varied and sometimes competing purposes and considerations, many experts saw value in reporting multiple measures of plan obligations, using different discount rates. Some experts also regarded assumed returns used by U.S. public plans as too high under current market conditions. Selected countries we examined reported that they apply a variety of approaches to discounting. Canada requires determination of multiple measures of plan obligations, based on both assumed returns and high-quality bond rates and annuity prices. 
The Netherlands requires that plan obligations be measured based on market interest rates, but allows the use of assumed returns for determining plan contributions or developing recovery plans. In the United Kingdom, discount rates are determined on a plan-specific basis and can include some allowance for assumed returns in excess of high-quality bond rates, depending on plan characteristics and the strength of the sponsor. To the extent that plans in these countries use long-term assumed rates of return, they are generally lower than the 7.5 to 8 percent used by many U.S. public plans under recent market conditions. Experts GAO interviewed in these countries described a greater degree of government oversight, which might help explain their use of lower assumed returns. GAO is not making any recommendations in this report.
The Great Lakes contain over 95 percent of the nation’s surface freshwater supply for the contiguous 48 states and more than 20 percent of the world’s freshwater supply. The lakes provide water for drinking, transportation, power, recreation—such as swimming and fishing—and a host of other uses for more than 30 million people who live in the Great Lakes Basin, roughly 10 percent of the U.S. population and more than 30 percent of the Canadian population. Spanning more than 750 miles from west to east, the basin encompasses nearly all of the state of Michigan and parts of Illinois, Indiana, Minnesota, New York, Ohio, Pennsylvania, Wisconsin, and the Canadian province of Ontario. Parts of the St. Lawrence River, the connecting channel between Lake Ontario and the Atlantic Ocean, flow through the provinces of both Ontario and Quebec. Recognizing their mutual interests in the Great Lakes and other boundary waters, the United States and Great Britain signed the Boundary Waters Treaty in 1909, which provided the United States and Canada with a framework for dealing with future issues along the border. The treaty established the International Joint Commission (IJC), comprising three commissioners each from the United States and from Canada, to help the two governments resolve and prevent disputes concerning their shared boundary waters. Among other things, the IJC also assists the governments in the implementation of the GLWQA, reports every 2 years on implementation progress, and offers nonbinding recommendations to the two governments. Signed in 1972, the GLWQA focused on restoring and enhancing water quality in the lakes and controlling phosphorous as a principal means of dealing with eutrophication in the lakes. Under the terms of the GLWQA, the two governments are required to conduct a comprehensive review of the operation and effectiveness of the agreement every 6 years. 
The next review is scheduled to begin in 2004, and based upon the results, the two countries may decide to amend the agreement. The last review in 1999 found that certain sections of the agreement were outdated and revisions were needed. As amended, the GLWQA has 17 annexes that define in detail the specific programs and activities that the two parties have agreed upon and committed to implement. Most of the annexes specify pollution prevention strategies. Annex 11 of the GLWQA calls for the parties to implement a joint surveillance and monitoring program that, among other things, evaluates water quality trends, identifies emerging problems, and supports the development of remedial action plans for contaminated areas—referred to as areas of concern—and LaMPs for the open waters of each of the five lakes to reduce critical pollutants and to restore and protect beneficial uses. Specifically, Annex 11 calls for the monitoring program to include baseline data collection, sample analysis, and evaluation and quality assurance programs to assess such things as whole lake data including that for open waters and nearshore areas of the lakes as well as fish and wildlife contaminants; inputs from tributaries, point source discharges, atmosphere, and connecting channels; and total pollutant loadings to and from the Great Lakes system. The monitoring program under Annex 11 is to be based on the Great Lakes International Surveillance Plan (GLISP) developed before the current requirements for a surveillance and monitoring system. Developing the surveillance plan, which involved developing a separate plan for each lake, required extensive efforts by U.S. and Canadian officials over several years. However, according to one Canadian official involved in the process, the plans were not completed to the point where they could be implemented. 
The IJC’s Water Quality Board was involved in the management and development of the GLISP, but according to a binational review of the GLWQA in 1999, the IJC’s role was reduced after the GLWQA amendments of 1987 placed more of the responsibility for data analysis and reporting on the state of the Great Lakes environment with the two governments. IJC’s role today is one of assisting in the implementation of the agreement and evaluating the actions of the two governments in meeting the objectives of the GLWQA. After the GLISP effort, the governments reduced support for the surveillance and monitoring called for in the agreement and abandoned the organizational structure created to implement the monitoring plan. Only one of the plan’s initiatives remained in place: the International Atmospheric Deposition Network (IADN), a network of 15 air-monitoring stations located throughout the basin, developed in response to the GLWQA’s requirement for a monitoring program to assess inputs from the atmosphere affecting the Great Lakes. In addition, under a separate annex in the GLWQA (Annex 2), LaMPs are required to include, among other things, a description of the surveillance and monitoring to be used to track the effectiveness of remedial measures and the elimination of critical pollutants. The agreement requires that updates to the LaMPs be submitted to the IJC for review and comment. IJC is considering whether to conduct a review of the LaMPs in 2004. The Water Quality Act of 1987 amended the Clean Water Act to state that EPA should take the lead and work with other federal agencies and state and local authorities to meet the goals in the agreement. It also established GLNPO within EPA to, among other things, coordinate EPA’s actions aimed at improving Great Lakes water quality both at headquarters and at the affected EPA regional offices, and to coordinate EPA’s actions with the actions of other federal agencies. 
As of 2003, GLNPO’s budget was $16 million, including $5 million allocated for program costs, which include 47 full-time EPA staff and 13 non-EPA staff. The remaining costs included about $4.3 million per year for monitoring and monitoring-related reporting, which included about $1.4 million to operate GLNPO’s research vessel, the Lake Guardian. For Canada, Environment Canada (EC) is the lead agency, which works in cooperation with the provinces of Ontario—in which parts of four of the lakes are located—and Quebec, which administers the St. Lawrence River. Coordination between EPA and EC is achieved through the Binational Executive Committee (BEC). Subsequent to the GLWQA amendments of 1987, the BEC was formed to coordinate programs and policies of the two parties to facilitate GLWQA implementation. BEC, co-chaired by EPA and EC, meets twice a year, and its membership includes federal, state, and provincial officials from organizations involved in Great Lakes activities. The BEC does not have authority to direct that projects or programs be implemented but rather makes recommendations regarding certain activities, such as the development of SOLEC. Funding provided for BEC operations is limited, and it relies on funding from other organizations to implement its recommendations. In addition to the BEC, several organizations serve coordinating roles, offer policy perspectives, or financially support restoration activities for the Great Lakes, including the following: Council of Great Lakes Governors, a partnership of governors from the eight Great Lakes states and the Canadian provinces of Ontario and Quebec, encourages and facilitates environmentally responsible economic growth throughout the Great Lakes region. Great Lakes Commission, an organization promoting the orderly, integrated, and comprehensive development, use, and conservation of water and related natural resources of the Great Lakes Basin and the St. 
Lawrence River, includes representatives from the eight Great Lakes states and the Canadian provinces of Ontario and Quebec. Great Lakes United, an international coalition group dedicated to preserving and restoring the Great Lakes-St. Lawrence River ecosystem, promotes effective policy initiatives, carries out education programs, and promotes citizen action and grassroots leadership for Great Lakes environmental activities. The coalition’s member organizations represent environmentalists, conservationists, hunters and anglers, labor unions, communities, and citizens of the United States, Canada, and First Nations and Tribes. United States Policy Committee, a group of senior level representatives from federal, state, and tribal government agencies with environmental protection or natural resource responsibilities in the Great Lakes Basin. The group meets semiannually to coordinate agency actions and commitments associated with the Great Lakes Strategy 2002. Great Lakes Fishery Commission, a binational commission created by the Convention on Great Lakes Fisheries between the United States and Canada in 1955, whose primary objectives are to coordinate fisheries management and research, and to control sea lamprey. The U.S. Department of State and Canada’s Department of Fisheries and Oceans provide funding for the commission. Great Lakes Interagency Task Force, an organization created within EPA by executive order to provide coordination of federal activities and promote regional collaboration within the Great Lakes Basin and among other things, to develop outcome based goals for the Great Lakes system. Assisting the task force is a working group composed of regional federal officials with GLNPO providing resources for both groups. Current EPA monitoring efforts do not provide comprehensive information on the condition of the Great Lakes, and the coordinated joint surveillance and monitoring program called for in the GLWQA has yet to be fully developed. 
Other ongoing monitoring efforts by federal and state agencies yield information that is limited to specific purposes and geographical scope. The joint efforts by the United States and Canada to develop information on Great Lakes indicators through the SOLEC process do not fulfill the monitoring requirements of the GLWQA or adequately assess basin-wide conditions of the lakes. Further, the information reported from SOLEC is of questionable value to officials making restoration decisions because it is not based on their decision-making needs. Additionally, current monitoring efforts of federal and state organizations do not, by design, provide comprehensive information on the overall conditions of the Great Lakes. Most of the information collected under these monitoring activities is designed to meet specific program objectives or is limited to specific geographic areas as opposed to providing an overall assessment of the Great Lakes Basin. Annex 11 of the GLWQA calls for the United States and Canada to develop a joint Great Lakes system-wide surveillance and monitoring program to, among other things, provide information on restoration progress and whether the objectives of the agreement are being achieved. This program, however, has not been fully developed. Instead, officials from GLNPO look upon SOLEC as the process by which indicators will be developed to monitor environmental conditions and measure restoration progress in the Great Lakes. However, as we reported in 2003, the SOLEC process of holding conferences every 2 years to develop Great Lakes indicators and monitor environmental conditions for subsequent reporting on the state of the lakes falls short in several areas. First, indicators assessed through the process do not provide an adequate basis for making an overall assessment of Great Lakes restoration because they rely on limited quantitative data and subjective judgments. 
Second, the SOLEC process is dependent on the voluntary participation of officials from federal and state agencies, academic institutions, and other organizations. As a result, their future commitment to providing information on indicators and monitoring results, along with their future participation, is not assured. Finally, most of the stated objectives for SOLEC do not align with the surveillance and monitoring program envisioned in the GLWQA. The stated objectives of SOLEC are to assess the state of the Great Lakes ecosystem based on accepted indicators, strengthen decision making and management, inform local decision makers of Great Lakes environmental issues, and provide a forum for communication and networking among stakeholders. Other than the objective for assessing the state of the ecosystem based on accepted indicators, the SOLEC objectives do not address issues related to monitoring. GLNPO officials stated that the objective of SOLEC is not to be a monitoring program but rather a reporting venue for conditions in the Great Lakes. However, it is the only ongoing effort to provide an overall assessment of the Great Lakes and, according to 23 federal, state, and other environmental program officials, a surveillance and monitoring system is still needed. For example, a Michigan state official explained that a monitoring system developed with the involvement of all stakeholders and focused on the differences in individual lakes is needed. Appendix III contains the specific comments from the officials we contacted regarding the need for a monitoring system. The monitoring information developed and reported by SOLEC is of questionable value to officials responsible for making restoration decisions for several reasons. First, the information is not based on their decision-making needs. State and federal agency officials stated that the SOLEC process is not connected with the policy-making process. 
For example, a Minnesota Pollution Control Agency official stated that the SOLEC process is oriented toward the needs of researchers and has not connected with the policy-making process for which indicators are needed. A Michigan Department of Environmental Quality official stated that SOLEC provides information based on data from only one or two sampling locations and is not relevant from a state program perspective. Canadian program officials shared these opinions, and one official added that SOLEC data does not address local community questions or program objectives. The comments by program officials are supported by results from a peer review of SOLEC in 2003 by an international panel of experts in large indicator systems. While the panel had many favorable observations about SOLEC, it noted a disconnect between the development of the indicators and their usefulness to policy makers. The peer review stated that, to be effective, indicators must be defined by their actual users, with policy makers and environmental managers involved in the early stages of indicator development. In addition to these observations, in the latest report on the state of the Great Lakes, one of the management challenges discussed is how to better assist managers given the large number of indicators. Specifically, the challenge is to find a method of indexing indicators that better assists managers and leads to more useful, informed decision making. The disconnect between SOLEC and decision makers is further illustrated by the fact that only two of the eight Great Lakes states we contacted were reporting information from local monitoring efforts to support the SOLEC process and that none of the states reported using the monitoring information published by SOLEC to describe conditions of its local water bodies or to measure restoration progress. 
One Minnesota official stated that the former head of the state environmental agency viewed SOLEC information as irrelevant to describe conditions within the state. A GLNPO official working on SOLEC stated that developing effective indicators requires that you first ask what is to be measured, what the best indicator is for this measurement, how much data are needed, who will collect and handle the data for consistency, and how often the measurement will take place. He stated that the need to ask these questions dates back to the early 1980s, but actions to implement this indicator-monitoring program never materialized. Instead, different indicators and monitoring programs are being conducted by various agencies using different sampling methodologies and protocols, and this inconsistent local program information cannot, after the fact, be used to make decisions about system-wide needs or environmental conditions. Second, SOLEC information is based on limited data, which further detracts from its usefulness to decision makers. For example, of the 80 SOLEC indicators reported to describe the Great Lakes Basin in 2003, evaluative data were only available for 43 of them. Often these data were geographically limited and did not address conditions within the entire basin. Additionally, the IJC reported in its 2002 biennial report that sufficient data were not being collected from around the Great Lakes and that the methods of collection, the data collection time frames, the lack of uniform protocols, and the incompatible nature of some data jeopardized their use as indicators. Third, there is no guarantee that SOLEC information will be consistently collected or will be available in the future. As we reported earlier, the SOLEC process involves individuals providing information on a voluntary basis with the indicator data residing in a diverse number of sources with limited control by SOLEC organizers. 
Therefore, there is no assurance that the information will continue to be collected or consistently reported over time. Environmental program officials from federal, state, and provincial agencies stated that the process lacks sufficient and consistent monitoring information to measure environmental restoration progress. The SOLEC peer review group found that the SOLEC process has serious flaws regarding lack of repeatability and transparency. According to GLNPO officials, SOLEC organizers attempted to address the issue of repeatability and transparency in 2003 by issuing a technical report, which provides additional information on data sources. Further, the process is lacking in standard methodology, and SOLEC has yet to establish standard protocols to improve data comparability and reliability. One attempt to measure restoration progress in the basin using SOLEC indicators is presented in EPA’s fiscal year 2005 budget justification. To measure progress, a single quantitative score is derived based on a formula using eight SOLEC indicators. Each indicator is given a score from 1 to 5 based on the professional judgments of individuals providing the indicator information. A score of 1 is considered poor, and 5 is considered good. Totaling the individual indicator scores resulted in a score of 20 based on a total 40-point scale for the Great Lakes. While this is an attempt to measure overall progress, the scoring process is based on a limited number of indicators, and the point scores are based on subjective judgment. Further, the indicators described in the budget justification do not align with the ones used in developing the scores. According to GLNPO officials, this may have resulted from information being submitted at different times during the development of the budget justification. 
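The scoring formula described in the budget justification amounts to a simple sum, which can be sketched as follows. The indicator labels and individual scores below are hypothetical placeholders, chosen only so that the total matches the reported 20 points out of 40.

```python
# Sketch of the composite scoring described above: eight indicators, each
# judged on a 1 (poor) to 5 (good) scale and summed against a 40-point
# maximum. Indicator labels and scores here are hypothetical.
indicator_scores = {
    "indicator_a": 2, "indicator_b": 3, "indicator_c": 2, "indicator_d": 3,
    "indicator_e": 2, "indicator_f": 3, "indicator_g": 2, "indicator_h": 3,
}

max_score = 5 * len(indicator_scores)   # 8 indicators x 5 points = 40
total = sum(indicator_scores.values())  # these hypothetical scores sum to 20
print(f"Composite score: {total} / {max_score}")
```

As the report notes, such a composite rests on few indicators and subjective point assignments, so the single number says little about where conditions are improving or deteriorating.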
In addition to EPA’s efforts, several federal and state agencies conduct monitoring for specific purposes within the open waters, nearshore, and inland areas of the Great Lakes Basin. Monitoring is done in these areas for assessing environmental conditions, as part of ongoing federal or state programs, or for research purposes. The geographic areas monitored are generally limited and only specific conditions are monitored. In a few cases, such as monitoring the air deposition of toxic substances, monitoring of specific conditions covers an extensive area. Monitoring by state organizations is generally limited to federal or state program purposes and conducted in the nearshore or inland areas of the basin, such as identifying impaired waterways that may be tributaries to the lakes under the Clean Water Act. Open lake monitoring is generally done by federal agencies, like GLNPO, for specific research or program purposes and not as part of an overall assessment of the Great Lakes. Four federal agencies and one international commission have ongoing monitoring activities for specific purposes within limited areas of the Great Lakes Basin: EPA, the National Oceanic and Atmospheric Administration (NOAA), the U.S. Geological Survey (USGS), the U.S. Department of the Interior’s Fish and Wildlife Service (FWS), and the Great Lakes Fishery Commission (GLFC). EPA’s GLNPO conducts four monitoring activities. First, GLNPO conducts annual monitoring of open lake water areas for the specific purpose of gathering information on water quality and biological conditions. The information gathered includes toxic pollutant levels of persistent substances, such as phosphorous. These sampling efforts are generally conducted twice each year, once in spring and once in summer, when the Lake Guardian travels to various fixed sampling sites on each of the lakes (see fig. 2). 
Sampling information collected during these assessments is stored in an automated database and is limited to assessing long-term trends in open lake waters. GLNPO officials stated that it takes about 6 to 7 years of data before enough information is available to identify a long-term trend. Second, GLNPO conducts monitoring of sediment contaminants in the nearshore areas of the Great Lakes that involves biological and chemical sampling for benthic (bottom soil) contamination. Data is collected from several sampling stations throughout the lakes to assess, among other things, the presence of small invertebrates in bottom sediments. These data are assessed with open lake data to determine possible adverse impacts on the food web that ultimately pose a human health risk. The scope of sediment monitoring is limited to certain areas, and GLNPO officials stated that they believe their main responsibility is open lake monitoring under the GLWQA and that the Great Lakes states are responsible for inland and tributary monitoring. Third, GLNPO conducts the U.S. portion of IADN for the specific purpose of monitoring toxic substances deposited through the air. Monitored toxic substances include polychlorinated biphenyls (PCB) and trace metals, such as lead and cadmium, that have entered the watershed. While GLNPO is responsible for monitoring in the United States, EC is responsible for Canadian locations. IADN consists of 5 master sampling stations and 10 satellite stations located throughout the basin and is limited to identifying substances deposited through the air. Fourth, GLNPO conducts an annual fish program to monitor concentrations of contaminants in Great Lakes fish. GLNPO has agreements with the Universities of Minnesota, Indiana, and Wisconsin, along with USGS, to collect specific fish species from each lake and grind them into paste to analyze for contaminants that might pose a risk to humans if consumed. 
In addition to GLNPO’s monitoring efforts, EPA’s Office of Research and Development (ORD) funds research activities involving developing indicators and Great Lakes monitoring. There are four divisions within ORD’s National Health Environmental Effects Research Laboratory (NHEERL), and one of these—the Mid-Continent Ecology Division, located in Duluth, Minnesota—conducts research related to fresh water issues involving human health, which includes the Great Lakes. In addition to the research conducted by this office, ORD, through its National Center for Environmental Research, has an ongoing cooperative agreement with the Natural Resources Research Institute (NRRI) of the University of Minnesota, Duluth, to develop environmental indicators specifically for the nearshore areas of the Great Lakes. Once NRRI develops indicators for all of the nearshore areas, the results will be published and submitted to ORD for developing an implementation plan measuring environmental conditions in the Great Lakes, according to NRRI researchers. Two other federal agencies, NOAA and USGS, conduct monitoring for specific purposes within the basin. NOAA’s Great Lakes Environmental Research Laboratory (GLERL) located in Ann Arbor, Michigan, has 15 specific legislative mandates for research or monitoring, according to a GLERL official. Specific research efforts by NOAA are in areas such as water quality, quantity, and levels. NOAA is also developing an experimental Great Lakes Observing network. This network will consist of observation buoys that are linked to satellites, strategically located throughout the five Great Lakes, for collecting specific chemical, physical, and biological information needed for ecosystem forecasting. A NOAA prototype system is deployed in Lake Erie, using three buoy sites, and focused on gathering information on the reemergence of the lake’s dead zone. 
USGS conducts monitoring in the Great Lakes through its Great Lakes Science Center located in Ann Arbor, Michigan. This monitoring is conducted in the open lake areas as part of its fish assessment program. The center operates five research vessels, one for each of the five Great Lakes, to conduct research and monitoring for specific purposes, such as determining the volume and presence of predator fish. USGS also conducts monitoring in the Great Lakes Basin through its National Water Quality Assessment (NAWQA) program to determine the presence of pesticides, nutrients, volatile organic compounds, and other contaminants in streams, groundwater, and aquatic ecosystems. Of the 42 NAWQA studies conducted nationwide, 2 are within the Great Lakes Basin. Finally, FWS and other organizations conduct monitoring to determine the sea lamprey impact on specific fish species, such as the lake trout. This monitoring is funded by the GLFC and, according to several restoration officials, is the most comprehensive, coordinated, and consistently funded monitoring effort ongoing in the Great Lakes. The commission receives about $16 million annually from the United States and Canada to carry out activities to control the sea lamprey population and monitoring activities to measure the success of these control efforts. In addition to monitoring the sea lamprey, each of the Great Lakes states monitors fish populations and their habitats as a major component of the fish monitoring program. The primary objective of the fish monitoring program is to assess changes in fish populations for the purpose of restocking to meet local community and angler objectives. The fish monitoring programs are generally initiated and funded by state agencies, with monitoring results coordinated by the GLFC. In each state, monitoring in the Great Lakes Basin is a mix of activities done for both federal and state requirements. 
Each of the Great Lakes states conducts monitoring for federal program requirements, which include identifying impaired water bodies within the state, including the Great Lakes Basin, and developing Total Maximum Daily Load (TMDL) limits for identified pollutants as required under the Clean Water Act. However, because each state uses its own criteria and time schedule for identifying impaired water bodies, the process is not done consistently throughout the United States or the Great Lakes Basin. Another example of a federal program involving state monitoring is the Beach Monitoring Program under the Beach Act. This program involves sampling only the nearshore waters of state beaches for the presence of bacteria to determine if the water is safe for swimming. In addition, states conduct monitoring in the Great Lakes Basin for state requirements. For example, in Ohio, two state agencies (the Ohio Environmental Protection Agency and the Ohio Department of Natural Resources) conduct routine monitoring in Lake Erie’s nearshore and inland areas for several state and federal programs. These agencies conduct monitoring to assess water quality in the state’s streams and rivers, ambient groundwater quality, tributary quality, and changes in fish and wildlife populations. Appendix IV contains information on nine programs involving monitoring activities in Ohio. In addition to federal program monitoring, some states fund and conduct their own monitoring activities in the Great Lakes Basin. The extent to which states conduct their own monitoring activities beyond federal requirements is closely tied to available state funding for monitoring. State organizations generally conduct monitoring activities in the nearshore or inland areas. For example, Michigan has a state program to address water quality issues with funding specifically devoted to monitoring. 
In 1998, voters approved a special state bond issue authority, the Clean Michigan Initiative, which provided funding to the Michigan Department of Environmental Quality for surface water quality monitoring. Supported by initial Clean Michigan Initiative funding in 2000, the Michigan program funds monitoring activities in the state’s rivers, streams, tributaries, and Great Lakes water bodies. Among other things, monitoring is conducted to assess contaminant levels in fish and other wildlife, as well as in water and sediment. Multiple restoration goals have been proposed by EPA and other organizations that could be a basis for monitoring restoration progress. EPA developed basin-wide goals in its Great Lakes Strategy 2002 and goals for individual lakes in LaMPs. Other organizations concerned with Great Lakes restoration, such as the Council of Great Lakes Governors, have also identified basin-wide restoration goals and priorities. Monitoring progress toward achieving goals is generally limited to tracking specific action items contained in the Great Lakes Strategy 2002; other proposed goals do not have associated monitoring activities or monitoring plans to determine progress. Additional specifics for many of the proposed goals and monitoring plans may be needed if the goals are to be used in determining whether progress is being achieved. EPA’s efforts in developing the Great Lakes Strategy 2002 and LaMPs have resulted in proposed goals for the overall basin and for individual lakes. The USPC, a group of mainly federal and state officials from the Great Lakes states coordinated by GLNPO, developed and published the Great Lakes Strategy 2002, which sets forth 4 overarching goals, 33 subgoals, 23 objectives, and 103 key actions for the Great Lakes. 
For example, one goal is “to protect human health and restore and maintain stable, diverse, and self-sustaining populations of plants, fish and other aquatic life, and wildlife in the Great Lakes ecosystem.” A key action under this goal is to continue human health studies under the Great Lakes Human Health Effects Research Program and make the results available to environmental managers and the public. To monitor progress in achieving the strategy’s goals, GLNPO is tracking the implementation status of the actions in the strategy and, as of May 2003, seven actions were reported by GLNPO as completed. In addition, EPA has participated in developing LaMPs, which are the primary means for coordinating and planning ecosystem projects for each lake, according to the Great Lakes Strategy 2002. The GLWQA requires that LaMPs be developed for each lake, with the United States and Canada responsible for preparing the plans in consultation with the relevant state and provincial governments. A GLNPO manager for each LaMP coordinates EPA’s efforts to develop the plans. In developing LaMPs, the parties have agreed that they will report progress every 2 years and that updates to each LaMP will be submitted to the IJC for review and comment. LaMPs have been prepared for four of the five Great Lakes (Erie, Michigan, Ontario, and Superior), and they present overviews of lake conditions and general restoration needs. For example, the Lake Michigan LaMP sets forth one overall goal (to restore and protect the integrity of the Lake Michigan ecosystem through collaborative partnerships) and 11 subgoals. These subgoals are stated as general questions, such as “can we drink the water” or “can we swim in the water.” The LaMPs also generally discuss indicators and monitoring, but these are often not linked to goals or to how progress toward goals will be measured. For example, the Lake Erie LaMP states that a working group discussed indicators, but none were selected. 
While each LaMP describes monitoring efforts to some extent, the plans usually do not define how progress toward goals will be tracked. An exception is a section of the Lake Superior LaMP addressing critical pollutants. See appendix V for goals and monitoring information contained in the LaMPs for four of the Great Lakes. Three organizations (the Council of Great Lakes Governors, the Great Lakes Commission, and Great Lakes United) have developed goals for the Great Lakes Basin independently of EPA. The goals are presented in general terms, such as stopping the spread of invasive species or cleaning up contaminated areas. Several of the organizations’ goals are similar, representing a relative consensus among the organizations. While the goals are useful in communicating what specific issues the groups believe are important to the Great Lakes, additional specifics, such as which invasive species are to be controlled or by what time frame, may be needed to determine whether the goals are being achieved. It should be noted that these organizations do not have the resources of federal or state agencies to address proposed goals and priorities and must rely on others to take action. For some of the priorities, specific federal agencies are identified to take actions. The goals or priorities developed by the three organizations are summarized in appendix VI. One recent set of priorities was prepared by the Great Lakes Governors’ Priorities Task Force, which consisted of governors’ representatives from the eight Great Lakes states. After deliberating for approximately 2 years, this group reached consensus in 2003 on nine priorities to guide Great Lakes restoration and protection efforts. These priorities addressed a range of issues, including protecting human health and enhancing information collection and standardization. 
The priorities are defined in general terms, such as “control pollution from diffuse sources into water, land, and air.” Details on the types and causes of pollution to be addressed and the desired outcomes are not provided. After the priorities were reported, public sessions were held in the Great Lakes states to obtain reaction and input on the governors’ goals. These sessions, however, are not expected to result in further refinement of the priorities. Similarly, the Great Lakes Commission, which includes representatives from the eight Great Lakes states and the Canadian provinces of Ontario and Quebec, established seven priorities for the Great Lakes, such as cleaning up toxic hot spots, controlling nonpoint source pollution, and preventing the introduction or limiting the spread of invasive species. Its report outlining the seven major priorities identifies an overall goal for each priority. Each of the goals contains recommendations for actions, and many goals are stated in general terms with funding requests for a particular federal agency or organization for implementation. For example, one action item under the goal for cleaning up toxic hot spots recommends “ensure that polluters responsible for sediment contamination pay their fair share ($5 million annually to the U.S. Fish and Wildlife Service) for Great Lakes projects.” While the Great Lakes Commission lists its seven priorities, it is unclear what specific actions are necessary to achieve them. Great Lakes United, a binational coalition that promotes citizen action and grassroots leadership for Great Lakes environmental activities, published a citizen’s action agenda for the Great Lakes in 2003. This document and its summary version describe what members consider to be the seven major challenges to be addressed in the Great Lakes, such as toxic cleanup, protecting and restoring species, and sustaining and restoring water flows. 
Under each challenge, the agenda recommends several action items for restoring the Great Lakes Basin. Some of these action items have established time frames. Coordinating the establishment of measurable goals and developing a monitoring system for tracking progress in the Great Lakes are difficult tasks that face significant challenges. First, and most important, no single organizational entity has exercised leadership responsibility for coordinating the establishment of specific goals and a monitoring system. As we reported previously, under the Clean Water Act, GLNPO has coordination authority over many Great Lakes activities but has not fully exercised it. Further, it is uncertain whether the Executive Order issued in May 2004, creating a Great Lakes Interagency Task Force, will provide the needed stability in leadership. Second, the restoration goal-setting and monitoring efforts ongoing among numerous governmental and nongovernmental organizations in the United States and Canada will create a challenge for coordination within and between the two countries. Specific obstacles include coordinating the goal-setting efforts of the various Great Lakes organizations and accounting for ongoing agreements within Canada when developing the joint monitoring system called for in the GLWQA. Third, coordinating information derived from the various monitoring activities of the numerous groups involved in the Great Lakes is a significant challenge. The lack of a centralized repository of monitoring information makes it difficult to assess restoration progress. Fourth, because each of the five Great Lakes has unique environmental conditions, it will be difficult to establish measurable goals that reflect these differences and yet provide consistent basin-wide information. One restoration effort, the Chesapeake Bay Program, has developed measurable goals and a defined organizational structure that may offer valuable lessons for restoration efforts in the Great Lakes. 
Organizational leadership for setting goals and developing a monitoring system has yet to be realized for the Great Lakes. Several attempts at providing organizational leadership have not resulted in a stable structure for leading Great Lakes restoration efforts. We previously reported that, within the Great Lakes, several entities are involved in coordinating and planning, which has resulted in confusion among federal and state officials as to which entity bears ultimate responsibility. We further reported that the responsibility for leading U.S. Great Lakes efforts rests with GLNPO and that it is not fully exercising its authority under the Clean Water Act for coordinating Great Lakes restoration programs. We recommended that GLNPO fulfill its coordinating responsibilities and develop an overarching Great Lakes restoration strategy. EPA promised to provide a detailed response to our recommendations but has not yet done so. However, in 2003 an EPA official stated in congressional testimony that the Clean Water Act does require EPA, and more specifically GLNPO, to serve as the lead entity for coordinating the protection and restoration of the Great Lakes system. The same official stated in 2004 congressional testimony that our recommendations are answered by the Executive Order and again promised a detailed response to these recommendations. However, the Executive Order does not address our recommendations. As a result of the Executive Order issued in May 2004, which created a Great Lakes Interagency Task Force within EPA, it is unclear how GLNPO’s leadership role and coordination responsibilities will be exercised in the future. Task force members include representatives from EPA, eight other federal agencies with Great Lakes program responsibilities, and the Council on Environmental Quality. Under the Executive Order, one of the purposes of the task force is to coordinate government action associated with the Great Lakes. 
The EPA Administrator chairs the task force, which is also charged with developing outcome-based goals and collaborating with Canada and its provinces, and with other binational bodies involved in the Great Lakes region, regarding policies, strategies, projects, and priorities for the Great Lakes. The head of GLNPO, the Great Lakes National Program Manager, chairs the working group, and GLNPO staff are to assist both the task force and the working group in performing their duties. While the Executive Order addresses GLNPO’s role with respect to the task force and working group, it does not address GLNPO’s existing responsibilities under the Clean Water Act for coordinating EPA’s activities with other federal agencies and state and local authorities to meet GLWQA goals. The coordination role for the task force under the Executive Order is very similar to GLNPO’s coordination role under the Clean Water Act. However, because the Executive Order does not affect the statutory obligations of federal agencies, GLNPO is still under a statutory obligation to fulfill its coordination role. Moreover, under the Clean Water Act, GLNPO is required not only to develop but also to implement specific action plans to carry out its responsibilities under the GLWQA. However, according to the Executive Order, GLNPO will participate on a Great Lakes Regional Working Group that is responsible for coordinating and making recommendations for implementing the task force policies and strategies, but it will be the task force that actually implements the recommendations. The existing coordination activities of the USPC are also uncertain in light of the Executive Order. The USPC is focused on coordinating federal, state, and tribal government activities related to fulfilling the GLWQA, and it developed the Great Lakes Strategy 2002 to set restoration goals and actions. 
Membership on the USPC is similar to that of the newly formed working group in that it includes regional federal officials, and the GLNPO program manager chairs both groups and also serves as the Acting Assistant Administrator for EPA’s Office of Enforcement and Compliance Assurance. According to the Director of GLNPO, as of July 2004, when the last USPC semiannual meeting was held, there were no plans to change the role of the USPC. Therefore, the USPC, the task force working group, and GLNPO all seemingly are engaged in coordinating federal regional activities in the Great Lakes Basin. Coordinating Great Lakes research is another responsibility given to the task force under the Executive Order, but other organizations have research responsibilities by statute. Specifically, NOAA’s Great Lakes Research Office, acting through GLERL and other entities, is responsible under the Clean Water Act for conducting Great Lakes research and monitoring activities and for annually reporting to the Congress on issues for which Great Lakes research is needed. Each year GLERL and GLNPO are to prepare a joint research plan and to provide a health research report to the Congress. Thus far, GLERL and GLNPO have not prepared these plans or reported to the Congress because funds were not requested or provided for these coordination and reporting activities, according to agency officials. The GLERL Director stated that GLERL has about 15 specific legislative mandates involving Great Lakes research. Coordinating and prioritizing research is also an activity of the IJC’s binational Council of Great Lakes Research Managers. This council, established in 1984, proposes priority research areas for the Great Lakes, and some of its proposals are priorities for GLERL, in part because the council is currently co-chaired by the GLERL Director. 
Future councils, however, may not be co-chaired by the GLERL Director, and priority research areas may not be addressed because research managers are not bound to follow council priorities. Finally, the creation of the task force and working group by the Executive Order also raises questions about the permanency of this organizational structure for addressing the long-term restoration needs of the Great Lakes. Executive orders, such as the one creating the task force, stay in effect despite changes in administrations, but they may be amended or rescinded by a subsequent President. Moreover, the Executive Order cannot be enforced in court, unlike statutory provisions, which can often be judicially enforced. Therefore, the task force may prove to be a temporary rather than a permanent attempt at coordinating and developing goals for the Great Lakes. Legislation was proposed in 2004 to enact the provisions of the Executive Order into law, but this legislation remains pending in the Congress. Many organizations participating in the restoration of the Great Lakes have independently developed goals for the Great Lakes Basin. However, these organizations have tended to develop goals independently of EPA and one another, resulting in duplicative efforts and a lack of prioritization of goals. We previously reported that the numerous restoration strategies containing goals developed by various organizations did not provide an overarching approach that can be used as a blueprint to guide overall restoration activities. The situation remains the same today, with several organizations developing strategies and goals without clearly defined leadership responsibilities to bring together or coordinate the various efforts. In some cases, the goals developed are very similar to each other. For example, the Council of Great Lakes Governors and the Great Lakes Commission both have similar goals relating to cleaning up areas of concern and stopping the spread of invasive species. 
Yet consensus has not been reached among the various organizations as to specifically how such goals should be measured. The leadership needed to coordinate goal-setting efforts has not yet materialized; no one organization or group of organizations is recognized as the leader. For example, at a Senate hearing on Great Lakes restoration efforts in 2003, the hearing chairman asked a panel of federal agency officials, including the Great Lakes National Program Manager, if there was an orchestra leader for the efforts in the Great Lakes, and none of the panel members volunteered a response. Similarly, during an IJC conference session in 2003, where the leadership of the various Great Lakes organizations was addressed, the Great Lakes National Program Manager stated that, because of the number of groups involved in the Great Lakes, there is a need to find a way to work together toward goals; however, he was reluctant to lead this effort. The recently created Great Lakes Interagency Task Force was charged with establishing a process for collaboration among task force members to, among other things, develop outcome-based goals for the Great Lakes system. The desired outcomes are conditions such as cleaner water or sustainable fisheries. Federal and state program officials acknowledge that limited coordination of monitoring activities now exists and that there is no single organization in place to direct the coordination of monitoring efforts. One attempt to coordinate monitoring involving research vessels on the Great Lakes was begun in 1997 by the IJC’s Council of Great Lakes Research Managers. The impetus for this effort was that over 60 research vessels were operating independently in the basin, without coordination or collaboration and with limited monitoring funds. 
Since that time, the IJC has been developing an inventory of Great Lakes research vessels, which was placed on a Web site designed to identify the ships, scientific equipment, general research schedules, and points of contact to aid in coordinating operations and sharing resources. The extent to which this inventory has facilitated coordination has yet to be determined; however, coordination has begun through the sharing of information on research vessels, according to an IJC official. Further, existing agreements on restoration goals and monitoring between Canada and its provincial governments of Ontario and Quebec will need to be considered in developing basin-wide goals if a joint U.S.-Canada monitoring system is to be developed as required under the GLWQA. Four of the five Great Lakes are shared by the United States and Canada and share many of the same environmental problems. The restoration goals and monitoring efforts developed in Canada to address these problems are important for a coordinated effort by the two countries. One set of goals to consider is contained in an agreement reached in 2002 between the governments of Canada and Ontario on overall goals and actions to be taken to protect, restore, and conserve the Great Lakes Basin ecosystem. This agreement, the Canada-Ontario agreement, contains four annexes that address areas of concern, harmful pollutants, lakewide management, and monitoring and information management. Each annex contains overall goals to be achieved over a 5-year period and results that the parties have agreed to achieve together or individually. For example, one result under the lakewide management annex is “reductions in the release of harmful pollutants on a lake-by-lake basis.” Another agreement containing goals that should be considered involves restoring the St. Lawrence River. This agreement—the St. 
Lawrence Action Plan—was reached in 1988 between officials of Canada and the province of Quebec and was a 5-year plan to address major problems of industrial pollution threatening natural habitats. While the St. Lawrence River is not geographically part of the Great Lakes Basin, it is the connecting channel from Lake Ontario to the Atlantic Ocean, and Quebec representatives participate in several of the organizations and activities involving the Great Lakes, such as the BEC, SOLEC, and the Council of Great Lakes Research Managers. Since the first 5-year plan in 1988, subsequent 5-year agreements, referred to as phases, have focused on specific environmental priorities. The most recent agreement, Phase III, also referred to as St. Lawrence Vision 2000, has three major objectives: protecting ecosystem and human health, involving riverside communities in helping to make the St. Lawrence more accessible, and recovering its former uses. An updated agreement, Phase IV, was being developed as of July 2004. In addition to agreements, Canada and the two provinces have ongoing monitoring activities that provide information on environmental conditions in the Great Lakes Basin, and these will need to be considered in developing a joint basin-wide monitoring system. For example, the Ontario Ministry of the Environment conducts a Great Lakes nearshore monitoring and assessment program that contains five monitoring efforts. One of these involves sampling water quality at 66 sites within the basin on a rotating basis to determine how water quality is changing over time. Another component of the Ontario program is monitoring of Great Lakes tributaries for toxic contaminants. This monitoring is done to identify the tributaries to each lake having significant concentrations of persistent bioaccumulative substances, such as pesticides. 
In addition to the monitoring conducted by the province of Ontario, monitoring and reporting are done by Conservation Authorities within the province. The Authorities are 36 local community-based organizations, established by provincial legislation, that manage watersheds throughout Ontario. The Authorities’ monitoring efforts are concentrated on tributary, stream, and inland areas of the Great Lakes Basin, and reports are issued to the public on the state of the watersheds. For the St. Lawrence River in Quebec, a monitoring component of the St. Lawrence Vision 2000 plan was developed by two Canadian federal agencies, the Quebec Ministry of Environment, and a nongovernmental organization to provide information on environmental conditions in the St. Lawrence River Basin. The program began in 2003, with the four parties agreeing to conduct 21 monitoring activities through 2010 and to analyze and report on the results. The 21 activities are ongoing activities of governmental organizations and were selected based on the descriptive information they provide on St. Lawrence conditions. To better integrate the ongoing monitoring activities of the different organizations, the parties agreed to improve the spatial and temporal coverage of certain indicators, develop new indicators, and strive for better collaboration. In addition to efforts conducted by the provinces and others, EC conducts monitoring in open lake waters, connecting channels, and tributaries of the Great Lakes Basin. Open lake monitoring is conducted at various sites to ensure compliance with GLWQA water quality objectives, evaluate trends, and identify emerging issues. 
The monitoring focuses on two lakes each year (with the exception of Lake Michigan, which is the responsibility of the United States) to gather information on contaminants, nutrients, metals, and physical parameters at specific locations in each lake. Other monitoring programs involve monitoring of pesticides and emerging chemicals in selected watersheds and embayments and water quality monitoring of the Niagara, St. Lawrence, St. Clair, and Detroit Rivers. For example, the monitoring of the Niagara River is done as part of an agreement reached among EC, EPA, the Ontario Ministry of Environment, and the New York Department of Environmental Conservation to reduce toxic chemical pollutants in the Niagara River. Monitoring is done at an upstream location near Lake Erie and a downstream location near Lake Ontario. There is currently no centralized repository of information on monitoring activities. As a result, it is difficult to coordinate existing data and determine what additional information is needed to establish baseline conditions and assess progress toward restoration goals. Two related efforts are, however, under way to develop inventories of the existing monitoring programs within the Great Lakes. One effort, led by the Great Lakes Commission and funded by grants from the Joyce Foundation and GLNPO, is developing a comprehensive inventory of environmental monitoring programs in the Great Lakes Basin. Information is being gathered from existing sources and through surveys and interviews with program officials. The information will be placed in a database, analyzed to identify monitoring gaps in existing programs, and used by the BEC to develop a monitoring coordination framework, according to Great Lakes Commission officials. This project, however, was funded on a one-time basis and does not include plans for updating the inventory of monitoring data. 
A related effort, being conducted by GLNPO and EC under the direction of the BEC, is focused on developing an Internet-based inventory of existing monitoring systems. The inventory will not contain monitoring data but rather a database of monitoring sources, referred to as metadata by GLNPO officials. The inventory of existing monitoring sources will rely on common data fields and terminology to standardize information, and GLNPO plans to manage the database. To create the database, the BEC will request that the various federal and state agencies and other organizations conducting monitoring activities input information into the database, according to GLNPO officials. Ultimate responsibility for data completeness and quality rests with the BEC. However, it is unclear how this will be accomplished, since the BEC has limited resources to carry out this responsibility. Further, because the input and annual update of monitoring information are voluntary and there is no independent verification of the data, it is unclear how a complete and accurate inventory can be assured. GLNPO officials stated that, as of July 2004, the Web-based system had been developed and they were waiting for organizations to enter information on their monitoring systems into the database. While basin-wide goals are useful, existing goal-setting efforts are complicated by the unique characteristics of each lake. The physical magnitude of the basin is often recognized as a daunting challenge for setting measurable restoration goals. Although the Great Lakes are connected through rivers and channels, they are not one contiguous water body but rather distinct lakes with unique environmental conditions. The Great Lakes Basin spans 750 miles and faces multiple environmental challenges. This presents challenges to setting goals and developing a monitoring system that can be used to describe restoration progress across the basin and also capture the uniqueness of each lake. 
The distinct physical characteristics of the lakes are illustrated by the differences between Lakes Superior and Erie. (See fig. 3.) Lake Superior is a larger, deeper lake with a relatively sparse human population within its watershed. Most of the shoreline of Lake Superior is forested and not host to the extensive urban development found along Lake Erie’s shores. For Lake Superior, the overarching concern is to preserve current conditions and keep pollutants and invasive species from entering the lake. Lake Erie has other unique environmental problems, the most recent being the reemergence of a dead zone in the central basin of the lake that is devoid of oxygen and cannot support aquatic life. Recently, phosphorus levels in the lake have exceeded acceptable limits for unknown reasons. Research efforts are now focused on determining the cause of the rise in phosphorus levels, which cause harmful algae blooms. Because Lake Erie is the shallowest of the Great Lakes and is subject to urban pressures, it is sometimes cited as the lake that first develops environmental problems within the Great Lakes Basin. The differences between the Great Lakes pose a challenge to setting basin-wide goals. While goals are needed to determine basin-wide progress, goals for each lake are also needed to address specific problems or public concerns for that lake. For Lake Superior, a major concern is stopping pollutants from entering the lake, which is addressed through a program that established a goal of zero discharge for point source pollutants. For Lake Erie, goals developed by the Lake Erie Commission address other problems, such as remediating contaminated sediments in Lake Erie’s harbors and tributaries. The future challenge will be how to build on the existing goal-setting efforts for each lake in developing measurable goals for the Great Lakes Basin as a whole. 
The Chesapeake Bay Program, a restoration effort led by EPA, has demonstrated that quantifiable and prioritized goals with definitive time frames can be developed for measuring restoration progress. While the Great Lakes have unique challenges, such as coordination with Canada, the bay program also provides an example of how an organizational structure can be created to successfully coordinate goal setting. Unlike the restoration goals prepared for the Great Lakes, the Chesapeake Bay Program has specific, measurable goals with definitive time frames that are linked to indicators and a monitoring and modeling program. Overall goals developed for the program are stated in a general fashion similar to many developed for the Great Lakes and are to (1) address water quality and clarity problems caused by excess nutrients, sediments, and toxics; (2) maintain and restore living resources of the bay, such as controlling exotic species and protecting crabs and oysters; (3) protect and restore vital habitats, such as wetlands and submerged aquatic vegetation; (4) make sound land use decisions, such as land conservation; and (5) engage the community through education and outreach. However, the general goals are further defined as specific commitments that are used to measure program progress. As of December 2003, the program was endorsing over 40 measurable environmental commitments for the watershed. The program has prioritized the commitments included in the most recent bay agreement, Chesapeake 2000, by identifying the 10 most important “keystone commitments” for the bay, allowing it to focus its efforts on critical needs and make the best use of resources and capabilities. For example, one keystone commitment for the overall goal of maintaining and restoring living resources in the bay is that, by 2010, at a minimum, a tenfold increase in native oysters should be achieved in the Chesapeake Bay, using a 1994 baseline. 
In addition, this commitment involves developing appropriate research and management strategies for attaining this increase. According to program officials, defining measurable goals and commitments up front is the key to the success of the Chesapeake Bay Program. If the goals are developed first, then they can be linked to the appropriate measurement and tracking activities and indicators to evaluate progress. Once program officials analyze the data collected from monitoring, modeling, and tracking programs to determine progress, they can decide on the appropriate actions to take to maintain or improve conditions. Officials from organizations involved in the restoration and protection of the bay agree that defining goals up front is important to the restoration effort and that the Chesapeake Bay Program has done a good job in this regard. For example, an official from the Chesapeake Bay Foundation, the largest conservation organization dedicated to saving the Chesapeake Bay watershed, stated that the Chesapeake Bay Program does a good job in establishing clearly defined goals and commitments and linking them to indicators and monitoring to reflect the current overall conditions of the bay. In addition, State of Maryland officials from the Department of Environment and Department of Natural Resources stated that the goals and commitments of the program mirror those established by the state and that they are adequately linked to the monitoring and indicators used by the program. Recently, however, concerns were raised regarding how accurately the program’s computer model estimates projected reductions in nutrients. According to one program official, the controversy highlights the need for reaching consensus on appropriate measurement approaches and the need for peer review of all monitoring and modeling protocols. 
Finally, the program is an example of how a permanent organizational structure was established to set measurable goals and to coordinate restoration efforts. The organizational structure of the Chesapeake Bay Program is founded on an agreement among three states, the District of Columbia, and EPA, with an executive council leading the program. This council consists of three governors, the Mayor of the District of Columbia, EPA’s Administrator, and a representative from the Chesapeake Bay Commission. The council establishes measurable program goals and commitments in such areas as water clarity after receiving input from several program committees and subcommittees. Restoration and monitoring efforts are coordinated by a number of written agreements between federal agencies and other organizations to focus resources in certain areas, such as an agreement between the FWS and EPA to provide technical assistance for various activities, including habitat classification and mapping, resource assessments, and field surveys and inventories.

A clearly defined organizational leadership structure is needed for restoring the Great Lakes and in particular for developing measurable basin-wide goals and a monitoring system as called for in the GLWQA and the Clean Water Act. Several organizations have offered basin-wide goals over the years, but none is guiding restoration efforts, and measurable progress remains elusive. The required monitoring system has not been fully developed, and the vision of having information to guide restoration efforts remains unfulfilled. While the recent Executive Order creates a Great Lakes Interagency Task Force within EPA to develop measurable goals and coordinate federal activities, it is uncertain whether this task force will provide the definitive, stable leadership needed over time because it may be readily changed by future executive orders. 
Additionally, while GLNPO has existing statutory responsibility for coordinating Great Lakes activities, it is unclear how its responsibilities and those of other organizations fit with the coordination activities of the new task force. EPA is now taking steps to implement the Executive Order; however, it is unclear whether this fulfills its responsibilities under the Clean Water Act. Absent a clearly defined leadership structure, setting measurable goals and monitoring progress in the Great Lakes is unlikely to be accomplished, and duplicative responsibilities for coordination, goal setting, and monitoring may be inevitable. EPA has recently demonstrated leadership on monitoring by developing an inventory of all monitoring activities in the Great Lakes. While we believe this is a worthwhile effort, controls should be in place to ensure the completeness and accuracy of the data in the inventory.

In light of the uncertainty regarding how GLNPO’s responsibilities fit with the newly created Great Lakes Interagency Task Force, and to help ensure the coordination of U.S. efforts in developing basin-wide measurable restoration goals for the Great Lakes, as well as the development of a joint monitoring system based on those goals, the Congress may want to consider

- clarifying whether GLNPO or the task force should lead the U.S. efforts in restoring the Great Lakes;
- requiring this entity, in consultation with Canada, the governors of the Great Lakes states, federal agencies, and other organizations, to develop and prioritize specific measurable restoration goals for the Great Lakes Basin within a certain time frame; and
- requiring the entity to develop and implement monitoring activities to measure progress toward attaining goals and to identify actions that could assist in achieving these goals. 
If the Congress decides that the task force should have the leadership role, it may also want to consider whether additional Great Lakes Basin stakeholders, such as representatives of states and other organizations, should be task force members.

To facilitate the coordination of monitoring activities by the various federal, state, and other organizations within the Great Lakes Basin, we recommend that the EPA Administrator direct GLNPO to develop adequate controls for the inventory of monitoring systems to ensure that inventory data are accurate, current, and complete so as to facilitate users’ efforts to coordinate monitoring activities.

GAO provided EPA with a draft of this report for its review and comment. The agency generally agreed with the findings and recommendations in the report. EPA stated that the inventory of monitoring activities is a critical component for monitoring and reporting efforts and that adequate controls are needed to ensure that data are accurate, current, and complete in order to facilitate users’ efforts to coordinate monitoring activities. Accordingly, EPA stated it has begun taking steps to develop these controls. Specifically, GLNPO will lead the U.S. efforts to track entries into the inventory database to ensure that data from all agencies are included. GLNPO will also request annual verification and updating by organizations of their information to ensure that the database is accurate and current. If effectively implemented, these steps should help ensure the accuracy and usefulness of the inventory for coordination purposes. Regarding our suggestion that the Congress consider clarifying leadership responsibilities, EPA stated that it believes the responsibilities for organizational leadership in the Great Lakes for both GLNPO and the Great Lakes Interagency Task Force are clearly stated in the Clean Water Act and the Executive Order, respectively. 
While EPA describes the overall structure and responsibilities of the task force and GLNPO to support its position, it does not address our concern that similar coordination responsibilities are assigned to different organizations under the Executive Order and the Clean Water Act. EPA states that the Executive Order appoints the Great Lakes National Program Manager as chair of the Great Lakes Regional Working Group and that this will enhance GLNPO’s ability to meet its statutory obligation to coordinate federal restoration activities. However, this does not address our point that the Clean Water Act assigns GLNPO the responsibility of implementing specific action plans to carry out U.S. responsibilities under the act, while under the Executive Order, it is the task force, not GLNPO, that will implement recommendations of the working group. Further, EPA did not address our concern that the task force does not provide the definitive, stable leadership that is needed over time, given that its responsibilities may be changed by future executive orders. The full text of EPA’s comments is included in appendix VII.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees; the EPA Administrator; various other federal departments and agencies; and the International Joint Commission. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-3841. Key contributors to this report are listed in appendix VIII. 
To determine the extent to which information derived from monitoring is useful for assessing overall conditions in the Great Lakes Basin, we gathered and analyzed information on efforts to develop indicators through the State of the Lakes Ecosystem Conferences (SOLEC), a jointly sponsored effort by EPA’s Great Lakes National Program Office (GLNPO) and Environment Canada (EC). We also gathered and analyzed information on monitoring activities obtained from state agency officials in each of the eight Great Lakes states (Illinois, Indiana, Ohio, Michigan, Minnesota, New York, Pennsylvania, and Wisconsin); eight federal agencies; two Canadian federal agencies; and provincial agencies in Ontario and Quebec, Canada. For each agency, we obtained information about ongoing monitoring efforts, including the purpose of the monitoring efforts, the type of information collected during monitoring, how the information was analyzed and used, and how monitoring was coordinated with other federal or state agencies. A detailed listing of the federal, state, and Canadian agencies that provided monitoring information is included as appendix II. We reviewed the monitoring requirements contained in the Great Lakes Water Quality Agreement (GLWQA) and compared these requirements with the ongoing monitoring activities.

To identify existing restoration goals and whether monitoring is done to track goal progress, we obtained and analyzed Great Lakes restoration goals prepared by several organizations, including the Council of Great Lakes Governors, Great Lakes Commission, Great Lakes United, and U.S. Policy Committee. We analyzed the goals contained in the Great Lakes Strategy 2002 and reviewed information on monitoring the progress in achieving the goals. We further reviewed the restoration goals and monitoring efforts contained in Lakewide Management Plans (LaMP) prepared for four of the five Great Lakes. 
We interviewed LaMP managers to determine the process followed for setting goals and related monitoring activities. We also interviewed officials conducting the monitoring for the Great Lakes Strategy 2002 and reviewed monitoring progress reports.

To identify major challenges to setting restoration goals and developing a monitoring system for the Great Lakes, we identified barriers to accomplishing these tasks and gathered information on four major challenges involving organizational responsibilities, coordination of monitoring activities with Canada, centralized information on monitoring activities, and unique lake environmental conditions. We gathered and analyzed information on existing organizational responsibilities, including those established by the GLWQA, statutes, and administrative decisions, along with the organizational responsibilities set forth in a May 2004 executive order. We interviewed officials and gathered information from EC, the Ontario Ministry of Natural Resources and Ministry of the Environment, and the Quebec Ministry of Environment to identify their ongoing monitoring activities and challenges to Canada’s participation in developing and implementing a comprehensive monitoring system for the Great Lakes. We identified and analyzed efforts for inventorying and coordinating monitoring activities in the Great Lakes Basin and obtained and analyzed information on a proposed Web-based inventory of monitoring efforts from GLNPO officials. We obtained and analyzed documentation about the environmental conditions for each of the Great Lakes and discussed with federal and state officials the difficulties in developing a basin-wide monitoring system. Finally, we gathered information on goals, monitoring, and the organizational structure for the Chesapeake Bay Program. We interviewed program, state, and nonprofit officials about how goals were developed and monitored and how results were communicated. 
We performed our work from August 2003 to May 2004 in accordance with generally accepted government auditing standards.

Nearly all of the officials we contacted endorsed the need for a comprehensive surveillance and monitoring system, and their comments included reasons why a system is needed or factors to consider in developing a system. See table 1 for a summary of these comments.

Appendix IV: State of Ohio Lake Erie Programs and Initiatives with Monitoring Activities

- Analysis of sport fish caught in Ohio waters for toxins; results are the basis for fish consumption advisories. (State funded program, state administered.)
- Biennially assess Ohio’s water bodies and report the status of impaired waters. (Federal requirement, jointly funded by federal and state governments; administered by the state.)
- Protect impaired or threatened waters by developing total maximum daily load limits by 2013. (Federal requirement, jointly funded by federal and state governments; administered by the state.)
- Conduct a nonpoint pollution abatement program with a focus on urban, residential, and commercial sources. (State initiated, jointly funded by federal and state governments.)
- Long-term program to reduce phosphorus loading into Lake Erie. (Joint federal and state funded program; administered by the state.)
- State initiated and funded.
- Indices measuring the health of streams based on the health and diversity of aquatic communities. (State initiated, jointly funded by federal and state governments.)
- Monitor swimming beaches for fecal bacteria contamination using E. coli as the test organism. (Joint federal and state funded program; administered by the state.)
- An analysis of water samples collected within the Lake Erie basin to assess sediment, nutrient, and metal compositions. (State initiated and funded.)

The Lake Erie Lakewide Management Plan (LaMP) contains goals stated as four ecosystem management objectives focused on land use, nutrients, aquatic and terrestrial species, and contaminants. 
For example, one objective addressing contaminants is that toxic chemical and biological contaminant loadings within the basin must decline to a level that would permit sustainable use of natural resources. Each objective has two to four subobjectives that, along with the objectives, are not expressed in quantitative terms, prioritized, or given established time frames. One subobjective under the contaminants objective is that toxic substances shall not exist in amounts detrimental to human health or wildlife and that exotic species should be prevented from colonizing the ecosystem, controlled where feasible, and reduced to a point where they do not impair the ecological function of Lake Erie. The plan does not state how progress in achieving these objectives will be tracked or when the objectives should be met. According to the plan, indicators were discussed but not selected by a LaMP working group, and tracking progress toward goals will not begin until indicators are selected. While indicators were not selected, the LaMP stated that extensive monitoring activities were ongoing and that an inventory conducted by Environment Canada showed that there were over 90 independent monitoring programs under way within the Lake Erie Basin. According to the LaMP, the indicators ultimately chosen will determine whether current monitoring will continue or new monitoring efforts will be initiated.

The Lake Michigan LaMP sets forth one overall goal, to restore and protect the integrity of the Lake Michigan ecosystem through collaborative partnerships, and 11 subgoals. These subgoals are stated as general questions, such as “can we drink the water” or “can we swim in the water,” with additional detail on the status of reaching the subgoal, challenges, and key steps to be taken to achieve the subgoal’s target. 
However, while these subgoals and key steps do contain some quantitative information and time frames, they are not prioritized and cannot be linked to indicators and monitoring so that progress under the subgoal can be measured. For example, under the subgoal “can we swim in the water,” the LaMP states that there were 206 beach closures in 2000 and that progress toward reaching the goal is “mixed.” It further identifies a challenge to develop real-time beach monitoring and states that, in 2004, the Great Lakes states should adopt criteria, standards, and monitoring programs for beach bacteria. The LaMP acknowledges that goals need to be linked to indicators and then to a monitoring strategy for tracking restoration progress. However, according to the LaMP Program Manager, the selection of indicators for Lake Michigan is still in process, and the scope of monitoring efforts being conducted in the Lake Michigan basin needs to be determined and coordinated. As a first step in developing a coordinated strategic monitoring plan, a monitoring group, the Lake Michigan Monitoring Coordination Council, has an effort under way to determine ongoing monitoring activities in Lake Michigan at the state and federal levels, according to the official.

For Lake Ontario, U.S. and Canadian officials derived the LaMP’s three overall ecosystem goals from an earlier plan, the Lake Ontario Toxics Management Plan, that was prepared in the late 1980s. For example, one goal derived from the plan for the LaMP is “to maintain the Lake Ontario ecosystem, and as necessary, restore or enhance it to support self-reproducing and diverse biological communities.” Under the three overall ecosystem goals, the LaMP also included the management plan’s ecosystem objectives in five areas: aquatic communities, wildlife, human health, habitat, and stewardship. 
These objectives describe in general terms the conditions necessary to achieve the overall ecosystem goals, but they are not stated in quantitative terms, are not prioritized, and do not contain time frames. The Lake Ontario LaMP also contains 11 indicators based on the Lake Ontario Toxics Management Plan and State of the Lakes Ecosystem Conference indicator work. According to the LaMP, most indicator monitoring needs are being met with existing monitoring programs, but further monitoring efforts are planned to provide a more complete assessment of lake conditions. The LaMP states that now that indicators have been adopted, U.S. and Canadian officials will work to develop a “cooperative monitoring” approach for promoting increased communication and coordination between their monitoring programs.

The Lake Superior LaMP differs from other LaMPs in that it was developed from an ongoing program, the Lake Superior Binational Program. This program was established in 1991 to restore and protect Lake Superior, and it is a partnership among the United States; Canada; the states of Minnesota, Wisconsin, and Michigan; the province of Ontario; and tribal government representatives that develops policies through a number of task forces, workgroups, and committees. The LaMP is one of the products developed by the program. The LaMP focuses on six areas: critical pollutants, habitat, terrestrial wildlife communities, aquatic communities, human health, and lake basin sustainability. While these areas are not prioritized, for critical pollutants the LaMP provides specific, measurable goals for reducing nine bioaccumulative toxic chemicals. For each chemical, a 1990 baseline amount was established, along with targets for chemical load reductions to be achieved every 5 years. For example, one target calls for reducing mercury sources 60 percent by 2000, 80 percent by 2010, and 100 percent by 2020. Similar goals are set for the other pollutants. 
While the goals are specific, the description of the monitoring process to measure progress is less specific, with little detail on the monitoring required to measure progress toward goals. For the critical pollutants, a menu of possible monitoring activities is mentioned, and the LaMP states that more work is needed to develop a coordinated monitoring program to evaluate progress toward goals and that data from state sources are needed for measuring progress. According to Minnesota officials responsible for tracking progress, they have difficulty collecting information from state regulatory agencies and, therefore, do not have sufficient information to measure progress toward reaching goals. They added that funds are not available for the monitoring needed to measure progress. The goals for the other five areas in the Lake Superior LaMP are not as specific and do not link indicators and monitoring to goals, leaving it unclear how progress toward goals will be measured. For example, the LaMP lists several strategies for pursuing sustainability, such as developing recycling programs and attracting industries that use recycled material, but no quantitative information, prioritization, or time frames are given for these strategies. The LaMP mentions several indicators that have been developed to track progress in promoting sustainability; however, these are not linked to specific measurable goals. Sustainability indicators will be used, according to the LaMP, to assess how fully the Binational Program’s vision statement is being realized. Ecosystem indicators for aquatic and terrestrial species are still under development.

- Ensure the sustainable use of water resources while confirming that the Great Lakes states retain authority over water use and diversion of Great Lakes waters.
- Promote programs to protect human health against adverse effects of pollution in the Great Lakes ecosystem.
- Control pollution from diffuse sources into the water, land, and air. 
- Continue to reduce the introduction of persistent bioaccumulative toxics into the Great Lakes ecosystem.
- Stop the introduction and spread of non-native aquatic invasive species.
- Enhance fish and wildlife by restoring and protecting coastal wetlands, fish, and wildlife habitats.
- Restore to environmental health the areas of concern identified by the International Joint Commission as needing remediation.
- Standardize and enhance the methods by which information is collected, recorded, and shared within the region.
- Adopt sustainable use practices that protect environmental resources and may enhance the recreational and commercial value of our Great Lakes.
- Restore and maintain beneficial uses in each of the 31 U.S. and binational areas of concern or “toxic hot spots,” with a special emphasis on remediation of contaminated sediment.
- Restore and protect the ecological and economic health of the Great Lakes by preventing the introduction of new invasive species and limiting the spread of established ones.
- Improve Great Lakes water quality and economic productivity by controlling nonpoint source pollution from water, land, and air pathways.
- Restore 100,000 acres of wetlands and critical coastal habitat while protecting existing high-quality fish and wildlife habitat in the Great Lakes Basin.
- Ensure the sustainable use and management of Great Lakes water resources to protect environmental quality and provide for water-based economic activity in the Great Lakes states.
- Meet domestic and international Great Lakes commitments through adequate funding for, and the efficient and targeted operation of, federally funded management and research agencies.
- Maximize the commercial and recreational value of Great Lakes waterways and other coastal areas by maintaining and constructing critical infrastructure and implementing programs for sustainable use.
- Lists five areas where action is needed, such as funding toxic cleanups, coordinating cleanup efforts, and treating contaminants. 
- Lists seven areas where action is needed, such as design of manufacturing products, minimizing resource extraction, and planning and managing food production and agriculture in relation to the surrounding ecosystem.
- Lists five areas where action is needed, such as promoting energy efficiency, conservation, and renewable energy sources.
- Sustainable Water Quantities and Flows Action Agenda: Lists eight areas where action is needed, such as implementing water withdrawal reform and restoring basin ecosystem functions damaged or lost due to harmful water withdrawal practices.
- Protecting and Restoring Species Action Agenda: Lists 13 areas where action is needed to address invasive aquatic and terrestrial species and protect threatened species.
- Protecting and Restoring Habitats Action Agenda: Lists 24 areas where action is needed to protect and restore aquatic, forest, urban, and interconnecting habitats and limit sprawl.

John B. Stephenson (202) 512-3841 ([email protected])

In addition to the individual named above, Willie Bailey, Greg Carroll, Nancy Crothers, John Delicath, Michael Hartnett, Karen Keegan, Amy Webbink, and John Wanska made key contributions to this report. 
The Great Lakes remain environmentally vulnerable, prompting the United States and Canada to agree on actions to preserve and protect them. As requested, this report (1) determines the extent to which current EPA monitoring efforts provide information for assessing overall conditions in the Great Lakes Basin, (2) identifies existing restoration goals and whether monitoring is done to track goal progress, and (3) identifies the major challenges to setting restoration goals and developing a monitoring system.

Current Environmental Protection Agency (EPA) monitoring does not provide the comprehensive information needed to assess overall conditions in the Great Lakes Basin because the required coordinated joint U.S./Canadian monitoring program has not been fully developed. Information collected from monitoring by other federal and state agencies does not, by design, provide an overall assessment of the Great Lakes because it is collected to meet specific program objectives or is limited to specific geographic areas. Multiple restoration goals have been proposed through efforts by EPA and other organizations. EPA developed basin-wide goals through its Great Lakes Strategy 2002 and goals for plans addressing individual lakes. Other organizations have also identified basin-wide restoration goals and priorities. Monitoring of progress toward goals is generally limited to tracking specific action items proposed in the Great Lakes Strategy 2002; other proposed goals are generally not monitored to determine progress. Efforts to coordinate basin-wide goals and a monitoring system face several challenges. First, the lack of clearly defined organizational leadership poses a major obstacle: both EPA's Great Lakes National Program Office (GLNPO) and a newly created interagency task force have coordination roles, raising uncertainty as to how leadership and coordination efforts will be exercised in the future. 
Second, coordinating existing restoration goals and monitoring activities among the many participating organizations within the United States and between the United States and Canada is a significant challenge. Third, centralized information from monitoring activities is not yet available, making it difficult to assess restoration progress. In addition, an inventory system developed by EPA and Canada may not have adequate controls on voluntarily provided information.
The National Industrial Security Program was established in 1993 for the protection of classified information. DSS administers the National Industrial Security Program on behalf of DOD and 23 other federal departments and agencies. DSS is responsible for providing oversight, advice, and assistance to more than 11,000 U.S. contractor facilities that are cleared for access to classified information. Contractor facilities can range in size, be located anywhere in the United States, and include manufacturing plants, laboratories, and universities. About 221 industrial security representatives work out of 25 DSS field offices across the United States and serve as the primary points of contact for these facilities. DSS is responsible for ensuring that these contractors meet requirements to safeguard classified information under the National Industrial Security Program. Contractors must have facility security clearances under this program before they can work on classified contracts.

To obtain a facility security clearance, contractors are required to self-report foreign business transactions on a Certificate Pertaining to Foreign Interests form. Examples of such transactions include foreign ownership of a contractor’s stock, a contractor’s agreements or contracts with foreign persons, and whether non-U.S. citizens sit on a contractor’s board of directors. DSS’s industrial security representatives provide guidance to contractors on filling out the certificate. If a contractor declares no foreign business transactions on the certificate, DSS places the certificate in the contractor’s file located in the field. When U.S. contractors with facility security clearances have changes in foreign business transactions to report, they are required to complete and resubmit the certificate; they must also resubmit it every 5 years, even if no foreign transactions take place. Because a U.S. 
company can own a number of contractor facilities, the corporate headquarters or another legal entity within that company is required to complete the certificate. When contractors declare foreign transactions on their certificates and notify DSS, industrial security representatives are responsible for ensuring that contractors properly identify all relevant foreign business transactions. They are also required to collect, analyze, and verify pertinent information about these transactions. For example, by examining various corporate documents, the industrial security representatives can determine corporate structures and ownership and identify key management officials. The representatives may consult with DSS counterintelligence officials, who can provide information about threats to U.S. classified information. If contractors’ answers on the certificates indicate that foreign transactions meet certain DSS criteria or exceed thresholds, such as the percentage of company stock owned by foreign persons, the representatives forward these FOCI cases to DSS headquarters. DSS headquarters works with contractors to determine what, if any, protective measures are needed to reduce the risk of foreign interests gaining unauthorized access to U.S. classified information. DSS field staff are then responsible for monitoring contractor compliance with these measures. Figure 1 shows highlights of the FOCI process. On a case-by-case basis, DSS headquarters can approve contractors’ use of one of six types of protective measures: voting trust agreements, proxy agreements, special security agreements, security control agreements, board resolutions, and limited facility clearances. These protective measures are intended to insulate contractor facilities from undue foreign control and influence and to reduce the risk of unauthorized foreign access to classified information. 
Protective measures vary in the degree to which foreign entities are insulated from classified information and are not intended to deny foreign owners the opportunity to pursue business relationships with their U.S.-based contractor facilities working on classified contracts. Table 1 provides a general description of each of these protective measures. In addition to these measures, DSS can also require contractors to take certain actions to mitigate specific FOCI situations, such as termination of loan agreements or elimination of debt owed to a foreign entity. For contractors operating under voting trust, proxy, special security, or security control agreements, industrial security representatives are supposed to conduct annual FOCI meetings with contractor staff who are responsible for ensuring compliance with these protective measures. In preparation for these annual meetings, contractors are required to produce and submit to DSS annual FOCI compliance reports describing specific acts of noncompliance with protective measures, changes in the contractor’s organizational structure or security procedures, and other issues that have occurred over the course of a year. Industrial security representatives should then review the reports to determine how contractors are fulfilling their obligations under the protective measures. In addition, DSS generally conducts security reviews annually for facilities that store classified information or every 18 months for facilities that do not have classified information on site. However, for contractors operating under voting trust, proxy, special security, or security control agreements, industrial security representatives are required to conduct a security review every 12 months whether the contractor has classified information on site or not. 
These reviews are designed to determine security vulnerabilities and contractor compliance with National Industrial Security Program requirements and to evaluate the overall quality of the facility’s security program, including compliance with protective measures to mitigate FOCI. DSS will not grant a new facility security clearance to a contractor until all relevant FOCI have been mitigated. In addition, DSS shall suspend an existing clearance if FOCI at a contractor facility has not been mitigated. A contractor with a suspended facility clearance can continue to work on an existing classified contract unless the government contracting office denies access to the existing contract. In addition, the contractor cannot be awarded a new classified contract until the clearance is restored. DSS does not systematically ask for, collect, or analyze information on foreign business transactions in a manner that helps it properly oversee contractors entrusted with U.S. classified information, nor does DSS aggregate and analyze information to determine the overall effectiveness of its oversight of FOCI contractors. Notably, DSS does not know if contractors are reporting foreign business transactions as they occur and does not know how long a contractor facility with unmitigated FOCI has access to classified information. Figure 2 shows a general description of gaps in DSS knowledge about the FOCI process. Furthermore, DSS field staff said they lack research tools and sufficient training on the subject of foreign transactions and face challenges with staff turnover. DSS does not systematically ask for information that would allow it to know if contractors are reporting certain foreign business transactions when they occur, which begins the process for reducing FOCI-related security risks. DSS industrial security representatives are responsible for advising contractors that timely notification of foreign business transactions is essential. 
The National Industrial Security Program Operating Manual requires contractors with security clearances to report any material changes to foreign business transactions previously reported to DSS but does not specify a time frame for doing so. DSS is dependent on contractors to self-report transactions by filling out the Certificate Pertaining to Foreign Interests form, but this form does not ask contractors to provide specific dates for when foreign transactions took place. In addition, DSS does not compile or analyze how much time passes before DSS becomes aware of foreign business transactions. DSS field staff told us that some contractors report foreign business transactions as they occur, while others report transactions months later, if at all. During our review, we found a few instances in which contractors were not reporting foreign business transactions when they occurred. One contractor did not report FOCI until 21 months after awarding a subcontract to a foreign entity. Another contractor hired a foreign national as its corporate president but did not report this transaction to DSS, and DSS did not know about the FOCI change until 9 months later, when the industrial security representative came across the information on the contractor’s Web site. In another example, DSS was not aware that a foreign national sat on a contractor’s board of directors for 15 months until we discovered it in the course of our audit work. Without timely notification from contractors, DSS cannot track when specific foreign business transactions took place and therefore is not in a position to take immediate action so that FOCI is mitigated, if necessary. In addition, DSS does not determine the time elapsed from reporting of foreign business transactions by contractors with facility clearances to the implementation of protective measures or when suspensions of facility clearances occur. 
Without protective measures in place, unmitigated FOCI at a cleared contractor increases the risk that foreign interests can gain unauthorized access to U.S. classified information. During our review, we found two cases in which contractors appeared to have operated with unmitigated FOCI before protective measures were implemented. For example, officials at one contractor stated they reported to DSS that their company had been acquired by a foreign entity. However, the contractor continued operating with unmitigated FOCI for at least 6 months. In the other example, a foreign-purchased contractor continued operating for 2 months with unmitigated FOCI. Contractor officials in both examples told us that their facility clearances were not suspended. According to the National Industrial Security Program Operating Manual, DSS shall suspend the facility clearance of a contractor with unmitigated FOCI. DSS relies on field office staff to make this determination. Because information on suspended contractors with unmitigated FOCI is maintained in the field, DSS headquarters does not determine at an aggregate level the extent to which and under what conditions it suspends contractors’ facility clearances due to unmitigated FOCI. DSS does not centrally collect and analyze information to determine the magnitude of contractors under FOCI and assess the effectiveness of its oversight of those contractors. For example, DSS does not know how many contractors under FOCI are operating under all types of protective measures and, therefore, does not know the extent of potential FOCI- related security risks. Although DSS tracks information on contractors operating under some types of protective measures, it does not centrally compile data on contractors operating under all types of protective measures. 
Specifically, DSS headquarters maintains a central repository of data on contractors under voting trust agreements, proxy agreements, and special security agreements—protective measures intended to mitigate majority foreign ownership. However, information on contractors under three other protective measures—security control agreements, limited facility clearances, and board resolutions—is maintained in paper files in the field offices. DSS does not aggregate data on contractors for all six types of protective measures and does not track and analyze overall numbers. In addition, DSS does not conduct overall analysis of foreign business transactions reported by contractors on their Certificate Pertaining to Foreign Interests forms or maintain aggregate information for contractors’ responses. Consequently, DSS does not know the universe of FOCI contractors operating under protective measures, and DSS cannot determine the extent to which contractors under FOCI are increasing or if particular types of foreign business transactions are becoming more prevalent. This information would help DSS target areas for improved oversight. According to DSS officials, centralizing and tracking information on contractors under all types of measures would require more resources because information is dispersed in paper files in DSS field offices around the country. DSS does not systematically compile and analyze trends from its oversight functions to identify overall compliance trends or concerns with implementation of protective measures by contractors. DSS industrial security representatives are responsible for ensuring compliance of FOCI contractors under certain protective measures through annual FOCI meetings where they discuss contractors’ compliance reports. Industrial security representatives notify headquarters of the results of the meetings and place compliance reports and their own assessments in paper files located in field offices. 
However, DSS headquarters does not use annual compliance reports to assess trends to evaluate overall effectiveness of the FOCI process. Finally, the use of protective measures at FOCI contractor facilities was designed in part to counter attempts to gather classified information through unauthorized means. DSS does not assess trends from its own counterintelligence data or information gathered by other intelligence agencies to evaluate whether protective measures are effectively mitigating FOCI risk across the board. For example, a 2004 DSS counterintelligence report states that foreign information targeting through e-mail and Internet communication and collection methods is on the rise. However, according to DSS officials, not all protective measures at FOCI contractors include provisions to monitor e-mail or other Internet traffic. By assessing counterintelligence trends to analyze the effectiveness of protective measures in countering foreign information collection attempts, DSS could identify weaknesses in its protective measures and adjust them accordingly. DSS’s field staff face numerous challenges: complexities in verifying FOCI cases, limited tools to research FOCI transactions, insufficient FOCI training, staff turnover, and inconsistencies in implementing guidance on FOCI cases. For industrial security representatives, verifying if a contractor is under FOCI is complex. Industrial security representatives cited various difficulties verifying FOCI information. To verify if a contractor is under FOCI, industrial security representatives are required to understand the corporate structure of the legal entity completing the Certificate Pertaining to Foreign Interests form and evaluate the types of foreign control or influence that exist for each entity within a corporate family. 
DSS officials informed us that tracing strategic company relationships, country of ownership, and foreign affiliations and suppliers, or reviewing corporate documentation—such as loan agreements, financial reports, or Securities and Exchange Commission filings—is complicated. For example, representatives are required to verify information on stock ownership by determining the distribution of the stock among the stockholders and the influence or control the stockholders may have within the corporation. This entails identifying the type of stock and the number of shares owned by the foreign person(s) to determine their authority and management prerogatives, which DSS guidance indicates may be difficult to ascertain in certain cases. According to DSS field officials, verifying information is especially difficult when industrial security representatives have limited exposure to FOCI cases. In some field offices we visited, industrial security representatives had few or no FOCI cases and, therefore, had limited knowledge about how to verify foreign business transactions. Some industrial security representatives in one field office told us they do not always have the tools needed to verify if contractors are under FOCI. As part of their review process, industrial security representatives are responsible for verifying what a contractor reports on its Certificate Pertaining to Foreign Interests form and determining the extent of foreign interests in the company. Industrial security representatives conduct independent research using the Internet or return to the contractor for more information to evaluate the FOCI relationships and hold discussions with management officials, such as the chief financial officer, treasurer, and legal counsel. DSS headquarters officials told us additional information sources, such as the Dun and Bradstreet database of millions of private and public companies, are currently not available in the field. 
However, some industrial security representatives stated that such additional resource tools would be beneficial for verifying complex FOCI information. In addition, industrial security representatives stated they lacked the training and knowledge needed to better verify and oversee contractors under FOCI. For example, DSS does not require its representatives to have financial or legal training. While some FOCI training is provided, representatives largely depend on DSS guidance and on-the-job training to oversee a FOCI contractor. In so doing, representatives work with more experienced staff or seek guidance, when needed, from DSS headquarters. In a 1999 review, DSS recognized that recurring training was necessary to ensure industrial security representatives remain current on complex FOCI issues and other aspects of the FOCI process. DSS headquarters officials said that they have held regionwide meetings where they discussed FOCI case scenarios and responded to questions about the FOCI process. However, we found that the training needs on complex FOCI issues are still a concern to representatives. In fact, many said they needed more training to help with their responsibility of verifying FOCI information, including how to review corporate documents, strategic company relationships, and financial reports. DSS field officials said the DSS training institute currently offers a brief training unit on FOCI covering basic information. DSS established a working group of DSS field and headquarters staff to look at ways to improve the training program, including more specific FOCI training. The group submitted recommendations in March 2005 to field managers for their review. DSS is also planning to work with its training institute to develop additional FOCI courses to better meet the needs of the industrial security representatives. 
According to field staff, industrial security representatives operate in an environment of staff turnover, which can affect their in-depth knowledge of FOCI contractors. Officials from one-third of the field offices we reviewed noted staff retention problems. DSS officials at two of these field offices said that in particular they have problems retaining more experienced industrial security representatives. Field officials said that when an industrial security representative retires or leaves, the staff member’s entire workload is divided among the remaining representatives, who already have a substantial workload. In addition, DSS guidance advises field office officials to rotate contractor facilities among industrial security representatives every 3 years, if possible, as a means of retaining DSS independence from the contractors. DSS officials told us the rotation can actually occur more frequently because of staff turnover. DSS headquarters officials said they are forming a working group to help improve staff retention in the field. Compounding these challenges are inconsistencies among field offices in how industrial security representatives said they understood and implemented DSS guidance for reviewing contractors under FOCI. For example, per DSS guidance, security reviews and FOCI meetings should be performed every 12 months for contractors operating under special security agreements, security control agreements, voting trust agreements, and proxy agreements. However, we found that some industrial security representatives were inconsistent in implementing the guidance. For example, one representative said a contractor under a special security agreement was subject to a security review every 18 months because the contractor did not store classified information on-site. 
In addition, two industrial security representatives told us they did not conduct annual FOCI meetings for contractors that were operating under a proxy agreement and security control agreement, respectively. We also found that industrial security representatives varied in their understanding or application of DSS guidance for when they should suspend a contractor’s facility clearance when FOCI is unmitigated. The guidance indicates that when a contractor with a facility clearance is determined to be under FOCI that requires mitigation by DSS headquarters, the facility security clearance shall be suspended until a protective measure is implemented. However, we were told by officials in some field offices that they rarely suspend clearances when a contractor has unmitigated FOCI as long as the contractor is demonstrating good faith in an effort to provide documentation to DSS to identify the extent of FOCI and submits a FOCI mitigation plan to DSS. Officials in other field offices said they would suspend a contractor’s facility clearance once they learned the contractor had unmitigated FOCI. The protection of classified information has become increasingly important in light of the internationalization of multibillion-dollar cooperative development programs, such as a new-generation fighter aircraft, and a growing number of complex cross-border industrial arrangements. Although such developments offer various economic and technological benefits, there can be national security risks when foreign companies control or influence U.S. contractors with access to classified information. Given the growing number of DOD contractors with connections to foreign countries, it is critical for DSS to ensure that classified information is protected from unauthorized foreign access. In carrying out its responsibilities, DSS is dependent on self-reported information from the contractors about their foreign activities, creating vulnerabilities outside of DSS’s control. 
Within this environment, unless DSS improves the collection and analysis of key information and provides its field staff with the training and tools they need to perform FOCI responsibilities, DSS will continue to operate without knowing how effective its oversight is at reducing the risk of foreign interests gaining unauthorized access to U.S. classified information. To improve knowledge of the timing of foreign business transactions and reduce the risk of unauthorized foreign access to classified information, we recommend that the Secretary of Defense direct the director of DSS to take the following three actions: clarify when contractors need to report foreign business transactions; determine how contractors should report and communicate dates of specific foreign business transactions to DSS; and collect and analyze when foreign business transactions occurred at contractor facilities and when protective measures were implemented to mitigate FOCI. To assess overall effectiveness of DSS oversight of contractors under FOCI, we recommend that the Secretary of Defense direct the director of DSS to take the following three actions: collect and analyze data on contractors operating under all protective measures as well as changes in types and prevalence of foreign business transactions reported by contractors; collect, aggregate, and analyze the results of annual FOCI meetings, contractors’ compliance reports, and data from the counterintelligence community; and develop a plan to systematically review and evaluate the effectiveness of the FOCI process. 
To better support industrial security representatives in overseeing contractors under FOCI, we recommend the Secretary of Defense direct the director of DSS to formulate a human capital strategy and plan that would encompass the following two actions: evaluate the needs of representatives in carrying out their FOCI responsibilities and determine and implement changes needed to job requirements, guidance, and training to meet FOCI responsibilities; and explore options for improving resource tools and knowledge-sharing efforts among representatives. In commenting on a draft of our report, DOD disagreed with our conclusions that improvements are needed to ensure sufficient oversight of contractors under FOCI, and it also disagreed with our recommendations to improve oversight. Overall, DOD’s comments indicate that it believes that the actions DSS takes when it learns of FOCI at contractors are sufficient. However, DOD has not provided the evidence necessary to support its assertions. In fact, we found two cases in which contractors appeared to have operated with unmitigated FOCI before protective measures were put into place. Unmitigated FOCI at contractors increases the risk that foreign interests can gain unauthorized access to U.S. classified information. Further, DOD states that we did not establish a link between collecting and analyzing FOCI data and the effectiveness of DSS’s oversight or the protection of classified information. We found that DSS lacks fundamental FOCI information—including information on the universe of FOCI contractors and trends in overall contractor compliance with protective measures—that is needed to determine the effectiveness of the FOCI process and the sufficiency of oversight. Ultimately, without making this determination, DSS cannot adequately ensure it is taking necessary steps to reduce the risk of foreign interests gaining unauthorized access to classified information. 
Unless our recommendations are implemented, we are concerned that DSS will continue to operate on blind faith that its FOCI process is effective and its oversight is sufficient. DOD did not concur with seven of our recommendations and only partially concurred with the eighth. Regarding our first three recommendations, which aim to improve DSS’s knowledge of the timing of foreign business transactions and reduce the risk of unauthorized foreign access to classified information, DOD argues that having such information will not help protect classified information. However, as we noted in our report, without this information, DSS is not in a position to know when FOCI transactions occur so that timely protective measures can be implemented to mitigate FOCI as needed—the purpose of the FOCI process. Regarding our next three recommendations, which aim to enable DSS to assess the overall effectiveness of its oversight of contractors under FOCI, DOD argues that it does not need to collect and analyze information on the universe of contractors under FOCI and trends in foreign business transactions, or aggregate compliance and counterintelligence information. However, without this information, DSS limits its ability to identify vulnerabilities in the FOCI process and to target areas for improving oversight of contractors, including potential changes to protective measures. DOD also argues that it has three mechanisms to systematically evaluate DSS’s processes: DSS’s Inspector General, a management review process for industrial security field office oversight, and a standards and quality program. However, DOD has not provided evidence in its comments that these mechanisms are focused on systematically reviewing and evaluating the effectiveness of the FOCI process. 
Regarding our last two recommendations—to formulate a human capital strategy and plan that would better support industrial security representatives in overseeing FOCI contractors—DOD does not believe that its industrial security representatives need additional support. DOD supports this belief with two points. First, DOD states that because less than 3 percent of the approximately 12,000 cleared companies overseen by DSS have any FOCI mitigation, most DSS industrial security representatives do not oversee such contractors. Yet it is unclear how DOD arrived at these figures because DSS does not collect and analyze information on all contractors operating under protective measures. Regardless of the number of these contractors, industrial security representatives must have adequate support—including training and guidance—to verify if contractors are under FOCI and to ensure contractors comply with any protective measures put in place. In the course of our review, we found that industrial security representatives are not sufficiently equipped to fulfill their FOCI responsibilities. Second, DOD noted that DSS is under new leadership and is exploring operational improvements as well as implementing a new industrial security information management system. While it is too early to assess the effect of these proposals, it is also unclear how these efforts will bring about any needed changes to industrial security representatives’ job requirements, guidance, tools, and training. As we concluded in our report, DSS’s dependence on self-reported information from contractors about their foreign activities creates vulnerabilities outside of DSS’s control. Given these vulnerabilities, it is imperative that DSS improve the collection and analysis of key information on the FOCI process and provide its industrial security representatives with the training and tools they need to perform their FOCI responsibilities. 
If DSS continues to operate without knowing how effective its oversight is and does not support the representatives in carrying out their FOCI responsibilities, then the value of DSS’s management and the FOCI process should be open for further examination. Therefore, we did not modify our recommendations. DOD also provided technical comments, which we addressed. DOD’s letter is reprinted in appendix II, along with our evaluation of its comments. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Director, Defense Security Service; the Assistant to the President for National Security Affairs; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-4841. Major contributors to this report are Anne-Marie Lasowski, Maria Durant, Ian A. Ferguson, Suzanne Sterling, Kenneth E. Patton, Lily J. Chin, and Karen Sloan. To assess the Defense Security Service’s (DSS) process for determining and overseeing contractors under foreign ownership, control, or influence (FOCI), we reviewed Department of Defense (DOD) regulations and guidance on FOCI protective measures included in the National Industrial Security Program Operating Manual, and the Industrial Security Operating Manual, as well as DSS policies, procedures, and guidance for verifying contractors under FOCI and for overseeing them. We discussed with DSS officials at headquarters and field locations how they use DSS guidance to oversee FOCI contractors. We also discussed DSS roles and responsibilities for headquarters and field staff and challenges in overseeing contractors that report FOCI and the use of FOCI information to evaluate effectiveness of the process. 
We reviewed DSS training materials to learn about the type of training DSS offers industrial security representatives in meeting their FOCI responsibilities. We also examined FOCI studies conducted by DSS to determine the results of earlier DSS reviews of the FOCI process. We visited nine field offices that varied in how many FOCI contractors they monitored and in their geographic location. Through discussions with DSS officials at headquarters in Alexandria, Virginia, and from nine field offices, we identified FOCI contractors operating under various protective measures and examined DSS actions to verify FOCI and oversee the implementation of protective measures at contractor facilities. We collected information on a nonrepresentative sample of 27 contractor facility case files reviewed by DSS for FOCI. In addition, we visited 8 of the 27 contractor facilities and spoke with security officials, corporate officers, and board members to obtain additional clarification on the types of protective measures and the FOCI process. We spoke with DSS headquarters and field staff regarding actions taken to implement protective measures and reviewed supporting documentation maintained by DSS and contractor facilities. During our visits to nine field offices, we discussed the contents of selected contractor facility file folders to understand how DSS oversees contractors’ implementation of protective measures, determines unmitigated FOCI, and assesses the effectiveness of the FOCI process. Because we did not take a statistical sample of case files, the results of our analyses cannot be generalized. However, we confirmed that the data used to select the files that we reviewed were consistent with the information in the facility files that we reviewed. The following are GAO’s comments on the Department of Defense’s letter dated June 29, 2005. 1. 
It is unclear how DOD came to the conclusion that our report lacks an understanding of the national policy governing contractors’ access to classified information, given that our description of the policy and process in the background of our report is taken directly from documentation provided by DSS. Further, DOD did not provide in its technical comments any suggested amendments to remove perceived misunderstandings from our report. 2. Cleared U.S. citizens need not break the law for foreign interests to gain unauthorized access to classified information or adversely affect performance of classified contracts. Classified information can be at risk when foreign nationals at a cleared FOCI contractor facility are not identified and timely protective measures are not established to mitigate their influence. 3. DOD’s position that there is little in our report that would enable DSS to improve the FOCI process or justify the cost of implementing our recommendations underscores the department’s failure to grasp the gravity of our findings. DOD has neither systematically evaluated the effectiveness of its FOCI process nor identified opportunities to strengthen its oversight for contractors under FOCI. Our recommendations specifically target correcting these weaknesses. Further, raising concerns about cost without evaluating the effectiveness of its FOCI process is shortsighted. 4. According to the National Industrial Security Program Operating Manual, contractors are required to report material changes to FOCI information previously reported and every 5 years, even if no change occurs. We added a footnote to further clarify the definition of foreign business transactions used in our report. 5. DOD’s response concerning self-reporting underscores the department’s complacency regarding its responsibility to take actions needed to prevent foreign interests from gaining unauthorized access to U.S. classified information. 
While we recognize that DSS is dependent on self-reporting and that some vulnerabilities are outside of DSS’s control, there are numerous steps DOD could take to mitigate these vulnerabilities. For example, if DSS implemented our recommendation to clarify when reporting should occur and require reporting of the dates when specific foreign business transactions took place, then DSS could monitor whether contractors are reporting foreign transactions on time and put mitigation measures in place, as appropriate.

6. While DOD maintains that contractors are to report material changes concerning FOCI information as they occur, we found that the National Industrial Security Program Operating Manual does not state this. As we reported, DSS field staff told us that while some contractors report transactions as they occur, some do not report transactions until months later, if at all. Specifying a time frame for contractors could result in more timely reporting of these transactions.

7. As we reported, the FOCI process begins when a contractor reports FOCI information. Having information on when foreign transactions occur would enable DSS to take timely action to impose safeguards or restrictions authorized by the National Industrial Security Program Operating Manual.

8. Unmitigated FOCI at a cleared contractor increases the risk that foreign interests can gain unauthorized access to U.S. classified information. During our review, we found two cases in which contractors appeared to have operated with unmitigated FOCI before protective measures were put in place. Therefore, it is important to know the length of time between when a foreign transaction occurs and when protective measures are put in place to mitigate FOCI.

9.
According to the National Industrial Security Program Operating Manual, a contractor under FOCI with an existing facility clearance shall have its clearance suspended or revoked unless protective measures are established to remove the possibility of unauthorized access to classified information or of adverse effects on the performance of classified contracts. DOD’s characterization of DSS as having the option to suspend the clearance of contractors with unmitigated FOCI seems to differ from what is stated in the manual.

10. It is unclear why DOD does not see the value in collecting information on contractors operating under all six protective measures, when DSS already centrally collects information on contractors operating under three measures. DSS cannot assess the overall effectiveness of its FOCI process unless it has a complete and accurate account of contractors operating under all types of protective measures.

11. It is unclear how DOD determined that less than 3 percent of its cleared contractors are operating under all six protective measures because DSS does not centrally collect and analyze this information for all six measures. In addition, the most recent information provided to us by DSS indicated that there are about 11,000 contractor facilities participating in the National Industrial Security Program, rather than the 12,000 cited in DOD’s comments. Further, DOD did not provide technical comments to revise the number of contractor facilities stated in our report.

12. Industrial security representatives may use the results of annual meetings, compliance reports, and counterintelligence data to assess an individual contractor’s security posture. However, as stated in our report, DSS does not systematically compile and analyze trends from these oversight activities.
Aggregating overall compliance and counterintelligence trends is valuable because it would allow DSS to identify actual or potential weaknesses, evaluate effectiveness, and take actions as needed to improve its FOCI process.

13. Citing how long the program has been in existence misses the point, and DOD does not provide evidence that the needs of representatives are well known. As we reported, industrial security representatives face numerous challenges in carrying out their FOCI responsibilities, which form the basis of our recommendation to evaluate the needs of the representatives. Assessing their needs is particularly important given the increasingly complex environment—characterized by international cooperative defense programs and a growing number of cross-border defense industrial relationships—in which industrial security representatives work.

14. As stated in our report, industrial security representatives told us they lacked the training and knowledge they needed to verify complex FOCI cases and oversee contractors under FOCI.
The Department of Defense (DOD) is responsible for ensuring that U.S. contractors safeguard classified information in their possession. DOD delegates this responsibility to its Defense Security Service (DSS), which oversees more than 11,000 contractor facilities that are cleared to access classified information. Some U.S. contractors have foreign connections that may require measures to be put into place to reduce the risk of foreign interests gaining unauthorized access to classified information. In response to a Senate report accompanying the National Defense Authorization Act for Fiscal Year 2004, GAO assessed the extent to which DSS has assurance that its approach provides sufficient oversight of contractors under foreign ownership, control, or influence (FOCI). DSS’s oversight of contractors under FOCI depends on contractors self-reporting foreign business transactions such as foreign acquisitions. As part of its oversight responsibilities, DSS verifies the extent of the foreign relationship, works with the contractor to establish protective measures to mitigate the foreign interests, and monitors contractor compliance with these measures.

In summary, GAO found that DSS cannot ensure that its approach to overseeing contractors under FOCI is sufficient to reduce the risk of foreign interests gaining unauthorized access to U.S. classified information. First, DSS does not systematically ask for, collect, or analyze information on foreign business transactions in a manner that helps it properly oversee contractors entrusted with U.S. classified information. In addition, DSS does not collect and track the extent to which classified information is left in the hands of a contractor under FOCI before measures are taken to reduce the risk of unauthorized foreign access. During our review, we found instances in which contractors did not report foreign business transactions to DSS for several months.
We also found a contractor under foreign ownership that appeared to operate for at least 6 months with access to U.S. classified information before a protective measure was implemented to mitigate the foreign ownership. Second, DSS does not centrally collect and analyze information to assess its effectiveness and determine what corrective actions are needed to improve oversight of contractors under FOCI. For example, DSS does not know the universe of all contractors operating under protective measures, the degree to which contractors are complying overall with measures, or how its oversight could be strengthened by using information such as counterintelligence data to bolster its measures. Third, DSS field staff face a number of challenges that significantly limit their ability to sufficiently oversee contractors under FOCI. Field staff told us they lack the research tools and training to fully understand the significance of corporate structures, legal ownership, and complex financial relationships when foreign entities are involved. Staff turnover and inconsistencies in how guidance is implemented also detract from field staff’s ability to effectively carry out FOCI responsibilities.
EPA relies heavily on grants to carry out its environmental mission; over one-half of its $7.6 billion budget for fiscal year 2000 was provided for grants. Grants are used (1) to financially support continuing environmental programs administered by state and local governments and (2) to fund other environmental projects. During fiscal year 1999, EPA awarded $1.8 billion for continuing environmental programs and $716 million for environmental projects—the subject of this report. Grants are funded by EPA’s headquarters offices, such as the Office of Research and Development and the Office of Air and Radiation, and by EPA regional offices. The administration of these grants (from activities prior to the award through the closeout of completed or inactive grants) has been delegated to EPA’s Grants Administration Division and 10 regional Grants Management Offices.

EPA carries out its grant programs within the framework of the strategic goals and objectives contained in its strategic plan. The plan sets forth 10 goals with 41 objectives and 123 subobjectives that cover its major programs, such as those for clean air, clean water, and pesticides. For example, EPA’s clean air goal has 4 objectives and 14 subobjectives. One of the four objectives is “Attain National Ambient Air Quality Standards for Ozone and Particulate Matter.” This objective in turn has several subobjectives, including “National Ambient Air Quality Standards for Ozone.”

Once potential grantees submit their grant applications, EPA officials review them. If the grant application is approved, the grantee is awarded the grant and funds are made available for the purposes specified in the grant. In connection with the grant award, EPA’s program office officials determine how the grant will support a particular strategic goal, objective, and subobjective. In fiscal year 1999, EPA began coding new grant awards by “program result codes,” which are aligned with goals, objectives, and subobjectives.
Before 1999, EPA officials assigned “program element codes” to grant awards, which reflected the program and EPA office awarding the grant. EPA awards grants to organizations and individuals under regulations that establish uniform administrative requirements throughout the agency. The regulations cover a range of grant activities—from those prior to the award through the closeout of completed or inactive grants—and a variety of topics, such as grantee reporting requirements and allowable uses of grant funds. Particular regulations cover grants to institutions of higher education, hospitals, and nonprofit organizations (40 C.F.R. part 30), as well as assistance to state, local, and Indian tribal governments (40 C.F.R. part 31). Other EPA regulations cover grants under specific programs, such as Superfund (40 C.F.R. part 35, subpart O), and specific types of assistance, such as fellowships (40 C.F.R. part 46). EPA regulations authorize the agency to deviate from certain regulations on a case-by-case basis. We previously reported that EPA used this deviation authority extensively to close out inactive grants without following certain closeout requirements.

EPA awarded about 17,000 project grants totaling $2.8 billion in fiscal years 1996 through 1999. Project grant funds were concentrated in five categories—investigations, surveys, or studies; research; Superfund site cleanup support; the senior environmental employment program; and training—which together accounted for $2.3 billion, or 80 percent of all funds. The grants were also concentrated by type of recipient: nonprofit organizations, state or local governments, and colleges or universities received approximately 89 percent of the total project grant amount. In fiscal year 1996 through fiscal year 1999, project grants focused on (1) investigations, surveys, or studies; (2) research; (3) Superfund site cleanup support; (4) the senior environmental employment program; and (5) training.
The remaining project grants were awarded in 37 other EPA areas, such as the Hardship Grants Program for Rural Communities and the Great Lakes National Program. (See app. I for the number and value of all project grants, fiscal years 1996 through 1999). As shown in figure 1, grants for investigations, surveys, and studies accounted for the single largest category—about 30 percent of all grant dollars awarded. A brief description of these categories follows. EPA awarded $851.8 million in grants for investigations, surveys, or studies for fiscal years 1996 through 1999. These grants were provided for a wide range of activities supporting investigations, surveys, studies, and special purpose assistance in the areas of air and water quality, hazardous waste, toxic substances, and pesticides. These grants are also used for evaluating economic or social consequences related to environmental strategies and for other efforts to support EPA environmental programs. Finally, the grants are used to identify, develop, or demonstrate pollution control techniques or to prevent, reduce, or eliminate pollution. The following examples illustrate the variety of activities funded by these grants: In February 1999, EPA awarded a $10,000 grant to Monitor International, a nonprofit organization located in Annapolis, Maryland, to develop a feasibility study and action plan for a science and education center in Indonesia. In August 1999, EPA awarded a $1.5 million grant to the West Virginia University Research Corporation, National Research Center for Coal and Energy. With the grant funds the center was to provide technical assistance, outreach, a library of databases, maintenance of a Web site, and publications on the design, implementation, and maintenance of alternative wastewater treatment and collection systems for small communities. EPA awarded research project grants totaling $690.9 million. 
Generally, these grants were to fund laboratory and other research into a variety of environmental problems, such as air pollution and its impact on asthma. For example, EPA awarded a $4.6 million grant to the University of New Orleans in September 1999 for research and development on technical solutions to waste management problems faced by the academic, industrial, and governmental communities. EPA awarded about $408.8 million in grants to states and other government entities and to nonprofit organizations to conduct cleanup activities at specific hazardous waste sites and to implement the requirements of the Superfund program. For example, in September 1999, EPA awarded a $1.5 million grant to the Wisconsin Department of Natural Resources to complete an investigation and study at a waste site in order to select a cleanup remedy for controlling the risks to human health and the environment. The Senior Environmental Employment program, for which EPA makes grants authorized by the Environmental Programs Assistance Act of 1984, accounted for approximately $199.1 million. Under this program, EPA awards cooperative agreements to organizations to enable individuals 55 or older to provide technical assistance to federal, state, or local environmental agencies for pollution prevention, abatement, and control projects. For example, in September 1999, EPA awarded a $1.3 million grant to the National Older Worker Career Center to provide general support to EPA’s staff within the Office of Pesticides Program. EPA awarded $108.3 million in training grants to government, educational, and nonprofit entities, which provide environment-related training on a variety of topics. For example, EPA awarded a $1.5 million grant in July 1999 to North Carolina State University to provide state-of-the-art training courses on the Clean Air Act Amendments.
Nonprofit organizations, state or local governments, or colleges and universities received most project grant dollars awarded by EPA in fiscal years 1996 through 1999, as table 1 shows. Nonprofit organizations received the largest portion of project grant dollars ($741.8 million, or 33 percent of the total), and the majority of these funds were provided to support investigations, the senior environmental employment program, and research. State or local governments received the next largest amount, with most of these funds provided for Superfund site cleanup support or for investigations. Colleges and universities also received a significant amount of project grant funds, the majority of which was for research. For-profit organizations, individuals, and other government entities, such as water district authorities, also received project grant funds.

In October 1998, EPA began designating grant awards to indicate which Results Act goal, objective, and subobjective each grant supported. EPA intended to account for all new obligations by using a program results code (PRC) that aligned with the agency’s strategic goals, objectives, and subobjectives. (Previously, EPA accounted for grant funds by using program element codes, which identified the program and EPA office that awarded the grant.) PRCs allow EPA to account for its grant award amounts by goal, objective, and subobjective. EPA project officers assign codes to a grant after deciding which grants to award. Approximately 82 percent of the $1.4 billion in project grants EPA awarded in fiscal years 1999 and 2000 that were assigned a PRC was concentrated in 4 of EPA’s 10 goals: clean air, clean and safe water, waste management, and sound science. For 7 of the 100 grants we reviewed, the relationship between the activities funded by the grant and the goal(s), objective(s), and subobjective(s) that EPA identified was not clear.
EPA officials explained that for six of these grants the definitions of the goals, objectives, and subobjectives were sufficiently broad to encompass the activities funded by the grants, and agreed that one grant had been assigned the incorrect subobjective.

The grant award process involves several steps before funds are provided to the grantee. EPA may solicit grant proposals from potential grantees, or grantees may submit unsolicited grant proposals to EPA. In either situation, the grant proposal details the grant’s purpose, amount, and time frame. EPA officials review the grant proposals and frequently discuss them with the submitting entity—a process that may result in modifications to the scope of activities, funding amount, or time period. Once EPA reaches a final decision to fund a grantee, it provides the grantee a commitment letter. In preparing the final grant award document, EPA makes several determinations regarding the authority for the grant activities, the funding authority for the grant, and the PRC specifying the relevant Results Act goal, objective, and subobjective. The PRC is entered into EPA’s automated systems to record the obligation of funds under the goals. Because some grants fund a variety of activities, more than one PRC may be designated for a particular grant. According to EPA officials, the designation of a PRC identifying the goal, objective, and subobjective to be supported by the grant is part of the grant award. In practice, EPA designates Results Act goal(s), objective(s), and subobjective(s) after the decision has been made to award a particular grant.

EPA assigned PRCs to approximately $1.2 billion of the project grants made in fiscal years 1999 and 2000. Most of these funds aligned with the agency goals for waste management ($438.7 million), clean and safe water ($298.1 million), sound science ($146.8 million), and clean air ($119.2 million).
Figure 2 shows the distribution of these grant dollars among Results Act goals for fiscal years 1999 and 2000. The remaining $222 million in project grant funds assigned PRC codes were aligned with one of EPA’s six other strategic goals—safe food; preventing pollution and reducing risk in communities, homes, workplaces and ecosystems; reduction of global and cross-border environmental risks; expansion of Americans’ right to know about their environment; a credible deterrent to pollution and greater compliance with the law; and effective management. For 7 of the 100 grants that we reviewed, the funded grant activities did not appear to match the EPA activities defined for the assigned PRC code. More specifically, two of the grants were not clearly related to any EPA goals, objectives, or subobjectives; three grants were clearly related to the indicated goals, but not the objectives and subobjectives; and two grants were related to the indicated goals and objectives, but not the subobjectives. A brief description of these grants follows. In June 1999, EPA awarded a $2.5 million grant to the Brownsville Public Utilities Board in Texas to support specific planning, engineering, environmental, and legal activities related to the development and construction of a dam and reservoir project. The PRC indicated that the grant was to support the Results Act subobjective of working with states and tribes to ensure reporting consistency under the Clean Water Act and Safe Drinking Water Act. In June 1999, EPA awarded a $2 million grant to the University of Missouri to conduct research on the economic, social, biological, physical, and ecological benefits of tree farming. The PRC indicated that the grant was to support the Results Act objective of promoting and implementing sector-based environmental management approaches that achieve superior environmental results at less cost than through conventional approaches. 
In August 1999, EPA awarded a $20,000 grant to the Urban Land Institute to conduct a conference on smart growth that was coded for Clean and Safe Water goal activities, such as watershed assessment and protection, coastal and marine protection, water quality criteria and standards, or Chesapeake Bay and Gulf of Mexico activities. In January 2000, EPA awarded a $228,000 grant to Michigan State University to examine public opinions regarding the value of wetland ecosystems. The PRC indicated that the grant was to support the Results Act subobjective of cleaning up contaminants that are associated with high-priority human health and environmental problems. In May 2000, EPA awarded a $64,000 grant to Science Services, a nonprofit organization located in Washington, D.C., for hosting an international science and engineering fair for high school students competing for monetary science awards. The PRC indicated that the grant was to support the Results Act goal of supporting research in global climate change. In June 2000, EPA awarded an $8,000 grant to Environmental Learning for Kids, Denver, Colorado, to educate culturally diverse families about environmental issues; activities included overnight camping trips and monthly outdoor workshops. The PRC indicated that the grant was to support the Results Act objective for activities related to providing training to teachers for making presentations to grades K-12. In June 2000, EPA awarded a $5,000 grant to Southwest Youth Corps in Colorado to support the organization and management of the Conservation Corps. The primary purpose of this grant was to train young adults on environmental issues. The PRC indicated that the grant was to support the Results Act objective of providing activities related to training teachers on making presentations to grades K-12.
EPA officials explained that the project officer had assigned an incorrect subobjective to the grant EPA awarded to Michigan State University to examine public opinion on the value of wetland ecosystems. EPA believes that the definitions of the goals, objectives, and subobjectives for the other six grants were sufficiently broad to encompass the activities funded by the grants. According to EPA officials, it would be impossible, when defining Results Act goals, objectives, and subobjectives, to list every activity that could apply. However, they stated that it was important to designate the correct PRC for grant activities. EPA approved at least one deviation from its regulations for 25 of the 100 grants we reviewed, and for 15 grants EPA authorized more than one deviation. Most of the deviations were made on a case-by-case basis to waive requirements relating to grant budget periods, matching fund requirements, or other regulations. Individual deviation decision memoranda contained in the grant files documented these decisions. Deviations from regulations for 6 grants, made under EPA’s Science to Achieve Results (STAR) program, were not determined on a case-by-case basis. The STAR fellowship grant program, which is administered by EPA’s Office of Research and Development (ORD), by design provides grants with greater dollar amounts and longer time periods than allowed by EPA’s regulations. According to an EPA official, the STAR program, which began in 1995, is EPA’s largest fellowship program in terms of dollars and number of fellowships. According to ORD officials, the program was designed to be consistent with other federal fellowship programs for scientists. STAR fellowship grants deviate from EPA’s grant regulations governing fellowships in three ways: While the regulations place a limit of $750 on grant funds that can be used to purchase books and supplies, STAR fellowship grants provide up to $5,000 for this purpose. 
The regulations limit fellowships to 1 year, while STAR fellowships provide up to 2 years for master’s degree students and up to 3 years for doctoral students. The regulations stipulate that grant funds may be used for purchasing books and supplies if provided directly to the student; however, STAR fellowship grant funds are used to directly pay the educational institution for these items. EPA does not track the number of deviations it makes. However, regulations require that the authority for each deviation be documented in the appropriate grant file. The agency awarded 471 STAR fellowship grants in fiscal years 1996 through 1999, totaling $34.1 million in funding. EPA prepared and processed a request for deviation for each of these grants. ORD officials stated that they wanted the STAR fellowship program to parallel a National Science Foundation fellowship program, which authorizes greater funding levels and longer funding periods than allowed by EPA’s regulations. They also stated that they thought providing payments for books and supplies directly to an institution would provide better stewardship and control over the funds and ensure funds were used for authorized purposes. The officials stated that, rather than amending the regulations solely for the STAR program, which they considered time-consuming and a low priority, they opted to use deviations in awarding the grants and currently do not have staff in place to work on amending the regulations. They acknowledged, however, that the regulations are outdated and should be reviewed for possible revision.

The other deviations we reviewed had been made on a case-by-case basis. Eleven of these deviations involved EPA waiving a requirement that the grant budget date and the project period ending date coincide.
For example, in January 1999, EPA amended a grant awarded in March 1997 to the Northeast States for Coordinated Air Use Management to provide an additional $200,000 for research in establishing an ambient air monitoring network for mercury deposition within New England. The project period and the budget period ending dates were changed from March 1999 to March 2001, deviating from EPA’s regulations that require the budget period not exceed 2 years from the award date. EPA approved the deviation, allowing the grantee to expand the number of sampling sites to obtain a better measurement of the pollution problem. EPA made nine deviations that waived the grantee matching funding requirement. For example, in September 1999, EPA awarded a $4.6 million grant to the University of New Orleans to fund the University Urban Waste Management and Research Center, which provides research and technical assistance to cities with wet weather conditions typical of coastal areas. EPA waived the minimum 5-percent nonfederal matching share requirement for the university. However, this deviation proved unnecessary because the regulation requiring matching funds had been repealed in 1996. Unaware of the change in regulations, EPA officials continued to grant deviations for a matching fund requirement well into fiscal year 2000. Appendix II details the deviations EPA made for the grants we reviewed, aside from those associated with the STAR fellowship program.

EPA has extensively used its deviation authority for STAR fellowship grants, citing the time and resources that would be needed to amend its regulations. While amending the grant regulations would entail a time and resource cost in the short term, EPA’s regulations are intended to provide consistency and transparency for the agency’s grant activities and should reasonably reflect actual practices in the agency’s grant programs.
In this case, the regulations do not reflect the actual practice in the STAR fellowship grant program—EPA’s largest fellowship grant program—which routinely awards more money for longer periods of time than is authorized by EPA’s fellowship regulations. Consistency between regulations and practice could be achieved by amending either EPA’s grant regulations or the practices of the STAR fellowship program. To ensure that EPA’s fellowship regulations are consistent with actual practices, we recommend that the Administrator of EPA direct the Assistant Administrator for Administration and Resources Management to include in future amendments to its fellowship regulations the funding amounts, time periods, and payment methods that will meet the needs of the STAR fellowship grant program.

We provided EPA with a draft of this report for review and comment. The agency agreed with the findings in the report and suggested several changes to improve clarity, which we incorporated into the report where appropriate. EPA agreed with our recommendation to update the fellowship regulation and plans to establish a workgroup to ensure that the regulation reflects the current requirements of the STAR fellowship program. We conducted our review from May 2000 through March 2001 in accordance with generally accepted auditing standards. Our scope and methodology are presented in appendix III.

We are sending copies of this report to appropriate congressional committees; interested Members of Congress; the Honorable Christine Todd Whitman, Administrator, Environmental Protection Agency; and other interested parties. We will also make copies available to others on request. Should you or your staff need further information, please call me at (202) 512-3841. Key contributors to this report were E. Odell Pace, Jill A. Roth, John A. Wanska, and Richard P. Johnson.
Appendix II: Listing of Deviations on Other Than STAR Fellowship Grants

The grant files documented the following allowed deviations:

- Research grantees were allowed to have the budget period of the grants coincide with the project period end date. In some cases, this deviation allowed an extension beyond EPA’s regulatory limits.
- State and local grantees were not required to provide 5% in non-federal matching funds.
- Grantees were allowed to incur costs prior to the award of the grants.
- Grantees were allowed to deviate from numerous requirements (40 CFR 35.6230(b) and 40 CFR 35.6250(a); 40 CFR 35.6650(b)(2), (3), and (4)).
- Grantee was not required to include a comparison of (1) the percentages of the project completed to the project schedule; (2) estimated funds spent to date to planned expenditures; and (3) the estimated time and funds needed to complete the work to the time and funds remaining.
- Grantee was allowed to change the scope or objective of the project without prior EPA approval.
- Grantee was not required to submit a list of sites at which it planned to take remedial action.
- Grantee was not required to submit a non-site-specific budget for the support activities funded.
- Grantee was allowed to have the budget period of the grant coincide with the project period.
- Grantee was not required to submit a quality assurance plan.

Appendix III: Scope and Methodology

To determine the activities funded by project grants, we identified EPA project grants and then analyzed automated information on grant dollar amounts and grantee type, taken from EPA’s Grants Information Control System, which we obtained from EPA’s Office of Inspector General. To determine how project grants align with EPA’s Results Act goals and objectives, we identified goals and objectives for all project grants awarded in fiscal years 1999 and 2000 from the automated data. We interviewed EPA headquarters and regional officials, including individual project grant officers, regarding how goals and objectives are identified in EPA’s grant award process.
From a universe of 4,717 grants awarded in fiscal years 1999 and 2000, we selected a random sample of 100 grants. We reviewed supporting documentation for these grants and interviewed cognizant EPA officials to assess whether the funded activities were consistent with the activities for the goal(s) and objective(s) that EPA identified as being supported by the grant. To determine the extent to which EPA used its authority to deviate from regulations, we reviewed the same 100 randomly selected grants. In cases where deviations occurred, we obtained additional information regarding the reasons for the deviation. We interviewed EPA officials to determine the circumstances and frequency for using deviations in general and for the specific grants we selected.
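The sample selection described above can be sketched in a few lines of Python. This is purely illustrative and is not GAO's actual sampling procedure; the sequential grant identifiers and the fixed seed are assumptions made for the sketch.

```python
import random

# Illustrative only: draw a simple random sample of 100 grants from a
# universe of 4,717, as described above. Grant IDs are hypothetical.
GRANT_UNIVERSE = 4717
SAMPLE_SIZE = 100

rng = random.Random(2001)  # fixed seed so the draw is reproducible
sample = rng.sample(range(1, GRANT_UNIVERSE + 1), SAMPLE_SIZE)

print(len(sample))       # 100
print(len(set(sample)))  # 100 -> no grant is selected twice
```

Because `random.sample` draws without replacement, each of the 4,717 grants has an equal chance of selection and none appears more than once.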
This report provides information on the Environmental Protection Agency's (EPA) management and oversight of project grants. Specifically, GAO examines (1) the dollar amounts of project grants EPA awarded in fiscal years 1996 through 1999 and the program activities they funded, by grantee type; (2) how the activities funded by the project grants align with the Government Performance and Results Act goals and objectives identified by EPA; and (3) the extent to which EPA uses its authority to deviate from relevant regulations in awarding grants. GAO found that EPA awarded about 17,000 project grants totaling more than $2.8 billion in fiscal years 1996 through 1999. Five categories accounted for nearly 80 percent of all project grant funds: (1) general investigations, surveys, or studies involving air and water quality; (2) research; (3) studies and cleanups of specific hazardous waste sites; (4) nonprofit organizations; and (5) training activities. EPA identified about 82 percent of the $1.4 billion in project grants awarded in fiscal years 1999 and 2000 as supporting four strategic goals under the Results Act. GAO found this to be the case in 93 of the 100 grants reviewed. EPA used its authority to deviate from regulations in awarding 25 of the 100 grants GAO reviewed.
VA has two basic cash disability benefits programs. The compensation program pays monthly benefits to eligible veterans who have service-connected disabilities (injuries or diseases incurred or aggravated while on active military duty). The payment amount is based on the veteran’s degree of disability, regardless of employment status or level of earnings. By contrast, the pension program assists permanently and totally disabled wartime veterans under age 65 who have low incomes and whose disabilities are not service-connected. The payment amount is determined on the basis of financial need. VBA and the Board process and decide veterans’ disability claims and appeals on behalf of the Secretary. The claims process starts when veterans submit claims to one of VBA’s 57 regional offices. (See app. I for the overall flow of claims and appeals processing.) By law, regional offices must assist veterans in supporting their claims. For example, for a compensation claim, the regional office obtains records such as the veteran’s existing service medical records, records of relevant medical treatment or examinations provided at VA health-care facilities, and other relevant records held by a federal department or agency. If necessary, the regional office arranges a medical examination for the claimant or obtains a medical opinion about the claim. 
The regional office adjudicator then must analyze the evidence for each claimed impairment (veterans claim an average of about five impairments per claim); determine whether each claimed impairment is service-connected (VA grants service connection for an average of about three impairments per claim); apply VA’s Rating Schedule, which provides medical criteria for rating the degree to which each service-connected impairment is disabling (disability ratings can range from zero to 100 percent, in 10-percent increments); determine the overall disability rating that results from the combination of service-connected impairments suffered by the veteran; and notify the veteran of the decision. If a veteran disagrees with the regional office’s decision, he or she begins the appeals process by submitting a written Notice of Disagreement to the regional office. During fiscal years 1999-2000, the regional offices annually made an average of about 616,000 decisions involving disability ratings, and veterans submitted Notices of Disagreement in about 9 percent of these decisions. Veterans can disagree with decisions for reasons other than the outright denial of benefits that occurs, for example, in a compensation case when a regional office decides an impairment claimed by a veteran is not service-connected. The veteran also may believe the severity rating assigned to a service-connected impairment is too low and ask for an increase in the rating. In response to a Notice of Disagreement, the regional office provides a further written explanation of the decision, and if the veteran still disagrees, the veteran may appeal to the Board. During fiscal years 1999-2000, about 48 percent of the veterans who filed Notices of Disagreement in decisions involving disability ratings went on to file appeals with the Board. In fiscal year 2001, VBA began nationwide implementation of the Decision Review Officer position in its regional offices. 
Now, before appealing to the Board, a veteran may ask for a review by a Decision Review Officer, who is authorized to grant the contested benefits based on the same case record that the regional office relied on to make the initial decision. VBA believes this process will result in fewer appeals being filed with the Board. Located in Washington, D.C., the Board is an administrative body whose members are attorneys experienced in veterans’ law and in reviewing benefits claims. The Board’s members are divided into four decision teams, with each team having up to 15 Board members and 61 staff attorneys. Each team has primary responsibility for reviewing the appeals that originate in an assigned group of regional offices. Board members’ decisions must be based on the law, regulations, precedent decisions of the courts, and precedent opinions of VA’s General Counsel. During the Board’s appeals process, the veteran or the veteran’s representative may submit new evidence and request a hearing. During fiscal years 1999 and 2000, for all VA programs, the Board annually decided an average of about 35,700 appeals, of which about 32,900 (92 percent) were disability compensation cases. The average appealed compensation case contains three contested issues. As a result, in some cases, the Board member may grant the requested benefits for some issues but deny the requested benefits for others. During fiscal years 1999 and 2000, the Board in its initial decisions on appealed compensation cases granted at least one of the requested benefits in about 24 percent of the cases. In some instances, the Board member finds a case is not ready for a final decision and returns (or remands) the case to the regional office to obtain additional evidence and reconsider the veteran’s claim. During fiscal years 1999 and 2000, respectively, the Board in its initial decisions on appealed compensation cases remanded 38 percent and 34 percent of the cases. 
After obtaining additional evidence for remanded cases, if the regional office still denies the requested benefits, it resubmits the case to the Board for a final decision. If the Board denies benefits or grants less than the maximum benefit available under the law, veterans may appeal to the U.S. Court of Appeals for Veterans Claims. The court is not part of VA and not connected to the Board. During fiscal years 1999 and 2000, veterans filed appeals with the court in an estimated 10 percent of the Board’s decisions. Unlike the Board, the court does not receive new evidence, but considers the Board’s decision, briefs submitted by the veteran and VA, oral arguments, if any, and the case record that VA considered and that the Board had available. The court may dismiss an appeal on procedural grounds such as lack of jurisdiction, but in the cases decided on merit, the court may affirm the Board’s decision (deny benefits), reverse the decision (grant benefits), or remand the decision back to the Board for rework. During fiscal years 1999 and 2000, the court annually decided on merit an average of about 1,800 appealed Board decisions, and in about 67 percent of these cases, the court remanded or reversed the Board’s decisions in part or in whole. Under certain circumstances, a veteran who disagrees with a decision of the court may appeal to the U.S. Court of Appeals for the Federal Circuit and then to the Supreme Court of the United States. In fiscal year 1998, the Board established the first quantitative quality assurance program to evaluate and score the accuracy of its decisions and to collect data to identify areas where the quality of decision-making needs improvement. The accuracy measure used by the Board understates its true accuracy rate because the Board’s accuracy rate calculations include certain deficiencies that would not result in either a reversal or a remand by the court. 
Even so, the Board’s quality assurance program does not capture certain data that potentially could help improve the quality of the Board’s decisions. Such data include information identifying the specific medical issues involved in cases where a disability decision was judged as being in error. Having such data could enhance the Board’s ability to target training for its decision makers. On the basis of the results of the quality assurance program it established in fiscal year 1998, the Board estimated that 89 percent of its decisions were accurate (or “deficiency-free”). Using these results as a baseline, VA established performance accuracy goals for the Board. One of the Board’s strategic performance goals is to make deficiency-free decisions 95 percent of the time. To calculate its estimated overall accuracy rate, the Board does quality reviews of selected Board decisions. We reviewed the Board’s methods for selecting random samples and calculating accuracy rates and concluded that the number of decisions reviewed by the Board was sufficient to meet the Board’s goal for statistical precision in estimating its accuracy rate. However, we brought to the Board’s attention some issues that caused the Board to fall short of proper random sampling and accuracy rate calculation methods, such as not ensuring that decisions made near the end of the fiscal year are sampled or that the results from quality reviews are properly weighted in the accuracy rate calculation formula. We do not believe the overall accuracy rate reported by the Board for fiscal year 2001 would have been materially different if these methodological issues had been corrected earlier; however, if not corrected, these issues potentially could lead to misleading accuracy rate calculations in the future. The Board agreed in principle to correct these issues. As of June 2002, the Board had not yet instituted corrective actions. 
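The effect of counting or excluding nonsubstantive deficiencies in a deficiency-free accuracy rate can be seen in a small sketch. This is purely illustrative: the decision records and deficiency categories below are hypothetical rather than the Board's actual quality-review data, and the calculation ignores the sample weighting discussed above.

```python
def accuracy_rate(reviewed_decisions, excluded_categories=()):
    """Share of reviewed decisions that are deficiency-free, optionally
    ignoring deficiencies in the excluded (e.g., nonsubstantive) categories."""
    deficiency_free = sum(
        1 for deficiencies in reviewed_decisions
        if not any(d not in excluded_categories for d in deficiencies)
    )
    return deficiency_free / len(reviewed_decisions)

# Hypothetical quality-review results: each entry lists the deficiency
# categories charged against one reviewed decision (empty = deficiency-free).
reviewed = [[], [], [], ["format"], ["reasons and bases"]]

print(accuracy_rate(reviewed))                                   # 0.6
print(accuracy_rate(reviewed, excluded_categories=("format",)))  # 0.8
```

As in the Board's fiscal year 2001 recalculation, excluding a category such as "format" can only raise the computed rate, because decisions whose only charged deficiencies fall in excluded categories become deficiency-free.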
According to VA’s performance reports, the Board has come close but has not achieved its annual interim goals for accuracy (see table 1). However, in calculating its reported accuracy rates, the Board includes deficiencies that are not “substantive”—that is, they would not be expected to result in either a remand or a reversal by the court. Consequently, the reported accuracy rates understate the Board’s level of accuracy that would result if only substantive deficiencies were counted in the calculation. Under its quality assurance program, the Board’s quality reviewers assess the accuracy of selected decisions on the basis of six critical areas (see table 2). One error (or deficiency) in any of these six areas means that a decision fails the quality test. However, according to the Board, all six areas would include certain deficiencies that are not substantive. In particular, according to the Board, most deficiencies in the “format” category are not substantive. In fiscal year 2001, the format category accounted for about 38 percent of all recorded deficiencies. At our request, the Board recalculated its accuracy rate for fiscal year 2001, excluding format deficiencies, and the resulting accuracy rate was 92 percent, as compared with the reported accuracy rate of 87 percent. Excluding all other nonsubstantive deficiencies presumably would have resulted in an even higher accuracy rate. In contrast with the Board, beginning in fiscal year 2002, VBA no longer includes nonsubstantive deficiencies in its accuracy rate calculations; however, it continues to monitor them. VBA took this action based on a recommendation by the 2001 VA Claims Processing Task Force, which said that mixing serious errors with less significant deficiencies can obscure what is of real concern. The Board’s quality review program subdivides the six critical areas shown in table 2 into 31 subcategories. 
For example, if a quality reviewer classifies an error as stemming from “reasons and bases,” the reviewer must then indicate whether the error was due to misapplying legal authority, failing to apply appropriate legal authority, using an incorrect standard of proof, or providing an inadequate explanation for the decision. This information is recorded in the Board’s quality review database, providing the Board with data that can be analyzed to identify training needed to improve quality. However, the Board does not record in its quality review database any information on the specific issue that prompted the appeal (such as whether a disability is service-connected) or the specific medical impairment to which an error is related. For example, a quality reviewer might find an error in a Board decision for an appeal that involved four separate medical impairments—two for which the veteran had requested service connection and two others for which he had requested a disability rating increase. On the basis of information that the quality review database currently captures, however, the Board could not determine which of the four impairments the error was related to, nor could the Board determine whether the error was related to a request for service connection or an increased disability rating. This is not the case, however, for Board decisions remanded by the Court of Appeals for Veterans Claims. For these cases, the Board maintains a separate database with information on the reasons that the court remands decisions back to the Board for rework. For each issue that the court remands in a compensation case, the Board records in the database such information as: (1) whether the issue involved a request for service connection or an increased rating, (2) the diagnostic code of the impairment involved in each issue, and (3) the reason for the remand. 
According to Board officials, being able to analyze the court’s reasons for remands by type of decisional issue and type of impairment enhances the Board’s ability to reduce remands from the court through appropriate training. VBA and the Board recognize that in some cases, different adjudicators reviewing the same evidence can make differing judgments on the meaning of the evidence, without either decision necessarily being wrong. In such cases, VBA and the Board instruct quality reviewers not to record an error. A hypothetical case provided by the Board furnishes an example. In this case, a veteran files a claim in 1999 asserting he suffered a back injury during military service but did not seek medical treatment at that time. One of the veteran’s statements says he injured his back during service in 1951, but another says he injured his back in 1953. An adjudicator may find that this discrepancy in dates adversely affects the claimant’s credibility about whether an injury actually occurred in service, but the quality reviewer may consider the discrepancy to be insignificant. Where such judgments are involved, the Board’s and VBA’s quality review programs recognize that variations in judgment are to be expected and are acceptable as long as the degree of variation is within reason. (App. II provides other examples of difficult judgments that could result in decision-making variations and explains VA’s “benefit-of-the-doubt” rule.) The Board and VBA, however, differ in their approaches to collecting information about cases where this type of variation occurs. In such instances, the Board’s quality reviewers note why they believe an alternative decision could have been made and send the explanation to the deciding Board member. However, they do not enter any of this information in the quality review database. 
In contrast, VBA recently instructed its quality reviewers to enter such information in the VBA quality review database, even though no error is recorded in the database. VBA believes that by identifying and analyzing cases in which quality reviewers believed the adjudicator’s judgment was pushing against the boundary of reasonableness, it potentially can identify opportunities to improve the quality of decision making by improving training. Even though evidence suggests decision making across regional office and Board adjudicators may not be consistent, VA does not systematically assess decision-making consistency to determine the degree of variation that occurs for specific impairments and to provide a basis for identifying steps that could be taken, if considered necessary, to reduce such variation. In its 2003 performance plan, VA acknowledged that veterans are concerned about the consistency of disability claims decisions across the 57 regional offices. In a nationwide comparison, VBA projected in its fiscal year 2001 Annual Benefits Report that the average compensation payments per disabled veteran in fiscal year 2002 would range from a low of $5,783 in one state to a high of $9,444 in another state. According to a VBA official, this disparity in average payments per veteran might be due in part to demographic factors such as differences in the average age of veterans in each state. However, this disparity in average payments per veteran also raises the possibility that when veterans in the same age group submit claims for similar medical conditions, the regional office in one state may tend to give lower disability ratings than the regional office in another state. 
Indeed, in 1997, the National Academy of Public Administration reviewed disability claims processing and said VA needed to identify the degree of decision-making variation expected for specific medical issues, set consistency standards, and measure the level of consistency as part of the quality review process or through testing of control cases in multiple regional offices. Furthermore, in 2001, VA’s Claims Processing Task Force said there was an apparent lack of uniformity among regional offices in interpreting and complying with directives from VA headquarters and that VA’s regulations and the procedures manual for regional offices were in dire need of updating. The task force concluded that there was no reasonable assurance that claims decisions would be made as uniformly and fairly as possible to the benefit of the veteran. Even though such concerns and issues exist, VA does not systematically assess the decision-making consistency of regional office adjudicators. Similarly, VA does not assess consistency between decisions made by regional offices and the Board even though evidence suggests this issue may warrant VA’s attention. Because veterans may submit new evidence during the appeals process, one might assume that the Board generally grants benefits denied by regional offices due to the impact of such new evidence. However, an analysis in 1997 of about 50 decisions in which the Board had granted benefits previously denied by regional offices yielded a different viewpoint. Staff from both VBA and the Board reviewed these cases and concluded that most of these Board decisions to grant benefits had been based on the same evidence that the regional offices had considered in reaching their decisions to deny benefits. The reviewers characterized the reason for the Board members’ decisions to grant benefits as a difference of opinion between the Board members and regional office adjudicators in the weighing of evidence. 
Furthermore, even in remanded compensation cases for which regional offices have obtained new evidence in accordance with the Board’s remand instructions and then again denied the benefits, the Board generally has granted benefits in about 26 percent of these cases after they have been resubmitted for a final decision. This seems to indicate that, in these particular cases, Board members in some way differed with regional office adjudicators on the impact of the new evidence obtained by the regional offices before resubmitting the remanded cases to the Board. Available evidence also provides indications that the issue of variations in decision making among the Board members themselves may warrant VA’s attention in studies of consistency. Historically, there have been variances in the rates at which the Board’s four decision teams have remanded decisions to regional offices for rework. No systematic study has been done to explain the variances in remand rates. Board officials said that it is their perception that the remand rates vary among the Board’s decision teams because the quality of claims processing varies among the regional offices for which each team is responsible. Similar concerns about consistency of claims adjudication in the Social Security Administration (SSA) have prompted SSA to begin taking steps to assess consistency issues in its disability program. As we reported in 1997, SSA’s primary effort to improve consistency has focused on decision-making variations between its initial and appellate levels. To gather data on variations between these two levels, SSA instituted a system in 1993 under which it selects random samples of final decisions made by administrative law judges and reviews the entire decisional history of each case at both the initial and appellate levels. The reviewers examine adjudicative and procedural issues to address broad program issues such as whether a claim could have been allowed earlier in the process. 
Data captured through this system have provided a basis for taking steps to clarify decision-making instructions and provide training designed to improve consistency between the initial and appellate levels. However, no systematic evaluations have been done to determine the effectiveness of these actions. In its January 2001 disability management plan, SSA said that it needed to take further steps to promote uniform and consistent disability decisions across all geographic and adjudicative levels. Opportunities exist to improve the quality of the Board’s reporting of accuracy and decision making. The Board includes nonsubstantive deficiencies in its accuracy rate calculation. By doing so, the Board may be obscuring what is of real concern. In addition, the Board’s quality assurance database does not capture data on specific medical disability issues related to the reasons for errors found in Board decisions. Also, in contrast with VBA, the Board’s quality assurance program does not collect information on cases in which quality reviewers do not charge errors but have differences of opinion with judgments made by Board members. We believe that analysis of such data could lead to improvements in quality through improved training or by clarifying regulations, procedures, and policies. Furthermore, because variations in decision making are to be expected due to the difficult judgments that adjudicators often must make, one must ask the questions: For a given medical condition, how much variation in decision making exists and does the degree of variation suggest that VA should take steps to reduce the level of variation? VA, however, does not assess variation in decision making. 
None of the quality review efforts of either VBA or the Board are designed to systematically assess the degree to which veterans with similar medical conditions and circumstances may be receiving different decisional outcomes or to help identify steps that could reduce such variation if necessary. Without ongoing systematic assessments of consistency across the continuum of decision making, VA cannot adequately assure veterans that they can reasonably expect to receive consistent treatment of their claims across all decision-making levels in VA. We recognize that our recommendations will have to be implemented within the context of VA’s current major efforts to reduce a large and persistent backlog of disability claims and appeals and to reduce the average processing time. Nevertheless, we believe it is critical that VA take the necessary steps to support improvements in training and in regulations, procedures, or policies that could enhance the quality of disability decision making across the continuum of adjudication and to help provide adequate assurance to veterans that they will receive consistent and fair decisions as early as possible in the process. Indeed, maintaining and improving quality should be of paramount concern while implementing a major effort to reduce backlogs and processing time. Accordingly, we recommend that the Secretary of VA direct the Chairman of the Board of Veterans’ Appeals to: Revise the quality assurance program so that, similar to VBA, the calculation of accuracy rates will take into account only those deficiencies that would be expected to result in a reversal of a Board decision by the U.S. Court of Appeals for Veterans Claims or result in a remand by the court. 
Revise the Board’s quality assurance program to record information in the quality review database that would enable the Board to systematically analyze case-specific medical disability issues related to specific errors found in Board decisions in the same way that the Board is able to analyze the reasons that the court remands Board decisions. Monitor the experience of VBA’s quality assurance program in collecting and analyzing data on cases in which VBA’s quality reviewers do not record errors but have differences of opinion with regional office adjudicators in the judgments made to reach a decision. If VBA finds that the analysis of such data helps identify training that can improve the quality of decision making, the Board should test such a process in its quality assurance program to assess whether it would enable the Board to identify training that could improve the quality of Board decisions. We also recommend that the Secretary direct the Under Secretary for Benefits and the Chairman of the Board of Veterans’ Appeals to jointly establish a system to regularly assess and measure the degree of consistency across all levels of VA adjudication for specific medical conditions that require adjudicators to make difficult judgments. For example, VA could develop sets of hypothetical claims for specific medical issues, distribute such hypothetical claims to multiple adjudicators at all decision-making levels, and analyze variations in outcomes for each medical issue. Such a system should provide data to determine the degree of variation in decision making and provide a basis to identify ways, if considered necessary, to reduce such variation through training or clarifying and strengthening regulations, procedures, and policies. Such a system should also assess the effectiveness of actions taken to reduce variation. 
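One simple way such a system could quantify variation is sketched below. This is an assumption-laden illustration, not a method VA uses or that we recommend in this specific form: the ratings are hypothetical responses from several adjudicators to a single hypothetical claim, and agreement with the most common rating serves as a crude consistency measure.

```python
from collections import Counter

def modal_agreement(ratings):
    """Fraction of adjudicators who assigned the most common disability
    rating to the same hypothetical claim (1.0 = perfect consistency)."""
    most_common_count = Counter(ratings).most_common(1)[0][1]
    return most_common_count / len(ratings)

# Hypothetical: five adjudicators rate the same hypothetical back-injury
# claim (percent disability ratings in 10-percent increments).
ratings = [30, 30, 40, 30, 50]
print(modal_agreement(ratings))  # 0.6 -> three of five agreed on 30 percent
```

Tracking a measure like this per medical condition, across regional offices and the Board, would give VA a baseline for deciding where variation is large enough to warrant clarified guidance or targeted training.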
If departmental consistency reviews reveal any systematic differences among VA decision makers in the application of disability law, regulations, or court decisions, the Secretary should, to the extent that policy clarifications by VBA cannot resolve such differences, direct VA’s General Counsel to resolve these differences through precedent legal opinions if possible. We received written comments on a draft of this report from VA (see app. III). In its comments, VA concurred fully or in principle with our recommendations. With regard to our first recommendation, VA said that the Board intends to revise its quality review system to count only substantive errors for computational and benchmarking purposes but will continue to track all errors. On the basis of VA’s comments, we also modified the report to accurately reflect the standard of review employed by the U.S. Court of Appeals for Veterans Claims in reviewing Board decisions. With regard to our second recommendation, VA said that it would use its Veterans Appeals Control Locator System to gather information on case-specific medical disability issues related to specific errors found in Board decisions. VA questioned our basis for concluding that tracking such information will yield useful data for improving the adjudication system. As stated in the draft report, we based our recommendation on the fact that the Board has already concluded that such information is beneficial for analyzing the reasons for remands from the Court of Appeals for Veterans Claims. With regard to our third recommendation, VA said representatives of the Board and VBA will meet so that a system may be established for the Board to access and review VBA’s methodology for assessing, reporting, and evaluating instances of “difference of opinion” between the quality reviewer and the decision maker. In its comments, VA concurred in principle with our fourth recommendation. 
VA agreed that consistency is an important goal and acknowledged that it has work to do to achieve it. However, VA was silent on how it would measure consistency for specific medical conditions that require adjudicators to make difficult judgments. Instead, VA described the kinds of actions underway that it believes will generally reduce inconsistency. While we support these efforts, we maintain that without a way to evaluate and measure consistency, VA will be unable to determine the extent to which such efforts actually improve consistency of decision-making across all levels of VA adjudication now and over time. Neither will VA have the information needed to identify ways to reduce decision-making variations for specific medical conditions, if considered necessary. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of the Department of Veterans Affairs, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have questions about this report, please call me on (202) 512-7101 or Irene Chu on (202) 512-7102. Other key contributors were Ira Spears, Steve Morris, Patrick diBattista, and Mark Ramage.

Appendix I: Flow of Claims and Appeals Processing

57 Regional Offices Decide Claims and Notify Veterans of Decisions (estimated disposition of 100,000 compensation claims filed with regional offices)
1. Veterans either agree with regional office decisions or take no further action in 90,880 cases.
2. Veterans submit Notices of Disagreement to regional offices in 9,120 cases. In 3,657 of these cases, veterans go on to file appeals with the Board.

Board of Veterans' Appeals: Board Members Review Regional Office Decisions Appealed by Veterans (estimated disposition of 3,657 compensation cases appealed to the Board)
3. Board remands 1,311 cases to regional offices to develop further evidence and reconsider their decisions (Board remands 211 of these cases twice).
4. Board grants at least one requested benefit in 1,153 cases (Board makes 269 of these grants after regional offices resubmit remanded cases).
5. Board denies all benefits in 1,956 cases (Board makes 494 of these denials after regional offices resubmit remands).
6. Regional offices obtain more evidence but deny requested benefits in 839 cases and resubmit these cases to the Board for a final decision (of the 211 cases remanded twice, regional offices deny benefits in 135 and resubmit them to the Board).
7. Regional offices obtain more evidence and grant requested benefits in 339 cases (47 of these 339 grants occur after the second remand).
8. Veterans withdraw or regional offices close 209 cases (28 of these 209 withdrawals or closures occur after the second remand).
9. Veterans appeal 307 cases to the U.S. Court of Appeals for Veterans Claims.

U.S. Court of Appeals for Veterans Claims: Court Reviews Board Decisions Appealed by Veterans (estimated disposition of 307 compensation cases appealed to the court)
10. Court dismisses 74 cases on procedural grounds.
11. Court affirms Board decisions in whole in 77 cases (all requested benefits denied).
12. In whole or in part, court reverses Board decisions (grants requested benefits) or remands Board decisions in 156 cases.
13. Veterans appeal certain cases to the U.S. Court of Appeals for the Federal Circuit. The Board reworks the cases remanded by the court.

The estimated disposition by VA’s regional offices of the 100,000 claims (in boxes 1 and 2) is based on data for claims involving disability ratings for fiscal years 1997 to 2000. 
During those years, veterans submitted Notices of Disagreement in about 9 percent of the regional office decisions and went on to file appeals with the Board in about 40 percent of the cases in which they had submitted such notices. On the basis of Board data for fiscal years 1999 and 2000, in its initial decisions on appealed compensation cases, the Board: (1) granted at least one of the requested benefits in about 24 percent of the cases, (2) denied all requested benefits in about 40 percent of the cases, and (3) remanded about 36 percent of the cases to regional offices for rework. After obtaining the additional evidence required by the Board for remanded cases, the regional offices granted requested benefits in about 22 percent of the remanded cases and denied requested benefits in about 64 percent of the cases. After regional offices resubmitted denied cases to the Board for a final decision, the Board granted at least one of the requested benefits in about 26 percent of the cases, denied all benefits in about 49 percent, and remanded about 25 percent once again to regional offices for further rework. For this illustration, we assumed that the Board did not remand a case more than two times. The estimate of 307 cases appealed to the U.S. Court of Appeals for Veterans Claims (in box 9), the court's estimated disposition of these 307 cases (in boxes 10, 11, and 12), and the estimated number of decisions appealed to the U.S. Court of Appeals for the Federal Circuit (in box 13) are based on fiscal years 1999 and 2000 data from the court's annual reports.

Appendix II: Board of Veterans' Appeals Illustrations of Difficult Judgments Resulting in Decision-Making Variations

Examples of difficult judgments: To be granted benefits for post-traumatic stress disorder, a veteran's claim must have credible evidence that a stressor occurred during military service.
Assume the record shows a claimant served in Vietnam as a supply specialist, and he identified mortar attacks as a stressor. Reports prepared by his military unit in Vietnam indicate a single enemy mortar attack occurred where the claimant was stationed. The claimant's testimony was vague about the number and timing of the attacks. One adjudicator may rely on the unit's reports and conclude the claimant engaged in combat and is entitled to have his lay statements accepted without further corroboration as satisfactory evidence of the in-service stressor. Another adjudicator may conclude that the claimant is not credible as to exposure to enemy fire and require other credible supporting evidence that the in-service stressor actually occurred. Assume an appeal for either service connection or a higher disability rating has two conflicting medical opinions: one provided by a medical specialist who reviewed the claim file but did not actually examine the veteran, and a second provided by a medical generalist who reviewed the file and examined the veteran. One adjudicator could assign more weight to the specialist's opinion, while another could find the generalist's opinion to be more persuasive. Thus, depending on which medical opinion is given more weight, one adjudicator could grant the claim and the other deny it. Yet a third adjudicator could find both opinions to be equally probative and conclude that VA's “benefit-of-the-doubt” rule requires that he decide in favor of the veteran's request for either service connection or a higher disability rating. Under the benefit-of-the-doubt rule, if an adjudicator concludes that there is an approximate balance between the evidence for and the evidence against a veteran's claim, the adjudicator must decide in favor of the veteran. The Rating Schedule does not provide objective criteria for rating the degree to which certain spinal impairments limit a claimant's motion.
The adjudicator must assess the evidence and draw a conclusion as to whether the limitation of motion falls into one of three severity categories: “slight, moderate, or severe.” Similarly, in assessing the severity of incomplete paralysis, the adjudicator must draw a conclusion as to whether the veteran's incomplete paralysis falls into one of three severity categories: “mild, moderate, or severe.” Each severity category in itself encompasses a range of severity, and the judgment as to whether a claimant's condition is severe enough to cross over from one severity range into the next could vary in the minds of different adjudicators. The Rating Schedule provides a formula for rating the severity of a veteran's occupational and social impairment due to a variety of mental disorders. However, the formula actually is a nonquantitative, behaviorally oriented framework for guiding adjudicators in making judgments and drawing conclusions as to which of the following characterizations best describes the degree to which a claimant is occupationally and socially impaired: (1) totally impaired; (2) deficient in most areas, such as work, school, family relations, judgment, thinking, or mood; (3) reduced reliability and productivity; (4) occasional decrease in work efficiency and intermittent periods of inability to perform occupational tasks; (5) mild or transient symptoms that decrease work efficiency and ability to perform occupational tasks only during periods of significant stress, or symptoms controlled by continuous medication; and (6) not severe enough to interfere with occupational or social functioning or to require continuous medication.
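The benefit-of-the-doubt rule described in this appendix is, at its core, a tie-breaking decision rule: when the evidence for and against a claim is in approximate balance, the adjudicator must find for the veteran. A schematic sketch follows; the numeric weights and the balance tolerance are illustrative assumptions, not VA criteria, and the real judgment is qualitative rather than numeric:

```python
def apply_benefit_of_the_doubt(weight_for: float, weight_against: float,
                               balance_tolerance: float = 0.1) -> str:
    """Schematic tie-breaking rule: when evidence for and against a claim
    is in approximate balance, doubt is resolved in the veteran's favor.
    Weights and tolerance are illustrative assumptions, not VA criteria."""
    if abs(weight_for - weight_against) <= balance_tolerance:
        return "grant"  # approximate balance: decide for the veteran
    return "grant" if weight_for > weight_against else "deny"
```

The sketch makes the variability discussed above concrete: two adjudicators who weigh the same conflicting medical opinions differently can land on opposite sides of the balance line, while a third who finds the opinions equally probative must grant.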
For fiscal year 2002, the Department of Veterans Affairs (VA) will pay $25 billion in cash disability benefits to 3.3 million disabled veterans and their families. Veterans who are dissatisfied with VA's 57 regional offices' decisions may file appeals with VA's Board of Veterans' Appeals. In about half of such appeals, the Board has either granted the benefits denied or returned the cases to regional offices for rework. Additionally, VA reported an accuracy rate of less than 70 percent for regional office disability decisions when it tested a new quality assurance program in fiscal year 1998. When the Board itself denies benefits, veterans may appeal to the U.S. Court of Appeals for Veterans Claims. In over half of these appeals, the Court has either granted the benefits denied by the Board or returned the decisions to the Board for rework. In fiscal year 1998, the Board of Veterans' Appeals established a quantitative evaluation program to score its decisionmaking accuracy and collect data to improve decisionmaking. The accuracy measure used by the Board understates its true accuracy rate because the calculations include certain deficiencies, such as errors in a written decision's format, which would not result in either a reversal or a remand by the Court. VA does not assess the consistency of decisionmaking across regional office and Board disability adjudicators even though VA acknowledges that in many cases adjudicators of equal competence could review the same evidence but render different decisions. Although available evidence indicates that variations in decisionmaking occur across all levels of VA adjudication, VA does not conduct systematic assessments to determine the degree of variation that occurs for specific impairments and to provide a basis for determining ways to reduce such variations.
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business. It is especially important for government agencies, where maintaining the public's trust is essential. The dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet have revolutionized the way our government, our nation, and much of the world communicates and conducts business. Although this expansion has created many benefits for agencies such as FHFA in achieving their missions and providing information to the public, it also exposes federal networks and systems to various threats. Without proper safeguards, computer systems are vulnerable to individuals and groups with malicious intent who can intrude and use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. Concerns about risks to these systems are well-founded for a number of reasons, including the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, and steady advances in the sophistication and effectiveness of attack technology. The Federal Bureau of Investigation has identified multiple sources of threats, including foreign nation states engaged in intelligence gathering and information warfare, domestic criminals, hackers, virus writers, and disgruntled employees or contractors working within an organization. In addition, the U.S.
Secret Service and the CERT® Coordination Center studied insider threats in the government sector and stated in a January 2008 report that “government sector insiders have the potential to pose a substantial threat by virtue of their knowledge of, and access to, employer systems and/or databases.” Our previous reports, and those by federal Inspectors General, describe persistent information security weaknesses that place federal agencies at risk of disruption, fraud, or inappropriate disclosure of sensitive information. Accordingly, we have designated information security as a governmentwide high-risk area since 1997, most recently in 2009. Recognizing the importance of securing federal agencies’ information systems, Congress enacted the Federal Information Security Management Act (FISMA) in December 2002 to strengthen the security of information and systems within federal agencies. FISMA requires each agency to develop, document, and implement an agencywide information security program for the information and information systems that support the operations and assets of the agency, using a risk-based approach to information security management. Such a program includes assessing risk; developing and implementing cost-effective security plans, policies, and procedures; providing specialized training; testing and evaluating the effectiveness of controls; planning, implementing, evaluating, and documenting remedial actions to address information security deficiencies; and ensuring continuity of operations. The Housing and Economic Recovery Act of 2008 created the FHFA, an independent federal regulatory agency resulting from the statutory merger of the Federal Housing Finance Board (FHFB) and the Office of Federal Housing Enterprise Oversight (OFHEO). FHFA absorbed the powers and regulatory authority of both entities, with expanded legal and regulatory authority. 
The act also gave FHFA the responsibility for, among other things, the supervision and oversight of Fannie Mae, Freddie Mac, and the 12 federal home loan banks. Specifically, the agency was assigned responsibility for ensuring that each of the regulated entities operates in a fiscally safe and sound manner, including maintenance of adequate capital and internal controls, and carries out its housing and community development finance mission. FHFA is a small government agency with a workforce that includes economists, market analysts, examiners, subject matter experts, technology specialists, accountants, and attorneys. FHFA had a staff of about 430 employees at the end of fiscal year 2009. During fiscal year 2009, OFHEO’s and FHFB’s personnel, property, and program activities, and certain employees and activities of the Department of Housing and Urban Development (HUD), were transferred to FHFA. The assets, liabilities, and financial transactions of OFHEO and FHFB were also consolidated into FHFA. To support these activities, FHFA began unifying the agency’s information technology (IT) infrastructure operations, including integrating its general support systems, and has made substantial progress. This effort included implementing an integrated e-mail messaging system, consolidating software licenses and services, eliminating duplication of information systems and sources, and unifying internal customer service operations. FHFA also unified its financial systems. FHFA uses the National Finance Center, a service provider within the Department of Agriculture, for its payroll and personnel processing. During fiscal year 2009, the agency coordinated programming and systems changes with the National Finance Center to achieve a transition from two separate systems into a unified payroll and processing system for the agency with integration completed in July 2009. FHFA had been using legacy financial management systems and processes from OFHEO and FHFB. 
In fiscal year 2009, FHFA completed outsourcing of its financial management services to the Treasury Department’s Bureau of the Public Debt (BPD) Administrative Resource Center and a new financial management system (FMS), which became operational in July 2009. FMS provides the agency with an integrated system for its accounting, procurement, and travel activities. The system uses Oracle Corporation’s hosting service in Austin, Texas. As the commercial hosting facility for the Administrative Resource Center’s financial management services, Oracle staff serve as database and systems administrators and provide backup and recovery services for FHFA’s financial information. A basic management objective for any organization is to protect the resources that support its critical operations from unauthorized access. Organizations accomplish this objective by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. Such controls include both logical access and physical access controls. Logical access controls include requiring users to authenticate themselves and limiting the files and other resources that authenticated users can access and the actions that these users can execute. Physical access controls involve restricting physical access to computer resources and protecting these resources from intentional or unintentional loss or impairment. Without adequate access controls, unauthorized individuals, including external intruders and former employees, can surreptitiously read and copy sensitive information and make undetected changes or deletions for malicious purposes or personal gain. In addition, authorized users can intentionally or unintentionally read, add, delete, modify, or execute changes that are outside their span of authority. 
FHFA has multiple deficiencies in the access controls intended to restrict logical and physical access to the agency's information and systems. A major reason for these control deficiencies was that FHFA did not fully implement key activities of its information security program. If left uncorrected, the deficiencies increase the risk that unauthorized individuals may gain access to FHFA computing resources, programs, information, and facilities. Authorization is the process of granting or denying access rights and permissions to a protected resource, such as a network, a system, an application, a function, or a file. A key component of granting or denying access rights is the concept of “least privilege,” which is a basic principle for securing computer resources and information. This principle means that users are granted only those access rights and permissions they need to perform their official duties. To restrict legitimate users' access to only those programs and files they need to do their work, organizations establish access rights and permissions. “User rights” are allowable actions that can be assigned to users or to groups of users. File and directory permissions are rules that regulate which users can access a particular file or directory and the extent of that access. To avoid unintentionally authorizing users' access to sensitive files and directories, an organization must give careful consideration to its assignment of rights and permissions. Furthermore, National Institute of Standards and Technology (NIST) Special Publication 800-53 states that system access should be granted based on a valid access authorization and intended system usage, and that the most restrictive access needed by users to accounts, files, and directories be enforced. Finally, FHFA policy requires that information systems enforce the most restrictive set of rights needed by users to perform their assigned duties.
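Enforcement of a least-privilege baseline on files and directories can be partly automated with a periodic permissions sweep. A minimal sketch follows; the owner-only-write baseline is an assumed policy for illustration, not FHFA's actual standard, and it uses POSIX mode bits rather than the Windows ACLs an agency file server would likely involve:

```python
import os
import stat


def find_overly_permissive(root: str) -> list[str]:
    """Walk a directory tree and flag files whose mode grants group or
    world write access, violating an assumed owner-only-write baseline."""
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & (stat.S_IWGRP | stat.S_IWOTH):  # group- or world-writable
                flagged.append(path)
    return sorted(flagged)
```

Run on a schedule against shared drives, such a sweep gives administrators a concrete list of files whose permissions have drifted away from the most restrictive access users actually need.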
FHFA implemented numerous controls to prevent, limit, and detect logical access to its financial systems and information. For example, it enforced the use of (1) network user names and complex passwords and (2) two-factor authentication for remote access to FHFA's networks. In addition, wireless access to the network is prohibited inside FHFA facilities unless approved by the Chief Information Officer or the Chief Information Security Officer. However, deficiencies in controlling logical access diminished the effectiveness of these controls and placed information resources at risk. For example, FHFA did not always maintain authorization records for network and system access, enforce the most restrictive access needed by users on shared network files and directories, and restrict access to sensitive system resources. To illustrate: FHFA did not maintain network access authorizations for every agency network user, and authorization records contained notes indicating that the records were incomplete. Specifically, the agency could not provide authorization for 20 of 30 users reviewed. If network and system access authorizations are not fully documented and monitored, increased risk exists that users may be granted unauthorized and unintended network and system access. FHFA established server files and directories that allowed network users to access agency and regulated-entity confidential information even though such users did not have a business need for this information. To illustrate, using network accounts with access privileges normally granted to all network end users, we were able to access sensitive and confidential regulatory information—including internal meeting notes, a mortgage market analysis, and a liquidity report for a regulated entity—on a server that hosted an FHFA examiner support system. Additionally, we were able to read documents labeled confidential on a shared drive.
The network accounts were also unnecessarily given the rights to access and modify database files on a system the agency uses for financial analysis. By not restricting access to this confidential information to only personnel with an authorized need for access, FHFA risks the possibility that sensitive information could be used for unintended purposes, which could impact the ability of the agency to carry out its organizational mission. FHFA did not always sufficiently restrict system rights to only those needed by users to perform their assigned duties. For example, the agency did not sufficiently restrict user access to privileged accounts. Local user network accounts had rights that permitted the user to create new local workstation accounts and then escalate these accounts to have local administrator privileges. These accounts could then be used to create privileged accounts on other agency workstations by remotely connecting to them. This would allow malicious insiders to grant themselves or others access to sensitive information technology and communications resources. Local administrator accounts could also be used to install unauthorized software that could disrupt agency operations and capture various user credentials, such as those used to access the agency’s financial applications. The Chief Information Officer’s office stated that this deficiency existed because users were given privileged access to their workstations to facilitate the agency’s integration of its general support systems. It also stated the privileged access was only intended for temporary use and the fact that the access was not removed after the integration phase was completed was an error. FHFA informed us it is currently developing an access control procedure to revalidate user access levels for network and system access. FHFA plans to finalize this procedure as part of future phases of integrating its general support systems. 
According to agency officials, this should occur by June 2010. Officials also said that access has been restricted to (1) administrators, (2) application users, or (3) specific agency personnel based on input from information owners. However, until these control procedures are fully developed, effectively implemented, and continuously monitored, FHFA will remain at increased risk of individuals gaining unauthorized access to information resources. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls involve restricting physical access to computer resources and sensitive information, usually by limiting access to the buildings and rooms in which the resources are housed and periodically reviewing access rights granted to ensure that access continues to be appropriate based on established criteria. NIST policy requires that federal organizations implement physical security and environmental safety controls to protect employees and contractors, information systems, and the facilities in which they are located. FHFA policy also requires access controls for deterring, detecting, monitoring, restricting, and regulating access to areas housing sensitive IT equipment and information. FHFA effectively secured some of its sensitive areas and computer equipment and took other steps to provide physical security and environmental safety. For example, FHFA issued electronic badges to help control access to many of its sensitive and restricted areas. The agency also drafted procedures to guide staff in securing their office space and protecting sensitive information. In addition, the agency implemented environmental and safety controls such as temperature and humidity controls, as well as emergency lighting to protect its staff and sensitive IT resources. 
However, FHFA did not effectively (1) secure areas with IT equipment, (2) complete physical security and environmental control policies, (3) perform physical security risk assessments, (4) authorize and control physical access to resources and information, (5) detect potential security incidents, (6) implement a visitor control program, (7) enforce physical security safeguards, (8) secure locations that support computer operations, or (9) implement fire protection controls. Sensitive areas at FHFA were not sufficiently secured. NIST Special Publication 800-53 requires that federal organizations control physical access points, including designated entry and exit points, to the facility where information systems reside. NIST also requires that organizations enforce stringent physical access measures for areas within a facility containing large concentrations of information system components, such as server rooms and communications centers. NIST further requires that organizations position information system components in locations within its facilities to minimize the opportunity for unauthorized access. In addition, FHFA policy requires that access to its facilities housing sensitive IT equipment and information be limited to authorized personnel and that its employees take steps to prevent unauthorized access or disclosure of information. However, numerous instances existed in which FHFA did not sufficiently secure its facilities. During our testing, we were able to obtain unauthorized access from outside FHFA facilities into its interior space containing sensitive information and IT equipment. Entrance security. Security for building entrances was not sufficient. We were able to obtain unauthorized access to FHFA’s facilities on three different dates when we performed unescorted visits. Guards were either not on duty or did not inspect credentials and verify identities at each of the agency’s three downtown Washington, D.C., buildings. 
Two locations had concierge staff in their lobbies during regular business hours, but they did not require or check credentials. Agency staff were not present at these locations during early morning visits on two separate dates. A security officer was present during one visit and permitted us access with an expired badge. Guards on duty at one location did not require that we display identification during multiple visits to the facility. Further, no magnetometers or X-ray machines were available, nor did we observe visitors being searched at any location, creating the potential that an adversary could bring dangerous materials (e.g., firearms, explosives, or chemical and biological agents) into these facilities without being detected, challenged, or hindered from entering. Interior security. Office space at each of the three FHFA Washington, D.C., buildings containing sensitive documents and IT equipment was either unsecured or had very weak security features. We obtained entry to FHFA interior space by pushing on interior doors, using commonly available items to defeat security mechanisms, or walking behind employees. On one visit to office space at an agency location, we walked past inattentive guards who did not challenge us and walked through unsecured interior doors to obtain access. Inside the secured space, many agency staff left their offices unsecured, including some who left sensitive information on their desks. Computer room security. FHFA space containing sensitive computer equipment was not appropriately secured. We were able to obtain entry to an agency server room and storage area on three separate occasions by using commonly available items. This security deficiency was further compounded because the agency located the server room near an elevator area such that the public could easily obtain access to the general area where the server room is located. 
Because areas containing sensitive IT equipment and information were not appropriately secured, FHFA has less assurance that computing resources are protected from inadvertent or deliberate misuse including fraud or destruction. NIST Special Publication 800-53 requires that organizations develop formal documented physical security policies and procedures to facilitate the implementation of physical and environmental protection controls. NIST also requires that these policies be consistent with all applicable mandates and regulations. However, FHFA’s physical security and environmental control policies for the protection of its assets—including sensitive computer equipment, as well as employees, contractors, visitors, and the general public—were incomplete. FHFA policies did not adequately describe requirements for physically protecting IT equipment in sensitive locations. For example, FHFA policies did not describe how to respond to a physical security intrusion or report suspected or confirmed breaches in physical security; require that computer room authorization lists be periodically reviewed to determine if staff previously authorized access still require access or should be removed from the lists; and provide clear and consistent guidance for developing and implementing environmental safety controls, such as fire protection and emergency power and lighting for its facilities housing computer rooms. Until such policies are approved and implemented, FHFA has less assurance that its staff has sufficient and appropriate guidance to effectively and consistently protect its computing resources from inadvertent or deliberate misuse, including fraud or destruction. Identifying and assessing physical security risks are essential to determining what controls are required and what levels of resources should be expended on controls. 
NIST requires that organizations assess physical security risks to their facilities when they perform required risk assessments of their information systems. According to NIST Special Publication 800-30, the physical security environment of information systems should be considered when selecting cost-effective security controls. However, FHFA did not perform physical security risk assessments for its three Washington, D.C., facilities that house computer rooms and sensitive information. Although FHFA officials stated that the landlords of their leased facilities performed risk assessments, they acknowledged that the assessments did not cover the space FHFA uses, nor did FHFA obtain and review those assessments. Until risk assessments are performed and used to help determine what physical security controls should be implemented, FHFA has less assurance that computing and other resources are consistently and effectively protected from inadvertent or deliberate misuse. NIST requires that organizations control all physical access points to their computer facilities and verify individual access authorizations. However, at one of its locations, FHFA did not fully control physical access authorizations to facilities containing sensitive computer resources and information and did not maintain a current list of personnel with authorized access to its facilities' server rooms. Further, FHFA did not periodically review the authorization lists to determine if staff who were previously authorized access to the server rooms still required access or could be removed from the list. Several instances occurred where individuals inappropriately entered sensitive areas. For example: Seven individuals accessed four rooms containing IT equipment without authorization. Seven access cards with generic names were used to access two rooms containing sensitive IT equipment; FHFA was unable to identify who actually used the cards and accessed the rooms. Someone used a terminated employee's access card seven times to access two rooms containing sensitive IT equipment; FHFA was unable to determine who used the card and accessed the rooms. And FHFA's landlord for one facility had the ability to grant physical access to sensitive IT areas and granted non-FHFA individuals access to the IT workroom without the agency's knowledge; physical access logs showed that five of the landlord's staff were not on FHFA's authorization list but had entered the workroom without agency knowledge. As a result of these collective deficiencies, sensitive areas were accessed by unauthorized individuals and are at increased risk of further unauthorized access that could result in critical computing resources and sensitive information being inadvertently or deliberately misused or destroyed. NIST Special Publication 800-53 requires that organizations monitor physical access to their information systems to detect and respond to physical security incidents. For higher-risk areas such as computer rooms, NIST requires organizations to monitor real-time intrusion alarms and surveillance equipment and/or employ automated mechanisms to recognize potential intrusions. FHFA policy also requires that controls be implemented to detect and monitor access to areas housing sensitive IT equipment and information. However, FHFA did not have processes and procedures, or in some instances, surveillance equipment, to monitor physical access to its Washington, D.C., computer rooms and areas containing sensitive documents so that it could detect and respond to physical security incidents. FHFA did not have monitoring or surveillance equipment, such as closed-circuit television at entrance doors, nor were the doors centrally or locally alarmed at two of the locations.
Additionally, agency staff members were not reviewing access logs to sensitive IT areas, as required by NIST, and there was no procedure in place to guide such reviews. If agency staff had reviewed access logs, they may have been able to ascertain that unauthorized individuals were actually accessing agency computer rooms as discussed above. Further, the monitoring system that FHFA was using did not have the ability to generate physical access logs for the primary server room at one location. As a result, increased risk exists that unauthorized access and physical security incidents would not be detected or effectively investigated. NIST Special Publication 800-53 requires that organizations properly authenticate visitors before they can access facilities containing sensitive information systems. FHFA policy also requires that all visitors be escorted and sign in and out while visiting FHFA facilities, with these records being maintained for at least one year. As required by NIST, these records should include the name, signature, and organization of the visitor; form(s) of identification; date of access; times of entry and departure; purpose of the visit; and name/organization of the person visited. However, FHFA had no visitor control practices in place at one of its facilities. During three unaccompanied visits to this location we obtained access to and roamed freely throughout FHFA space without any identification or escort, and were not challenged by any staff. Further, FHFA did not require visitors to sign in or out, nor did it maintain visitor access records to its computer room or office space at one facility and its computer room at another facility. As a result, the agency was at increased risk of unauthorized visitors gaining access to sensitive areas and inadvertently or deliberately misusing or destroying critical computing resources. 
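The periodic access-log reviews that NIST calls for, and that FHFA staff were not performing, can be partly automated by reconciling badge events against the current authorization list. A minimal sketch follows; the record shape and field names are assumptions for illustration, not FHFA's actual log format:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class BadgeEvent:
    card_id: str   # credential that swiped
    room: str      # controlled area entered
    time: str      # timestamp of the swipe


def review_access_log(events, authorized_by_room, deactivated_cards):
    """Flag badge events that warrant investigation: swipes by deactivated
    credentials (e.g., a terminated employee's card) and swipes by cards
    not on the room's authorization list (e.g., generic or landlord cards)."""
    findings = []
    for event in events:
        if event.card_id in deactivated_cards:
            findings.append(("deactivated card used", event))
        elif event.card_id not in authorized_by_room.get(event.room, set()):
            findings.append(("card not on authorization list", event))
    return findings
```

Run on a fixed cycle against each sensitive area's logs, a reconciliation of this kind could have surfaced the terminated-employee and generic-card entries described above well before an external audit did.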
NIST Special Publication 800-53 requires that organizations control physical access to areas containing sensitive information and system devices. NIST also requires that organizations verify individual access authorizations before granting access to its facilities. However, FHFA employees did not always enforce physical security safeguards. For example, agency employees did not always use their badges to obtain access to electronically secured interior spaces. We observed agency staff who piggybacked into secured spaces when another individual held the door open for them on multiple occasions during three separate visits to FHFA locations. We also piggybacked into secured FHFA interior spaces behind other agency staff numerous times without any visible agency or visitor credentials. At no time were we challenged by FHFA staff and, in several cases, agency staff held doors open for us to allow our entry without authenticating our identity and authority. In addition, on three separate visits to one agency location, we easily opened entry doors by applying slight force and a local alarm sounded. However, agency employees who were in the area either did not notice or disregarded the alarm when we entered the area. Because its employees did not sufficiently enforce effective physical security, FHFA has less assurance that computing resources and sensitive information are protected from inadvertent or deliberate misuse. NIST Special Publication 800-53 requires that organizations control access to information systems distribution and transmission lines within organizational facilities and protect power equipment and power cabling for information systems from damage and destruction. However, FHFA did not secure two closets at one of its facilities that contain telecommunications wiring that supports its computer operations. FHFA also did not secure an electrical closet that contains power equipment and cabling at the same location. 
The power equipment controlled electrical power to FHFA’s server room and office space. The electrical closet also contained a large amount of miscellaneous construction materials. After we notified FHFA of this problem, agency personnel stated that they had secured the closets and agreed to remove the stored materials, but two subsequent reinspections showed that the electrical closet remained unsecured and cluttered with construction materials. Because these spaces were not sufficiently secured, FHFA has less assurance that computer operations are protected from inadvertent or deliberate misuse including fraud or destruction. FHFA did not adequately establish and implement controls to protect a server room containing sensitive IT equipment from potential fire damage. NIST Special Publication 800-53 requires that organizations employ and maintain fire suppression and detection devices for information systems. Agency policy also requires the use of controls to safeguard assets against various hazards including fire. However, FHFA did not have adequate fire suppression for its server room at one facility. According to FHFA staff, a fire suppression system was installed but did not function for over a year prior to our visit because repairs to the server room were required before the system could be activated. Subsequent to our visit, FHFA activated the fire suppression system in August 2009. Prior to this activation, sensitive IT equipment was at risk of damage which threatened the availability of critical information resources and information. To their credit, senior FHFA officials acknowledged these physical security and environmental safety control shortcomings and told us that they have taken steps or are planning to take steps to mitigate most of the deficiencies. However, until they fully implement physical security controls, FHFA computer facilities and resources remain vulnerable to espionage, sabotage, damage, and theft. 
A key reason for the information security deficiencies in FHFA’s information systems discussed previously is that it has not yet fully implemented its agencywide information security program to ensure that controls are appropriately designed and operating effectively. FISMA requires each agency to develop, document, and implement an information security program that, among other things, includes: policies and procedures that (1) are based on risk assessments, (2) cost effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements; and plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. In addition, FISMA requires that the agency information security program encompass the information and information systems supporting the operations and assets of the agency that are provided or managed by another agency, contractor, or other source. FHFA has made important progress in developing and documenting its policies and procedures for the agency’s information security program. For example, it has published an Information Security Policy Handbook. The agency has begun putting procedures from the handbook in place and expects to fully implement these in fiscal year 2010. FHFA also developed and issued the agency’s Breach Notification Policy and Plan for security incidents involving personally identifiable information. The agency also addressed security-related weaknesses for systems noted in the prior year OFHEO and FHFB FISMA reviews and completed a review to validate and document system configurations. FHFA also maintained current security certification and accreditations on major financial systems that we reviewed. 
The certification and accreditation packages included evidence that FHFA tested management, operational, and technical controls and prepared security plans for its networks, facilities, and systems. According to FHFA, the agency also upgraded its Security Log Management System to monitor production servers and network device logs and security events. In addition, as part of a risk management approach to manage information technology assets, the agency implemented comprehensive scanning of production systems on a monthly basis to identify and correct system vulnerabilities. During the year, the agency expanded and improved its information security awareness training, providing a required automated training program to all employees and contractors. However, policies, procedures, plans, and technical standards related to information security did not always reflect the current agency operating environment; and FHFA did not always effectively monitor its systems. A key task in developing an effective information security program is to establish and implement policies, procedures, plans, and technical standards that govern security over an agency’s computing environment. Developing, documenting, and implementing security policies are the primary mechanisms by which management communicates its views and requirements; these policies also serve as the basis for adopting specific procedures and technical controls. According to NIST Special Publication 800-53, these policies should include separation of incompatible duties, configuration management policies and procedures, and contingency plans. Configuration management is an important control that involves the identification and management of security features for all hardware and software components of an information system at a given point and systematically controls changes to that configuration during the system’s life cycle. 
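The monthly production scanning mentioned above typically yields more findings than can be remediated at once, so triage matters. The following is a minimal sketch, not FHFA's actual tooling; the record format, host names, and severity threshold are assumptions. It groups high-severity scan findings by host so the most exposed systems surface first:

```python
# Illustrative sketch: triaging vulnerability-scan output by host and
# CVSS severity. The finding records and threshold are hypothetical.
from collections import defaultdict

def summarize_scan(scan_findings, min_severity=7.0):
    """Return (host, [finding names]) pairs at or above the severity
    threshold, sorted so hosts with the most findings come first."""
    per_host = defaultdict(list)
    for f in scan_findings:
        if f["cvss"] >= min_severity:
            per_host[f["host"]].append(f["plugin"])
    return sorted(per_host.items(), key=lambda kv: len(kv[1]), reverse=True)

# Hypothetical sample findings.
findings = [
    {"host": "prod-db-01", "plugin": "unpatched-oracle", "cvss": 9.3},
    {"host": "prod-db-01", "plugin": "weak-password-policy", "cvss": 7.5},
    {"host": "prod-web-02", "plugin": "info-banner", "cvss": 2.1},
]
for host, issues in summarize_scan(findings):
    print(host, issues)
```

A recurring summary like this supports the "identify and correct" half of the scanning process: findings that reappear month over month indicate vulnerabilities that were identified but never remediated.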
Establishing controls over the modification of information system components and related documentation helps to prevent unauthorized changes and ensure that only authorized systems and related program modifications are implemented. This is accomplished by instituting policies, procedures, and techniques that help make sure all hardware, software, and firmware programs and program modifications are properly authorized, tested, and approved. Contingency planning is another critical component of information protection. If normal operations are interrupted, network managers must be able to detect, mitigate, and recover from service disruptions while preserving access to vital information. A contingency plan is used to detail emergency response, backup operations, and disaster recovery for information systems. To be effective, these plans need to be clearly documented, communicated to potentially affected staff, and updated to reflect current operations. NIST also recommends continuity of operations and disaster recovery plans. If properly implemented, policies and procedures should help reduce the risk that could come from unauthorized access or disruption of services. Technical security standards can provide consistent implementation guidance for each computing environment. Although FHFA made important progress in developing and documenting elements of its information security program, its policies, procedures, plans, and technical standards related to separation of duties, configuration management, and continuity of operations do not reflect the current operating environment. For example: While FHFA had a separation of incompatible duties policy in place from the former FHFB, the agency did not develop and document procedures for enforcing separation of duties. 
Agency officials stated that the agency has initiated a project to develop processes for the 18 security control families identified by NIST and will integrate separation of duties procedures into these processes; the expected completion date is June 2010. The agency did not finalize and approve configuration management policy and procedures. FHFA is using an interim change control and configuration process that was used at FHFB and has developed a draft configuration management procedure; however, it has not been formalized and approved. Agency officials stated that a plan has been developed to train users and implement FHFA configuration management policy and procedures by May 2010. Although FHFA has developed continuity of operations and disaster recovery plans, it has not formalized and approved them. Agency officials stated that a continuity of operations plan has been submitted to the senior agency leadership for review and comment and will be tested in May 2010. Based on the test results, it will be updated and finalized during the fourth quarter of fiscal year 2010. Also, a draft disaster recovery plan was approved in November 2009. The agency expects to test the plan in the summer of 2010. In addition to actions mentioned above, agency officials indicate that FHFA will develop or update policies and procedures to reflect the current environment and to comply with NIST guidance by June 2010. Until the agency effectively develops, documents, and implements these policies, procedures, plans, and technical standards, it has less assurance that its systems and information are protected from unauthorized access or disruption of services. FISMA states that each agency shall develop, document, and implement an agencywide information security program to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. 
The act specifically delineates federal agency responsibilities for (1) information collected or maintained by or on behalf of an agency and (2) information systems used or operated by an agency, by a contractor of an agency, or by another organization on behalf of an agency. Appropriate policies and procedures should be developed to ensure that the activities performed by external third parties are documented, agreed upon, implemented, and monitored for compliance. FHFA did not perform effective oversight of the contractor’s implementation of the security controls and program. Although FHFA developed a financial oversight document for FMS that outlined the assignment of activities between FHFA and the BPD, it did not develop or implement a procedure to monitor access to agency financial information by BPD or Oracle Corporation staff and contractors. As a result, increased risk exists that contractors or other users with privileged access could gain unauthorized access to or improperly use agency financial systems, applications, and information. In addition, FHFA did not have a procedure to assess security reviews and plans of action and milestones that were conducted and documented by BPD or Oracle Corporation staff and contractors. While FHFA officials asserted that the agency randomly investigated some of the security reviews and plans of action and milestones, the agency lacked a documented process for reviewing BPD’s and Oracle Corporation’s compliance with FHFA requirements. As a result, FHFA may not have assurance that the contractors are fully complying with security requirements. FHFA informed us that it has initiated or has actions planned to fully implement effective oversight of contractors’ adherence to its information security program. Specifically, a procedure to monitor security control compliance is under development and FHFA expects it to be finalized in June 2010. 
However, until all key elements of its information security program are fully implemented, FHFA may not have assurance that its controls are appropriately designed and operating effectively. Securing the information systems and information on which FHFA depends to carry out its mission requires that the agency establish, implement, and reinforce policies, procedures, and guidance. The agency has implemented numerous logical and physical access controls to safeguard financial systems and information and has instituted key components of an information security program. However, deficiencies in logical and physical access controls unnecessarily increased risk to FHFA’s systems, and key activities of its information security program were either not fully implemented or were absent. Until the agency strengthens its logical access and physical access controls and fully implements an information security program that includes policies and procedures reflecting the current agency environment, increased risk exists that sensitive information and resources will not be sufficiently protected from inadvertent or deliberate misuse, improper disclosure, or destruction. To help strengthen access controls and other information system controls over key financial systems, information, and networks, we recommend that the Acting Director of the Federal Housing Finance Agency implement the following 16 recommendations for strengthening logical access controls, physical access controls, and the agency’s information security program. 
To improve logical access controls, we recommend that the Acting Director ensures FHFA: (1) maintains network access authorizations for every agency network user; (2) reviews current access to network files and directories containing confidential information and restricts access to personnel with an authorized need to access that information; and (3) continuously monitors use of privileged accounts on systems throughout the network so inadvertent or extended use of privileged access is promptly detected and removed. To strengthen controls over physical access, we recommend that the Acting Director ensures FHFA: (4) secures areas that contain IT equipment and sensitive information; (5) completes sufficient physical security policies to address protection of agency assets, including incident response, access authorizations, and environmental safety controls; (6) performs physical security risk assessments at key facilities; (7) develops, documents, and implements monitoring procedures to ensure that physical access authorizations to secure areas containing sensitive computer resources, including server rooms and sensitive information, are current and controlled; (8) develops, documents, and implements monitoring procedures and installs appropriate equipment to ensure that FHFA can detect and respond to potential physical security incidents; (9) implements and enforces visitor control practices at all facilities; (10) increases employees’ awareness of the need to enforce physical security safeguards; and (11) secures and removes construction materials from telecommunications and electrical closets that support computer operations. 
To improve its information security program, we recommend that the Acting Director ensures FHFA: (12) develops, documents, and implements procedures enforcing separation of incompatible duties among personnel; (13) finalizes, approves, and implements configuration management policy and procedures; (14) approves and tests continuity of operations and disaster recovery plans; (15) develops, documents, and implements procedures to monitor access to agency financial information by BPD and Oracle Corporation staff and contractors; and (16) develops, documents, and implements procedures to assess all security reviews and plans of action and milestones developed by BPD and Oracle Corporation staff and contractors. In providing written comments (reprinted in app. II) on a draft of this report, the Acting Director of the Federal Housing Finance Agency stated that FHFA agreed with our findings and will strengthen controls to reduce risk in the areas where we identified control deficiencies. He also noted that FHFA has already addressed or is in the process of addressing all the recommendations to strengthen controls over key financial systems, information, and networks. Further, the Acting Director stated that the agency was moving forward to strengthen and complete implementation of its information security program. This report contains recommendations to you. As you know, 31 U.S.C. sec. 720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Oversight and Government Reform not later than 60 days from the date of the report and to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. 
Because agency personnel serve as the primary source of information on the status of recommendations, GAO requests that the agency also provide us with a copy of your agency’s statement of action to serve as preliminary information on the status of open recommendations. We are sending copies of this report to the Chairman and Ranking Member of the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Member of the House Committee on Financial Services; the Chairman of the Federal Housing Finance Oversight Board; the Secretary of the Treasury; the Secretary of Housing and Urban Development; the Chairman of the Securities and Exchange Commission; the Director of the Office of Management and Budget; and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report or need assistance in addressing these issues, please contact Gregory C. Wilshusen at (202) 512-6244 or Dr. Nabajyoti Barkakati at (202) 512-4499 or by e-mail at [email protected] or [email protected]. Contacts for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objective of our review was to determine whether controls over key financial systems were effective in ensuring the confidentiality, integrity, and availability of financial information. This review was performed in connection with our audit of the Federal Housing Finance Agency’s (FHFA) financial statements for the purpose of supporting our opinion on internal controls over the preparation of those statements. To determine whether controls over key financial systems were effective, we tested information security controls at FHFA. 
We concentrated our evaluation primarily on threats focused on critical applications and their general support systems that directly or indirectly support the processing of material transactions that are reflected in the agency’s financial statements. Our evaluation was based on our Federal Information System Controls Audit Manual, which contains guidance for reviewing information systems. Using National Institute of Standards and Technology guidance, and FHFA’s policies, procedures, practices, and standards, we evaluated controls by analyzing network and system share authorizations for agency network users; inspecting key devices to determine whether critical patches had been installed or were up-to-date; visiting the agency’s three office buildings in Washington, D.C., on five different dates between July and September 2009 to observe and test physical access controls to determine if computer facilities and resources were being protected from inappropriate access by unauthorized individuals; and examining access responsibilities to determine whether incompatible functions were segregated among different individuals. 
Using the requirements identified by the Federal Information Security Management Act, which established key elements for an effective agencywide information security program, we evaluated FHFA’s implementation of its security program by analyzing agency policies, procedures, practices, and technical standards to determine whether sufficient guidance was provided to personnel responsible for securing information and information systems; analyzing security plans to determine if management, operational, and technical controls were planned or in place and that security plans were updated; analyzing test plans and test results for key agency systems to determine whether management, operational, and technical controls were based on risk and tested at least annually; examining contingency plans for key agency systems to determine whether those plans had been tested or updated; and analyzing FHFA’s risk assessment process and risk assessments for key agency systems to determine whether risks and threats were documented. We also reviewed or analyzed our previous reports and reports from the Department of the Treasury Office of Inspector General; and discussed with key security representatives and management officials whether information security controls were adequately designed, in place, and operating effectively. We performed our work at FHFA facilities in Washington, D.C., and at financial application servicing and commercial hosting facilities in Parkersburg, West Virginia, and Austin, Texas. The work was conducted from February 2009 to April 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. 
In addition to the individuals named above, Charles Vrabel (Assistant Director), Edward Alexander (Assistant Director), Angela Bell, Bradley Becker, Debra Conner, Kirk Daubenspeck, Sharhonda Deloach, Rebecca Eyler, Rosanna Guerrero, Kevin Metcalfe, Eugene Stevens IV, Michael Stevens, and Christopher Warweg made key contributions to this report.
The Federal Housing Finance Agency (FHFA) relies extensively on computerized systems to carry out its mission to provide effective supervision, regulation, and housing mission oversight of the Federal National Mortgage Association (Fannie Mae), the Federal Home Loan Mortgage Corporation (Freddie Mac), and the federal home loan banks. Effective information security controls are essential to ensure that FHFA's financial information is protected from inadvertent or deliberate misuse, disclosure, or destruction. As part of its audit of FHFA's fiscal year 2009 financial statements, GAO assessed the effectiveness of the agency's information security controls to ensure the confidentiality, integrity, and availability of the agency's financial information. To do this, GAO examined FHFA information security policies, procedures, and other documents; tested controls over key financial applications; and interviewed key agency officials. Although FHFA has implemented important information security controls, it has not always implemented appropriate controls to sufficiently protect the confidentiality, integrity, and availability of financial information stored on and transmitted over its key financial systems, databases, and computer networks. The agency's financial system computing environment had deficiencies in several areas and the controls that were in place were not always effectively implemented to prevent, limit, and detect unauthorized access to the agency network and systems. Specifically, FHFA did not always maintain authorization records for network and system access, enforce the most restrictive access needed by users on shared network files and directories, and enforce the most restrictive set of rights needed by users to perform their assigned duties. Further, it did not effectively implement physical protection and environmental safety controls over its facilities and information technology resources. 
GAO identified numerous instances in which FHFA facilities were not adequately secured and was able to obtain unauthorized access from outside agency facilities into the agency's interior space containing sensitive information and information technology equipment. FHFA officials acknowledged these shortcomings and indicated that the agency has taken steps or is planning to take steps to mitigate these deficiencies. A key reason for the control deficiencies in FHFA's financial system computing environment is that the agency has not yet fully implemented its agencywide information security program to ensure that controls are appropriately designed and operating effectively. Although FHFA made important progress in developing and documenting elements of its information security program, written policies, procedures, and technical standards do not reflect the current operating environment. Further, the agency has not yet developed, documented, and implemented sufficient policies and procedures to ensure that the activities performed by external third parties are monitored for compliance with FHFA's policies. Although these deficiencies were not considered significant deficiencies for financial reporting purposes, if left uncorrected they unnecessarily increase the risk that sensitive and financial information is subject to unauthorized disclosure, modification, or destruction.
The purpose of SORNA is to protect the public from sex offenders and offenders against children by providing a comprehensive set of sex offender registration and notification standards. These standards require convicted sex offenders, prior to their release from imprisonment or within 3 days of their sentencing if the sentence does not involve imprisonment, to register and keep the registration current in the jurisdictions in which they live, work, and attend school, and for initial registration purposes only, the jurisdiction in which they were convicted. Registration generally entails the offender appearing in person at a local law enforcement agency and the agency collecting information such as name, address, Social Security number, and physical description of the offender, among other items. The registration agency also is to document, among other items, the text of the provision of law defining the criminal offense for which the offender is registered; the criminal history of the offender, including dates of all arrests and convictions; and any other information the Attorney General requires. In addition, implementing jurisdictions are to maintain a jurisdiction-wide sex offender registry and adopt registration requirements that are at least as strict as those SORNA established. The length of time that convicted sex offenders must continue to update their registration is life, 25 years, or 15 years, depending on the seriousness of the crimes for which they were convicted and with possible reductions for maintaining a clean record. The frequency with which sex offenders must update or verify their information—either quarterly, semiannually, or annually—also depends on the seriousness of the crime. Once sex offenders register or update their registration in their jurisdictions, under the act, implementing jurisdictions are to provide the new information to FBI’s National Sex Offender Registry (NSOR). 
NSOR is a national database within the FBI’s National Crime Information Center (NCIC) that federal, state, local, territorial, and tribal law enforcement officials can access to obtain information on registered sex offenders throughout the United States. Jurisdictions’ receipt of certain federal grant funds is conditioned upon whether they have “substantially implemented” SORNA, and, as we have previously reported, jurisdictions are in various stages of implementing the act. Pursuant to the Attorney General’s authority to interpret and implement SORNA, the SMART Office developed SORNA guidelines specifically related to registered sex offenders traveling internationally. For example, under DOJ’s National Guidelines, each jurisdiction in which a sex offender is registered as a resident is instructed to require the sex offender to inform the jurisdiction if the sex offender intends to commence residence, employment, or school attendance outside of the United States. The jurisdiction needs to then (1) notify all other jurisdictions in which the offender is required to register through immediate electronic forwarding of the sex offender’s registration information, and (2) notify the U.S. Marshals—the primary federal agency responsible for investigating sex offender registration violations under SORNA—and update the sex offender’s registration information in the national databases pursuant to the procedures under SORNA § 121(b)(1). Also, under DOJ’s Supplemental Guidelines, jurisdictions are directed to have sex offenders report international travel 21 days in advance of such travel and submit information concerning such travel to the appropriate federal agencies and databases. Furthermore, per the SMART Office’s SORNA Implementation Document, in order to provide the most helpful information to U.S. 
Marshals and other law enforcement agencies, DOJ’s guidelines require jurisdictions to collect passport information in addition to other travel information, such as itinerary details, purpose of travel, criminal records, and contact information within the destination country, regarding a registered sex offender’s intended international travel. Currently, according to officials from the SMART Office, DOJ will not reduce grant funds for jurisdictions that have not yet implemented the supplemental guidelines on registered sex offenders traveling internationally, because DOJ is allowing jurisdictions additional time to implement the supplemental guidelines as part of its assessment of whether jurisdictions have “substantially implemented” SORNA. Under SORNA, the responsibility for establishing a system for informing jurisdictions about persons entering the United States who are required to register is divided among three federal departments—DHS, DOJ, and State—with DOJ being the lead agency. Additionally, in 2008, the SMART Office created the IWG, which consists of multiple agencies within DOJ, DHS, and State, to discuss issues related to identifying registered sex offenders traveling internationally. In addition to its role under SORNA, ICE’s Homeland Security Investigations (HSI) division, consistent with its objective to target transnational sexual exploitation of children, developed the Angel Watch program. The purpose of this program is to provide advance notice to foreign officials when a registered sex offender who committed a crime against a child is traveling from the United States to a foreign country. Table 1 describes the functions of the federal agencies that play a role in identifying registered sex offenders traveling internationally. Three federal agencies—U.S. Marshals, USNCB, and ICE—use information from state, local, territorial, and tribal jurisdictions, as well as passenger data from CBP, to determine whether registered sex offenders are traveling outside of the United States. 
Similarly, five federal agencies—USNCB, ICE, U.S. Marshals, Consular Affairs, and CBP—may be notified of registered sex offenders traveling to the United States through several means, including tips from foreign officials or when CBP queries the registered sex offender’s biographic information at a port of entry and finds that the offender has a criminal history. However, none of these sources provides complete or comprehensive information on registered sex offenders leaving or returning to the United States. For example, CBP’s passenger data are based on information from private and commercial air travel, commercial vessels, and voluntary reporting from rail and commercial bus lines; because CBP does not routinely query individuals who leave the United States by commercial bus, private vessel, private vehicle, or by foot, it is unable to provide information on all individuals leaving the country. In addition, foreign officials do not always monitor when a registered sex offender is returning to the United States. The FBI is establishing an automated notification process that is expected to address some of these limitations. However, because ICE has not requested to receive the automated notifications, ICE will not be notified of registered sex offenders who leave the country via a land port of entry. Officials from the U.S. Marshals and USNCB said that they use information from state, local, territorial, and tribal jurisdictions, and officials from the U.S. Marshals and ICE said that they use air and sea passenger data from CBP, to determine whether registered sex offenders are traveling internationally, but both mechanisms have limitations. The information that the U.S. 
Marshals and USNCB receive from jurisdictions about registered sex offenders traveling internationally is limited, in part because (1) some jurisdictions do not require sex offenders to inform them of international travel and (2) those jurisdictions that do require notice must rely on sex offenders to self-report this information. Consistent with the Attorney General’s authority under SORNA to require sex offenders to provide additional information, beyond what the act requires, for inclusion in the jurisdiction’s registry, DOJ’s Supplemental Guidelines added that registered sex offenders must provide jurisdictions 21 days’ advance notice of any international travel, and that jurisdictions are to notify the U.S. Marshals of any registered sex offenders traveling internationally. To support jurisdictions’ efforts to provide more complete and consistent information, according to the U.S. Marshals, in February and March 2012 the SMART Office asked jurisdiction registry officials, and the U.S. Marshals and USNCB asked relevant jurisdictional law enforcement agencies, to submit a Notification of International Travel form to the U.S. Marshals. This form includes the traveler’s name, passport number, travel information, criminal record, and contact information in the destination country. However, not all jurisdictions have elected to implement the DOJ guideline requiring registered sex offenders to provide advance notice of international travel. Specifically, of the 50 jurisdictions that responded to our survey question about advance notice of international travel requirements, 28 reported that they require sex offenders to provide such advance notice, whereas the other 22 do not, primarily because their jurisdiction’s laws do not permit them to do so. 
For example, 1 jurisdiction said that because its statute requires registered sex offenders to notify the registry within 72 hours after international travel, officials are not authorized to collect this information in advance. Moreover, some jurisdictions have difficulty obtaining information on traveling registered sex offenders on a consistent basis because jurisdictions must rely on sex offenders to self-report, and jurisdictions have limited mechanisms in place to enforce the self-reporting requirement. For example, sex offender registry officials in 1 jurisdiction we visited said that they would not know that a registered sex offender failed to self-report international travel unless they conducted an address verification operation, which would enable them to determine that the sex offender is traveling. Senior officials from the SMART Office stated that they are pleased that 28 jurisdictions have already implemented the advance notice provision, considering that the guidance for the provision was not released until January 2011. These officials also stated that they continue to provide technical assistance to jurisdictions seeking to implement this provision. Information on registered sex offenders traveling internationally that the U.S. Marshals and ICE obtain from CBP’s review of passenger data also has limitations. CBP, as part of DHS, has the mission to secure the United States’ borders while facilitating legitimate trade and travel. To help fulfill that mission, CBP established NTC, which, among other things, receives and reviews air and sea passenger data to determine whether persons entering or leaving the country via a commercial airline or cruise line are on the terrorist watchlist, are wanted, or have a warrant out for their arrest. NTC officials stated that in 2009, they met with the U.S. Marshals to determine how they could support efforts under way at the newly formed NSOTC. 
NTC agreed to review passenger data to determine whether any persons leaving the country are registered sex offenders. Since then, according to NTC officials, they have provided the U.S. Marshals information, such as name, date of birth, destination, and offense, on all registered sex offenders NTC identifies from passenger data so that the U.S. Marshals can verify that the sex offender did not violate any registration requirements. NTC officials stated that they also use this information to identify registered sex offenders who remain in a foreign country for an extended period of time and return to the United States for short periods of time, because this may be an indication that the individual is circumventing SORNA requirements by falsely reporting their place of residency. NTC provides this information to ICE and U.S. Marshals for possible investigation or other law enforcement action. Figure 1 shows the primary methods by which the U.S. Marshals, ICE, and USNCB receive information on registered sex offenders traveling internationally. While the information NTC provides may be helpful, it has limitations. First, CBP collects and analyzes information on individuals leaving the United States via private or commercial airline, commercial vessel, and voluntary reporting from rail, but does not routinely query individuals who leave the United States by commercial bus line, private vessel, private vehicle, or by foot. Since travelers departing by commercial rail, commercial bus line, private vessel, private vehicle, or by foot are not required to report travel information in advance of their travel, CBP may be unable to provide advance targeting and analysis of these individuals. However, a CBP officer may access information on these individuals by querying their biographical information during special outbound operations at ports of entry. 
It is CBP’s policy that CBP officers query individuals leaving the country only if there is a special operation underway, such as an operation to verify the amount of currency taken out of the United States. According to NTC officials, CBP officers at the land port of entry are not required to provide NTC with the results of their queries because they are only required to pass information related to individuals on the terrorist watchlist. Therefore, NTC is generally not able to inform the U.S. Marshals and ICE about registered sex offenders leaving the country through means such as land ports or on a privately chartered boat. Second, to determine whether a registered sex offender is on a particular flight, NTC determines whether any of the passenger data, such as name and date of birth, match any of the data in the FBI’s NCIC. However, NCIC may not always have complete information to enable NTC to determine if there is a match, in part because jurisdictions may enter information incorrectly or not at all because certain fields are not mandatory. In this case, NTC checks electronic public sex offender registries—which are not always up to date—to collect missing information, or calls relevant registry officials—which could take additional time. Five federal agencies—USNCB, ICE, the U.S. Marshals, Consular Affairs, and CBP— have several mechanisms in place to identify registered sex offenders returning to the United States. For example, USNCB officials stated that their foreign counterparts, to the extent that they are aware, may notify U.S. officials of registered sex offenders returning to the United States. In addition, U.S. Marshals officials stated that they sometimes receive information from NTC on registered sex offenders returning to the United States. According to NTC officials, they are able to provide this information to the U.S. Marshals analyst stationed at NTC to the extent that the sex offender’s entire itinerary and flight information are available. 
However, these mechanisms do not identify all of the registered sex offenders returning to the United States all of the time. For example, even though USNCB may receive information on some returning registered sex offenders through its foreign counterparts, the information these officials provide is based on anonymous tips or offenders’ self-reported information. According to USNCB officials, even though hundreds of registered sex offenders traveled outside of the United States from August through September 2012, as we discuss later in this report, USNCB rarely received notifications of these registered sex offenders returning to the United States. Table 2 describes the mechanisms by which federal agencies become aware of registered sex offenders traveling back to the United States and the limitations of those mechanisms. To help ensure that relevant federal agencies are more consistently notified of registered sex offenders leaving or returning to the United States, in 2008, the SMART Office established the IWG. The IWG is charged with developing an international tracking system to identify registered sex offenders leaving and returning to the country and immediately relay this information to appropriate domestic law enforcement agencies for any additional action as needed, such as to initiate an investigation. Specifically, FBI officials stated that, in collaboration with other IWG member agencies, they are developing a process that will send an automated notification to the U.S. Marshals’ NSOTC and registry and law enforcement officials in the jurisdictions where the sex offender is registered: (1) when a registered sex offender has purchased an airline or cruise ticket for international travel, (2) 1 week before the registered sex offender is scheduled to travel by commercial air or sea transport, and (3) when a CBP officer queries that person’s biographic information at a U.S. port of entry, such as any U.S. airport. 
The automated notification, if implemented as intended, will provide the U.S. Marshals and relevant jurisdictions with information on registered sex offenders returning to the United States whose biographic information is queried by CBP officers at air, sea, and land ports of entry, assuming these offenders enter the country legally and their identifying information in NCIC, such as date of birth, is accurate and complete. In addition, FBI officials stated that the automated notification is expected to provide relevant jurisdictions with information on sex offenders registered in their jurisdiction who did not self-report international travel. This will help law enforcement officers to avoid using resources to search for sex offenders who they thought had absconded, when the offender had actually left the country on personal travel. According to FBI officials, the FBI vetted the automated notification proposal through its Advisory Policy Board; the FBI Director approved the proposal in June 2012; and FBI officials estimate that they will be able to implement the automated notification as early as March 2013. FBI officials responsible for implementing the automated notification said that they are currently working with CBP to include additional information from CBP’s systems in the automated notifications, such as the specific ports of entry and the mode of transportation offenders are using. The FBI will not delay implementation of the automated notification to incorporate the additional information from CBP; instead, the FBI will incorporate this information into the automated notifications at a later date, if necessary. While the automated notification will address some of the limitations discussed previously, it will not address all of them. For example, according to FBI officials, the automated notification will provide notice to the U.S. 
Marshals and jurisdictions of all registered sex offenders leaving or returning to the United States for whom CBP officers query their biographic information at a port of entry. Consequently, the automated notification will not provide notice of a registered sex offender who plans to leave the country via a land port of entry because CBP generally does not query information for these travelers. CBP officials explained that CBP officers may query biographic information for individuals leaving the United States through a land port of entry—such as in the case of a special operation to verify the amount of currency taken out of the country—but generally do not do so because of regulatory, policy, and infrastructure limitations in monitoring individuals leaving the United States. The automated notification is intended to address the federal government’s current limitations in identifying registered sex offenders traveling internationally. For example, according to FBI officials responsible for implementing the automated notification, they have had preliminary discussions with Canadian Police Information Center officials as to whether every person who enters Canada through the U.S.-Canada land border will be queried in NCIC. Of the 17 jurisdictions that reported receiving information on registered sex offenders entering the United States from a federal agency, 10 each reported receiving information from the U.S. Marshals and from USNCB; 8 reported receiving it from ICE; and 2 each reported receiving it from CBP and from State. Some of the responses reflect jurisdictions receiving information from more than one federal agency. Jurisdictions could possibly use information on registered sex offenders traveling to their jurisdictions from abroad to help them identify the current location of these offenders. 
For example, officials from one local law enforcement agency we visited stated that receiving such notifications would help officers verify whether the offenders have returned from foreign travel when officers conduct address verifications. In addition, this information would help jurisdictions fulfill their requirements under SORNA to protect the public from sex offenders. Once the automated notification system is operational, jurisdictions that have registered the sex offender and entered a record into NCIC will be notified that an offender has returned to the United States. Having this information will allow these jurisdictions to implement public safety measures more consistently. To help ensure that they obtain information that is as complete as possible regarding registered sex offenders traveling internationally, the U.S. Marshals and ICE will continue to request information from jurisdictions or NTC even after the automated notification is operational. Currently, the U.S. Marshals and ICE do not consistently receive information on registered sex offenders entering or leaving the country via a land port of entry because NTC does not have this information and jurisdictions receive this information only to the extent that sex offenders self-report it. The automated notification will fill this information gap, in part, by sending notices about registered sex offenders entering and leaving the country via a land port of entry, to the extent that CBP queries the biographical information of the offender, in addition to providing notices about registered sex offenders traveling internationally via commercial air and sea transport. Although the automated notification will provide information on a greater number of traveling registered sex offenders than the number that jurisdictions and NTC provide, as shown in table 4, NTC provides more details on a specific traveler than the automated notification. 
Further, jurisdictions that collect offenders’ self-reported data may also be able to provide more details. Therefore, according to U.S. Marshals officials, they find it beneficial to continue to receive information from each of these two sources. According to an ICE section chief responsible for the Angel Watch program, ICE has not requested to receive the automated notifications because it prefers to rely on information NTC provides, which meets ICE’s specific needs. In particular, an NTC analyst, after identifying a registered sex offender with plans to travel internationally via commercial air or sea transport, conducts further analysis to determine whether the offender committed a crime against a child. This ICE chief stated that ICE does not want information on all types of registered sex offenders, which is what the automated notification would provide, but only on those who have committed crimes against children, in accordance with ICE’s mission to investigate the sexual exploitation of children. However, by not requesting to receive the automated notification, ICE will not have information on registered sex offenders who committed offenses against children, left the country via a land port of entry, and had their biographical information queried at the port. According to the FBI, in order to receive the automated notification, ICE would have to submit a request to FBI’s Advisory Policy Board; and given that the board meets twice a year, it could take approximately 1 year or more for the board to approve an agency’s request to receive alerts from the system. The FBI also explained that the automated notification will not be able to distinguish between traveling registered sex offenders who committed offenses against children and those who committed offenses against adults because the notifications are derived from NCIC data, and the age of the victim is not a required field in this system. 
Therefore, if ICE were to receive the automated notification, ICE would have to determine on its own whether the offenders leaving the country through a land port of entry committed an offense against a child. However, according to NTC officials, about 90 percent of the registered sex offenders they identified in fiscal year 2012 who planned to travel internationally via commercial air or sea transport had committed offenses against children. We have previously reported that collaborating agencies can look for opportunities to address resource needs by leveraging each other’s resources, which could include receiving the automated notification, and obtaining additional benefits that would not be available if they were working separately. By electing not to receive the automated notifications, ICE will not receive information on registered sex offenders traveling to Canada or Mexico via a land port of entry whose biographical information is queried. This is of particular concern considering that, according to ICE, Mexico is one of the countries to which registered sex offenders travel most frequently. If ICE were to receive alerts from the automated notification, we recognize that some effort would be required to determine whether sex offenders leaving the country through a land port of entry committed an offense against a child. However, the level of effort required, and whether or not the benefits of the effort would outweigh the cost, cannot be determined at this time. USNCB and ICE inform foreign officials when registered sex offenders are traveling to their countries to enable these officials to take actions that they deem appropriate to ensure public safety. USNCB and ICE notify their own unique counterparts in foreign countries about traveling sex offenders for similar purposes, such as enabling them to make decisions as to whether they will admit sex offenders into their country. In addition, USNCB and ICE notify these counterparts for different purposes. 
For example, ICE counterparts may monitor the whereabouts of sex offenders while they are in the foreign country. USNCB and ICE base such notifications on different information sources; USNCB uses information it receives from the U.S. Marshals and jurisdictions, and ICE uses information it receives from NTC’s passenger data reviews as part of ICE’s Angel Watch program. However, the U.S. Marshals do not consistently share information with USNCB on traveling sex offenders, and USNCB and ICE do not share the information they receive on traveling sex offenders with each other. As a result, USNCB and ICE were not able to notify their foreign counterparts about a large number of registered sex offenders traveling internationally from August to September 2012, and some of the notifications were not as comprehensive as possible. USNCB notifies its INTERPOL counterparts in other countries about registered sex offenders traveling internationally. Similarly, ICE, through its Angel Watch program, notifies its foreign law enforcement counterparts about sex offenders traveling internationally who had committed an offense against a child. According to USNCB and ICE officials, USNCB and ICE send these notices to different agencies within the foreign countries, but for similar purposes—to enable foreign officials to decide whether they want to admit the registered sex offender into their country or take other public safety measures they deem appropriate. For example, with regard to the United Kingdom, USNCB notifies its INTERPOL counterpart—the United Kingdom National Central Bureau— which is hosted by the Serious Organised Crime Agency (SOCA), a law enforcement body that fights organized crime. SOCA officials then make decisions about how to use this information. They could share it with agencies such as the United Kingdom (U.K.) Border Agency, which is responsible for refusing entry to persons who do not qualify, or the U.K. 
Metropolitan Police Service (MPS), which interviews registered sex offenders to establish exactly what their plans are while in the United Kingdom and where they will be staying upon entry or if admitted. On the other hand, according to ICE officials, ICE notifies the sex offender unit within the U.K. Metropolitan Police Service as well as the U.K. Border Agency directly through its attachés posted abroad about registered sex offenders traveling to the United Kingdom who committed an offense against a child. Of the six countries included in our review, three generally do not admit registered sex offenders, and in one country, even though it generally admits registered sex offenders, foreign law enforcement officials monitor the activity of the sex offender while in country. For example, ICE Angel Watch program officials reported that in 2012, an ICE attaché notified foreign officials in advance that a registered sex offender was traveling from the United States to their country; and as a result, the foreign officials denied entry to the registered sex offender. Appendix II provides information on registered sex offenders traveling internationally who were refused entry by foreign countries. USNCB and ICE identified reasons why it is advantageous that they both notify foreign officials of sex offenders traveling internationally. USNCB officials explained that they have been trying to encourage their INTERPOL counterparts to inform them about individuals convicted of sex offenses in their countries who are traveling to the United States. Therefore, it is important for USNCB to provide such notifications if it expects its counterparts to reciprocate. 
ICE officials explained that it is important for their ICE attachés to inform their foreign law enforcement counterparts about traveling registered sex offenders to assist the counterparts with tracking offenders visiting that country, such as by developing a shared spreadsheet designed to help the country establish its own sex offender registry, and to monitor sex offenders’ activities while in that country. USNCB provides more comprehensive information on sex offenders’ travel plans to its INTERPOL counterparts than ICE provides to its foreign law enforcement counterparts, and the additional information that USNCB has could help support ICE’s mission. USNCB bases its notifications on information that it receives from jurisdictions that require registered sex offenders to provide advance notice of international travel, whereas ICE bases its notifications on information it receives from NTC’s analysis of commercial air and sea passenger data. As previously discussed, jurisdictions that require advance notice may collect more information on each sex offender’s travel plans—such as hotel information—than NTC does. In addition, neither USNCB nor ICE has provided its foreign counterpart with as many notices of traveling registered sex offenders as it potentially could. Specifically, as shown in figure 2, from August 1 through September 30, 2012, USNCB notified its counterparts of 105 offenders that ICE did not provide to its counterparts. Further, 82 of these 105 notifications (78 percent) were for registered sex offenders who had committed offenses against children. Similarly, ICE notified its counterparts of 100 offenders that USNCB did not provide to its counterparts. There are several reasons why USNCB and ICE generally do not have information to share on the same sex offenders traveling internationally. First, USNCB generally does not receive information on traveling sex offenders from NTC, whereas ICE does. This is in part because the U.S. 
Marshals has not passed on all of the information it has obtained from NTC on registered sex offenders to USNCB. We have previously reported that collaborating agencies should consider whether participants have full knowledge of the relevant resources in their agency. Consistent with this guidance, in March 2012, the U.S. Marshals assigned one of its investigators to be co-located with USNCB officials in order to provide USNCB with information on sex offenders for whom USNCB would send green notices to its foreign INTERPOL counterparts. U.S. Marshals officials then realized that they had additional information on traveling registered sex offenders that could be of interest to USNCB, and starting in August 2012, the U.S. Marshals investigator was to begin providing USNCB information on traveling registered sex offenders that the U.S. Marshals receives from NTC. However, we found that from August through September 2012, the U.S. Marshals only provided USNCB with information on 39 of the 169 traveling sex offenders of whom the U.S. Marshals was aware based on information from NTC. According to U.S. Marshals officials, the U.S. Marshals analyst posted at NTC may not be informing USNCB about all registered sex offenders traveling internationally that NTC has identified because the analyst’s primary purpose is to identify and pursue potential SORNA violations—instances in which a registered sex offender is in violation of registration requirements by traveling internationally without providing advance notice. As a result, by the time the analyst finishes looking into potential SORNA violations, some of the registered sex offenders that NTC identified may have already completed their international travel; the U.S. Marshals investigator posted at USNCB would not notify USNCB about these offenders because the opportunity would have passed for USNCB to provide advance notice to its foreign counterparts about these offenders. 
Officials further explained that it takes time to complete the Notification of International Travel form for each traveling sex offender that NTC identifies, which may also prevent the investigator from notifying USNCB prior to the sex offender initiating travel. U.S. Marshals officials also stated that they would generally not provide USNCB with information on registered sex offenders whose international travel is less than 3 days. However, USNCB officials stated that they send notifications to their counterparts on all traveling registered sex offenders, regardless of travel duration or ability to provide advance notice. U.S. Marshals officials explained that they did not receive any additional resources to help bridge the gap between the information that NTC and USNCB obtain on registered sex offenders traveling internationally, but volunteered to help remedy this issue with limited existing resources. While the U.S. Marshals’ intentions are commendable, USNCB still does not have access to information on most of the registered sex offenders traveling internationally that NTC identifies, thus precluding USNCB from notifying its foreign counterparts about these individuals and enabling them to make informed public safety decisions. A second reason why USNCB and ICE do not have information on the same traveling sex offenders could be that USNCB receives information on registered sex offenders traveling internationally from jurisdictions, whereas ICE does not. Third, according to a senior ICE official, ICE may have received information on additional traveling sex offenders, but did not send notifications via Angel Watch because of constrained manpower or insufficient information on the child exploitation conviction, among other things. 
According to USNCB officials, they copy several other federal agencies on their notifications to foreign officials, including FBI’s Innocent Images National Initiative and the State Department’s Bureau of Diplomatic Security (DS), which may choose to take further action. For example, DS officials stated that they share information on registered sex offenders traveling internationally with their regional security officers, who may inform other U.S. government and foreign law enforcement officials in-country, as they deem appropriate. However, USNCB officials reported that they do not coordinate their notifications with ICE, in part because their understanding was that ICE was interested in registered sex offenders traveling internationally only if the offender was the subject of an ICE investigation; USNCB officials stated that they were not aware that ICE’s primary interest in obtaining information on these offenders was to notify their foreign law enforcement counterparts. We have previously reported that collaborating agencies can look for opportunities to address resource needs by leveraging each other’s resources and obtaining additional benefits that would not be available if they were working separately. According to senior ICE officials responsible for the Angel Watch program, the additional information USNCB collects and provides to its counterparts could also help support ICE’s efforts. In particular, these officials stated that the relevant ICE attaché could share the additional information with that person’s foreign counterpart to support efforts to deny entry or monitor activity of registered sex offenders. USNCB officials stated that it would be feasible to include Angel Watch program officials on the notifications USNCB sends to foreign counterparts. 
Taking steps to ensure that USNCB and ICE have information on the same registered sex offenders traveling internationally—which could entail, for example, the two agencies copying one another on notifications to their foreign counterparts, or USNCB receiving information directly from NTC—could help ensure that USNCB and ICE are providing more comprehensive information on traveling registered sex offenders to their foreign counterparts to help inform public safety decisions. Cases in which individuals who had previously been convicted of a sex offense in the United States subsequently traveled overseas to commit an offense against a child underscore the importance of sex offender registration and notification standards to help ensure public safety in the United States and abroad. Some of the limitations federal agencies have faced with regard to identifying registered sex offenders leaving and returning to the United States are expected to be addressed by the automated notification the FBI is currently developing. However, ICE has not requested to receive the automated notification, which may preclude it from identifying entire categories of sex offenders, such as those entering and returning to the United States via a land port of entry whose biographical information is queried. USNCB, U.S. Marshals, and ICE have taken steps to coordinate their efforts to identify registered sex offenders traveling internationally, such as participating in the IWG and collocating staff. However, despite these efforts, these agencies still do not have access to all of the information on traveling registered sex offenders that they could potentially receive. Sharing additional information could help ensure that these agencies are providing more comprehensive information on traveling registered sex offenders to their foreign counterparts to help inform public safety decisions. 
Given ICE’s objective to target the transnational sexual exploitation of children, after the automated notification becomes operational, the Director of ICE should direct ICE Homeland Security Investigations officials to coordinate with FBI Criminal Justice Information Services officials to collect and analyze information that will enable ICE to determine if the benefits of receiving the automated notifications outweigh the costs. The type of information ICE may consider collecting as part of this analysis could include the number of notifications generated for sex offenders leaving the country via a land port of entry. To ensure that USNCB and ICE are providing more comprehensive information to their respective foreign counterparts regarding registered sex offenders traveling internationally, we recommend that the Attorney General and the Secretary of Homeland Security take steps to help ensure that USNCB and ICE have information on the same number of registered sex offenders as well as the same level of detail on registered sex offenders traveling internationally. Such steps could include USNCB and ICE copying each other on their notifications to their foreign counterparts or USNCB receiving information directly from NTC. We provided a draft of this report for review and comment to DHS, DOJ, and State. We received written comments from DHS and USNCB, within DOJ, which are reproduced in full in appendices III and IV, respectively. DHS generally agreed with our recommendations in its comments, and USNCB agreed with our recommendations with additional observations. State did not provide written comments on the draft report. We also received technical comments from DHS and DOJ, which were incorporated throughout our report as appropriate. 
In its written comments, USNCB agreed with our recommendation that the Attorney General and the Secretary of Homeland Security take steps to help ensure that USNCB and ICE have the same information on registered sex offenders traveling internationally. USNCB noted that it has already begun the process of establishing points of contact with the appropriate ICE personnel so that USNCB can include ICE in its dissemination of sex offender notifications. USNCB also identified additional actions which were beyond the scope of our review, such as the need for technical improvements to streamline data sharing and foreign notification processes. In addition, USNCB stated that there needs to be an impetus for all states to substantially implement the guidelines set forth by the SMART Office on traveling registered sex offenders. During the course of our review, officials from the SMART Office stated that they have taken some actions, such as conducting workshops and providing technical assistance, to encourage jurisdictions to implement the requirement for registered sex offenders to report international travel 21 days in advance of such travel. DHS agreed with our recommendations that ICE should assess whether receiving the automated notifications would benefit their mission to target transnational sexual exploitation and that DOJ and DHS should take steps to ensure that USNCB and ICE have the same information on traveling registered sex offenders. However, in its letter, DHS questioned whether the automated notifications would be of use to the Angel Watch program because the timing of some of the notifications would not enable ICE to notify foreign officials in advance that a sex offender is traveling to their country, in which case the foreign officials could choose not to admit the offender. 
Nevertheless, in addition to admissibility decisions, foreign law enforcement officials with whom we spoke stated that they use the information they receive from ICE for multiple purposes, including determining how frequently the sex offender travels to that country, where the offender stays while in country, and where to direct their resources to monitor sex offenders. DHS also raised concerns that given the hundreds of thousands of individuals leaving the United States via the southwest border on a daily basis, handling notifications on sex offenders leaving the country through this border may be untenable. However, it is uncertain how many of these individuals are sex offenders and how many of them will be queried by CBP when exiting the country. Therefore, it will be important for ICE to implement our recommendation so that once the automated notification process is underway, ICE can obtain the necessary information to determine if the number of notifications of sex offenders exiting the country through a land port of entry is manageable. We are sending copies of this report to the appropriate congressional committees, the Attorney General, the Secretary of Homeland Security, the Secretary of State, and other interested parties. This report is also available at no charge on GAO’s web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-6510 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix V. Since 2006, Congress has passed legislation and the Department of Justice (DOJ) has promulgated regulations to help ensure that federal, state, local, territorial, and tribal officials are aware of when registered sex offenders travel internationally. 
To determine the extent to which these officials have procedures in place to implement these requirements, we addressed the following questions: (1) How and to what extent does the federal government determine whether registered sex offenders are leaving or returning to the United States? (2) How and to what extent have federal agencies notified foreign officials about registered sex offenders traveling internationally? To address both objectives, we identified legislation, regulations, and other guidance that directs federal agencies’ efforts to identify registered sex offenders leaving or returning to the United States. Section 128 of the Sex Offender Registration and Notification Act of 2006 (SORNA) directs the Attorney General, in consultation with the Secretary of State and the Secretary of Homeland Security, to establish a system for informing domestic jurisdictions about persons entering the United States who are required to register under SORNA (referred to as registered sex offenders). Further, SORNA makes it a federal crime for a sex offender required to register under SORNA to travel to foreign countries and knowingly fail to register or update a registration in the United States. Additionally, under DOJ guidance, jurisdictions are required to have registered sex offenders report international travel 21 days in advance and to submit information concerning such travel—such as expected itinerary, departure and return dates, and means and purpose of travel—to the appropriate federal agencies. In order to assess how federal agencies obtain information on registered sex offenders leaving and returning to the United States, we obtained documentation from and interviewed members of the International Tracking of Sex Offenders Working Group (IWG), which is composed of representatives from various components within DOJ, the Department of Homeland Security (DHS), the Department of State (State), and the Department of Defense (DOD). 
The IWG was tasked with developing mechanisms to comply with statutory and regulatory requirements for identifying registered sex offenders leaving and returning to the United States. We reviewed the IWG’s proposals for such mechanisms, which were documented in a white paper prepared by the IWG in December 2010. We then interviewed officials from three of the federal departments represented on the IWG to obtain information on the mechanisms by which they identify registered sex offenders leaving and returning to the country, any limitations of these mechanisms, and what steps could be taken to address these limitations. Those agencies are the following:

Department of Justice
Office of Sex Offender Sentencing, Monitoring, Apprehending, Registering, and Tracking (SMART Office)
Federal Bureau of Investigation (FBI)
United States Marshals Service (U.S. Marshals)
International Criminal Police Organization (INTERPOL) Washington – U.S. National Central Bureau (USNCB)

Department of Homeland Security
U.S. Customs and Border Protection (CBP)
U.S. Immigration and Customs Enforcement (ICE)

Department of State
Bureau of Consular Affairs (CA)
Bureau of Diplomatic Security (DS)

We excluded DOD from our review because under SORNA, the departments responsible for dealing with registered sex offenders traveling abroad were identified as DOJ, DHS, and State. We also interviewed and surveyed relevant state, local, and territorial officials to determine what role, if any, they play in informing the federal government of registered sex offenders leaving the country; how, if at all, they become aware of registered sex offenders returning to the country; and how they use that information to help ensure public safety. We first conducted a screening survey of officials from all 56 jurisdictions—the 50 states, the District of Columbia, and the 5 territories, excluding tribal territories, that are eligible to implement SORNA. 
We contacted jurisdiction officials identified by the SMART Office as being responsible for implementing SORNA in the jurisdictions to determine whether they require registered sex offenders to provide advance notice of international travel and whether they share information with relevant federal agencies on registered sex offenders leaving or returning to the country. These officials included representatives of state police departments or attorney general offices. We pretested the survey with 2 jurisdictions, distributed the survey by e-mail, and received responses from all 56 jurisdictions. Subsequently, of those jurisdictions that responded that they require sex offenders to provide advance notice of international travel, we selected 4 jurisdictions—Maryland, Florida, Michigan, and Arizona—to conduct site visits and 1 jurisdiction (New Mexico) to conduct interviews. During the site visits we obtained additional information on how they implemented and enforced the requirement and shared information with relevant federal agencies. We chose these jurisdictions based on (1) variation in the extent of international travel from the jurisdiction; (2) percentage of the population that is composed of sex offenders; and (3) whether the state has land and sea ports of entry, in addition to airports, to cover the various modes by which sex offenders could enter and leave the country. During the site visits, we met with officials from the following federal, state, and local law enforcement agencies: U.S. Marshals, ICE, and CBP (at air, land, and sea ports of entry), state agencies responsible for maintaining the state sex offender registry, and local law enforcement agencies responsible for registering and monitoring sex offenders. 
During the site visits, we determined what actions were taken by state jurisdictions after the federal government informed them of sex offenders returning to their jurisdiction, particularly if the jurisdiction was not aware that the individual had left the country. Furthermore, we gathered information from jurisdictions on any actions that can be taken to improve their efforts to identify registered sex offenders leaving or returning to the United States. While the perspectives from the officials we interviewed during site visits cannot be generalized to all jurisdictions, they provided valuable insights about registered sex offenders traveling internationally. We also developed and administered a second survey of the same officials from the 56 jurisdictions to obtain more detailed information on the extent to which jurisdictions implement the 21-day advance notice requirement and inform federal agencies of registered sex offenders leaving the country. The survey also included questions related to jurisdictions’ perspectives on any challenges or improvements needed regarding receiving or providing information about sex offenders leaving or returning to the United States, in addition to other issues related to the implementation of SORNA. To develop this survey, we designed draft questionnaires in close collaboration with a GAO social science survey specialist and conducted pretests with 4 jurisdictions to help further refine our questions, develop new questions, clarify any ambiguous portions of the survey, and identify any potentially biased questions. Log-in information for the web-based survey was e-mailed to all participants, and we sent two follow-up e-mail messages to all nonrespondents and contacted the remaining nonrespondents by telephone. We received responses from 52 out of 56 jurisdictions. 
Additionally, during our interviews with the IWG agencies, we asked whether any of these agencies use the information they obtain on registered sex offenders leaving and returning to the country to help ensure public safety. For the three agencies identified as having responsibility for taking action based on this information—U.S. Marshals, ICE, and USNCB—we obtained and analyzed data they received from August 1 through September 30, 2012, on registered sex offenders traveling internationally. We chose this time period because we wanted to assess the effectiveness of a process the U.S. Marshals instituted in August 2012 for sharing information with USNCB on registered sex offenders traveling outside of the United States. We then asked USNCB and ICE to provide us with the notifications they sent to foreign officials about the registered sex offenders who traveled outside of the United States for the same time period. We also analyzed the data to determine the extent to which there was any fragmentation (i.e., circumstances in which more than one federal agency is involved in the same broad area of national interest) or duplication (i.e., two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries) with regard to the notices. Specifically, we analyzed and compared the data provided by U.S. Marshals, ICE, and USNCB to determine the extent to which the information these agencies had on sex offenders who planned to travel outside of the country was similar or different. We also assessed the similarities and differences in the notifications sent by USNCB and ICE to their foreign counterparts. We assessed the reliability of the data the agencies provided by questioning knowledgeable agency officials and reviewing the data for obvious errors and anomalies. We determined that the data were sufficiently reliable for our purposes. 
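The overlap comparison described above can be sketched as a simple set analysis: subjects appearing in both agencies' notifications indicate duplication, while subjects appearing in only one indicate fragmentation of coverage. The function name and the anonymized identifiers below are illustrative assumptions, not the agencies' actual records or analytic tools.

```python
# Hypothetical sketch of the notification-overlap analysis: classify
# traveling sex offenders by whether USNCB, ICE, or both sent a
# notification about them. Identifiers are invented for illustration.

def compare_notifications(usncb_ids: set, ice_ids: set) -> dict:
    """Split subjects into those covered by both agencies (duplication)
    and those covered by only one agency (fragmentation)."""
    return {
        "both": usncb_ids & ice_ids,        # set intersection
        "usncb_only": usncb_ids - ice_ids,  # set difference
        "ice_only": ice_ids - usncb_ids,
    }

# Illustrative example: each set holds anonymized subject identifiers.
usncb = {"A01", "A02", "A03", "A04"}
ice = {"A03", "A04", "A05"}

result = compare_notifications(usncb, ice)
print(sorted(result["usncb_only"]))  # ['A01', 'A02']
print(sorted(result["ice_only"]))    # ['A05']
```

A comparison of this kind makes the counts reported later in this document (offenders notified on by one agency but not the other) straightforward to compute from the two agencies' notification lists.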
Furthermore, we contacted federal and foreign officials in select countries to obtain information on how they learn of registered sex offenders traveling from the United States to the countries in which they are located; how, if at all, they use this information to help ensure public safety; and any limitations or benefits of receiving this information. The countries we selected are Australia, Canada, Mexico, the Philippines, Thailand, and the United Kingdom. We selected Mexico, the Philippines, and Thailand because, on the basis of data we obtained from ICE, these are among the countries most frequented by child sex tourists—that is, individuals who travel to another country for the purpose of engaging in inappropriate sexual activity with a child. We selected Australia, Canada, and the United Kingdom because they are known to have national sex offender registries, similar to those of the United States, and have expressed an interest in receiving information from the U.S. government on sex offenders traveling to their countries. For each of these countries, we reached out to the ICE attachés stationed in country as well as a representative from the country’s national law enforcement agency. The perspectives of these officials are not representative, but provide valuable insights. We conducted this performance audit from January 2012 to February 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our analysis based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our analysis based on our audit objectives. CBP’s National Targeting Center (NTC) reviews air and sea passenger data to identify registered sex offenders who plan to travel internationally. NTC shares this information with the U.S. Marshals and ICE. The U.S. 
Marshals then refers these travelers to USNCB, and USNCB sends notifications to its counterparts via INTERPOL to foreign countries so that these countries can take action they deem appropriate to help ensure public safety, such as refusing entry. Figure 3 shows, according to NTC, how many registered sex offenders NTC identified and referred to USNCB (through the U.S. Marshals) who were ultimately refused entry by the foreign country during fiscal year 2012. In addition to the contact named above, Kristy Brown, Assistant Director; Su Jin Yon, Analyst-in-Charge; and Alicia Loucks made significant contributions to the report. Other key contributors were Susan Baker, Gary Bianchi, Frances Cook, Anthony DeFrank, Heather Dunahoo, Michele Fejfar, Eric Hauswirth, Richard Eiserman, Lara Miklozek, Linda Miller, Anthony Moran, Sheena Smith, Julie Spetz, and John Vocino.
In recent years, certain individuals who had been convicted of a sex offense in the United States have traveled overseas and committed offenses against children. GAO was asked to review what relevant federal agencies—including DOJ, DHS, and the Department of State—are doing with regard to registered sex offenders traveling or living abroad. This report addresses the following questions: (1) How and to what extent does the federal government determine whether registered sex offenders are leaving or returning to the United States? (2) How and to what extent have federal agencies notified foreign officials about registered sex offenders traveling internationally? GAO analyzed August and September 2012 data from the U.S. Marshals, USNCB, and ICE on registered sex offenders who traveled internationally. GAO also interviewed relevant agency officials and surveyed officials from all 50 states, 5 territories, and the District of Columbia to determine the extent to which they identify and use information on traveling sex offenders. Three federal agencies—U.S. Marshals, International Criminal Police Organization (INTERPOL) Washington – U.S. National Central Bureau (USNCB), and U.S. Immigration and Customs Enforcement (ICE)—use information from state, local, territorial, and tribal jurisdictions, as well as passenger data from the U.S. Customs and Border Protection (CBP), to identify registered sex offenders traveling outside of the United States. Similarly, these agencies may be notified of registered sex offenders traveling to the United States through several means, including tips from foreign officials or when CBP queries the registered sex offender’s biographic information at a port of entry and finds that the offender has a criminal history. However, none of these sources provides complete or comprehensive information on registered sex offenders leaving or returning to the United States. 
For example, CBP does not routinely query individuals who leave the United States by commercial bus, private vessel, private vehicle, or by foot, in which case CBP may not be able to determine if any of these individuals are registered sex offenders. In addition, foreign officials do not always monitor when a registered sex offender is returning to the United States. The Federal Bureau of Investigation (FBI), working with other agencies, is developing a process that will address some of these limitations. Specifically, the FBI will send an automated notice to the U.S. Marshals and law enforcement officials in the jurisdictions where the sex offender is registered that the offender is traveling, to the extent that the offender's biographical information is queried at the port of entry. However, because ICE has not requested to receive the automated notifications, ICE will not be notified of registered sex offenders who leave the country via a land port of entry whose biographical information is queried. USNCB and ICE have notified foreign officials of some registered sex offenders leaving and returning to the country, but could increase the number and content of these notifications. USNCB notifies its foreign INTERPOL counterparts about registered sex offenders traveling internationally, and ICE notifies its foreign law enforcement counterparts about traveling sex offenders who had committed an offense against a child. USNCB provides more detailed information than ICE because USNCB uses offenders' self-reported travel information that some jurisdictions collect, which may include details such as hotel information. Since ICE uses passenger data, it does not have these details. Also, data from August 1 to September 30, 2012, showed that the two agencies had significant differences in the number of offenders they identified in notifications. 
USNCB sent notifications on 105 traveling sex offenders that ICE did not, and, conversely, ICE sent notifications on 100 traveling sex offenders that USNCB did not. This is in part because the two agencies rely on different information sources and do not share information with one another. Taking steps to ensure that these agencies have all available information on the same registered sex offenders traveling internationally could help ensure that the agencies are providing more comprehensive information to their foreign counterparts to help inform public safety decisions. GAO recommends that ICE consider receiving the automated notifications and that DOJ and DHS take steps to ensure that USNCB and ICE (1) have information on the same number of traveling registered sex offenders and (2) have access to the same level of detail about each traveling registered sex offender. USNCB, within DOJ, and DHS concurred with our recommendations.
The “defense industrial base” includes all commercial and government-owned facilities that are responsible for the design, production, delivery, and maintenance of military weapon systems, subsystems, and components or parts that fulfill U.S. military requirements. The portion of the defense industrial base that is assigned to and forms an essential part of DOD’s organization is referred to as the “organic defense industrial base.” The organic defense industrial base consists of resource providers, acquisition and sustainment planners, and manufacturing and maintenance performers, such as DOD’s government-owned manufacturing arsenals and maintenance depots. These government-owned and -operated installations, including the three manufacturing arsenals, provide services for a variety of customers, including the Army, the Navy, the Air Force, and some non-DOD agencies and foreign countries. The Army’s Industrial Operations activity group, a subset of the organic defense industrial base, includes the Army’s manufacturing arsenals, maintenance depots, ammunition plants, and storage sites. According to the Army’s Organic Industrial Base Strategic Plan 2012-2022, workforces and infrastructures of the Army Industrial Operations activity group are to be sized and adjusted accordingly over time to sustain critical manufacturing and core depot capabilities to support war-fighting equipment during current and future contingency operations. Within the Army, the Office of the ASA (ALT) serves in an oversight capacity and is responsible for establishing the policy and goals for the Army’s industrial base program. The three manufacturing arsenals are operated by the Army, managed by AMC, and under the direct command and control of the Army’s Life Cycle Management Commands. Each manufacturing arsenal is aligned with a command that oversees the kind of work done at that arsenal. 
Specifically, Rock Island and Watervliet Arsenals are aligned with the TACOM Life Cycle Management Command and its mission of developing, acquiring, fielding, and sustaining ground systems. The work performed at Pine Bluff Arsenal is aligned with the Joint Munitions Command Life Cycle Management Command, the logistics integrator for life-cycle management of ammunition. While the manufacturing arsenals are under the Army’s operational control, organizations within the Office of the Secretary of Defense (OSD) perform policy, planning, program, and resource management functions for the industrial base, which includes the arsenals. Within OSD, the Office of the Assistant Secretary of Defense for Logistics and Materiel Readiness serves as the principal logistics official within senior DOD management. This office prescribes policies and procedures for the conduct of logistics, maintenance, materiel readiness, strategic mobility, and sustainment support within DOD. For example, the Office of the Deputy Assistant Secretary of Defense for Maintenance Policy and Programs, under the authority of the Office of the Assistant Secretary of Defense for Logistics and Materiel Readiness, held primary responsibility for consolidating and submitting DOD’s September 2014 report to Congress. Additionally, the Defense Logistics Agency (DLA) is DOD’s logistics combat support agency, whose primary role is to provide supplies and services to America’s military forces and sometimes procures items from the manufacturing arsenals. Figure 1 shows the structure of DOD’s manufacturing arsenal organization, including relevant DOD entities. Each manufacturing arsenal has been designated by the Secretary of the Army as a Center of Industrial and Technical Excellence. This designation provides authority under section 2474 of Title 10 U.S. Code to partner with and lease facilities to industry on programs relating to core maintenance and technical expertise. 
Pine Bluff Arsenal is designated as a Center of Industrial and Technical Excellence for chemical and biological defense equipment. Rock Island Arsenal Joint Manufacturing and Technology Center is designated as a Center of Industrial and Technical Excellence for mobile maintenance systems; foundry operations; and add-on armor design, development, and prototype fabrication. Watervliet Arsenal Joint Manufacturing and Technology Center is designated as a Center of Industrial and Technical Excellence for manufacturing cannon and mortar systems. Figure 2 shows the manufacturing arsenals’ geographic locations within the United States and their missions. The manufacturing arsenals are subject to various legislative provisions that affect the work they do and how this work is allocated. These include section 4532 of Title 10 U.S. Code, commonly referred to as the Army Arsenal Act, and other statutes that authorize the establishment of public-private partnerships, including direct sales, research and development, and facilities use agreements, any of which may affect how much work arsenals undertake. The Army Arsenal Act specifically requires the Army to have its supplies made in U.S. government factories or manufacturing arsenals, provided that they can produce the supplies on an economical basis. The economical basis determination, according to the Army Arsenal Act’s implementing guidance, is based on a comparison of the government’s manufacturing costs with the cost of purchasing the supplies commercially—a process commonly referred to as the “make-or-buy” analysis. More broadly, section 2535 of Title 10 U.S. Code, which applies to all the armed forces, declares that it is Congress’s intended policy that “to the maximum extent practicable, reliance will be placed on private industry for support of defense production.” There are also a number of authorities that the Army’s manufacturing arsenals may use to establish commercial-like relationships with other entities. 
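The "make-or-buy" determination under the Army Arsenal Act, described above, amounts to comparing the government's cost to manufacture supplies at an arsenal with the cost of purchasing them commercially. The function and cost figures below are a minimal sketch of that comparison under invented assumptions; they are not the Army's actual cost model or implementing guidance.

```python
# Hypothetical sketch of an "economical basis" comparison: make the item
# at the arsenal when government production costs no more than buying
# commercially. All figures are illustrative assumptions.

def make_or_buy(arsenal_unit_cost: float,
                commercial_unit_cost: float,
                quantity: int) -> str:
    """Return 'make' when arsenal production is the economical choice,
    otherwise 'buy'."""
    make_total = arsenal_unit_cost * quantity
    buy_total = commercial_unit_cost * quantity
    return "make" if make_total <= buy_total else "buy"

# Illustrative comparison for a 500-unit order.
print(make_or_buy(arsenal_unit_cost=1200.0,
                  commercial_unit_cost=1350.0,
                  quantity=500))  # prints "make"
```

In practice, the government's side of the comparison would be driven by the arsenal's hourly rates, which, as discussed below, rise when workload falls, making the "make" outcome harder to reach.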
For example, section 2474 of Title 10 U.S. Code authorizes the establishment of public-private partnerships between Army manufacturing arsenals and private entities. Additionally, section 4544 of Title 10 U.S. Code is an example of a direct sales statute, which gives the manufacturing arsenals the authority to enter into cooperative agreements—such as sales and leasing contracts—with non-Army entities, both public and private. Funding for the manufacturing arsenals is managed through the Army Working Capital Fund. Section 2208 of Title 10 U.S. Code authorizes the Secretary of Defense to establish Working Capital Funds to finance inventories of supplies and industrial activities that provide common services such as repair, manufacturing, or remanufacturing. When a private entity, such as the prime integrator of a major weapon system, or a government component or agency places an order with a manufacturing arsenal for equipment or services, payments are made to the Working Capital Fund on a reimbursable basis. Charges for goods and services provided through the fund include the full costs of the goods and services provided and amounts set for the depreciation of capital assets. According to DOD financial reports, in fiscal year 2014, the combined total operating expenses incurred at the three manufacturing arsenals— which were applied to the Army Working Capital Fund—were approximately $400 million. The Army’s Industrial Operations activity group sets the rates customers will pay the manufacturing arsenals for equipment or services they order, on a direct labor-hour basis. The process for setting rates begins 18 months before the start of the fiscal year in which the manufacturing will be performed. Based on the anticipated level of future work, the managers of the manufacturing arsenals propose hourly rates that will allow them to recover all of their operating costs. 
Then, the proposed business plan and rates and the manufacturing arsenal budget are approved through the Army chain of command. Ultimately, the rates are set by the Department of the Army and DOD. Although the goal of the Army Working Capital Fund—unlike the goal of a profit-oriented commercial business—is to be self-supporting by recovering only the cost of supplies, services performed, and applicable administrative expenses, a manufacturing arsenal may end the year with more or less resources than it had originally anticipated—depending on whether or not its actual costs and workload over the fiscal year were as forecasted. In such cases, there may be a rate increase in a subsequent year in an effort to offset the losses of a prior year or a rate reduction to offset gains. Additionally, every year the Army is required to include, in annual budget documents submitted to Congress to support the President’s fiscal year budget request, an estimate of funds required in that fiscal year to cover the costs of unutilized or underutilized plant capacity at Army arsenals. This funding is referred to as Industrial Mobilization Capacity funding. However, Congress may or may not appropriate funds specifically for this purpose from one year to the next. According to data provided by DOD, two of the manufacturing arsenals—Pine Bluff and Watervliet—received funding through the Industrial Mobilization Capacity account, a subaccount of the Army’s Working Capital Fund, every fiscal year from 2000 through 2006—and Rock Island Arsenal received funds from fiscal year 2001 through 2007—to cover the costs of unutilized or underutilized plant capacity. In fiscal years 2008 through 2013, the manufacturing arsenals did not receive funding for Industrial Mobilization Capacity, although appropriations specifically for this purpose were again made in fiscal year 2014 in the amount of $150 million. 
Additionally, in fiscal year 2015, Congress appropriated $225 million to the Working Capital Fund for maintaining competitive rates at arsenals, a purpose which is distinct from covering costs of underutilized or unutilized plant capacity. The levels of work at the manufacturing arsenals have changed over time. Specifically, according to the Secretary of the Army, there has been a precipitous drop in demand for Army materiel that has resulted in a decline in workload and an increase in overhead rates at the manufacturing arsenals. Army personnel data for these arsenals indicates that, during fiscal years 2000 through 2002 (prior to the start of operations in Iraq), combined total workload at the three manufacturing arsenals ranged from approximately 1.5 million to 1.7 million direct labor hours each year. During fiscal years 2003 through 2012 (during operations in Iraq and Afghanistan), the combined total workload at the three manufacturing arsenals each year ranged from approximately 1.6 million to 3.0 million direct labor hours. In fiscal years 2013 and 2014 (after operations in Iraq had ended), the combined total workload each year ranged from approximately 1.2 million to 1.7 million direct labor hours. Because the manufacturing arsenals operate under the Army Working Capital Fund, they must include all costs of running the installation—such as costs for security and facility maintenance—in the rates they charge customers. As a result, when the volume of work decreases, as it did in fiscal years 2013 and 2014, those fixed costs of operation must be spread over a shrinking base of work; the result is an increased cost per unit and higher rates. For example, according to a briefing developed by AMC and presented to Congress in July 2014, during fiscal years 2013 and 2014, the stabilized rate at Pine Bluff increased from approximately $126 per hour to approximately $135 per hour, which represented approximately a 7 percent increase. 
In the same time frame, the stabilized rate at Rock Island increased from approximately $112 per hour to approximately $137 per hour—approximately a 23 percent increase. Also during this time frame, the stabilized rate at Watervliet increased from approximately $195 per hour to approximately $202 per hour—approximately a 4 percent increase. According to AMC’s briefing, in July 2014 the Army projected that stabilized rates for fiscal year 2015 would have been significantly higher without the infusion of Arsenal Sustainment Initiative funds to attempt to make arsenal rates more competitive. For example, the Army projected that the stabilized rate for Rock Island would increase to approximately $285 per hour, while the stabilized rate at Watervliet was projected to increase to approximately $400 per hour—approximately double Watervliet’s rate for fiscal year 2014. Officials at the manufacturing arsenals told us that even though they actively market the arsenals’ capabilities to DOD program officials, as their rates increase, the arsenals lose even more customers and workload. These officials explained that this results in a continuing cycle of decreasing requirements and increasing rates, a pattern which several DOD officials we interviewed referred to as a “death spiral.” Over the past 3 years, DOD has taken various actions to assign work to the manufacturing arsenals, but these actions have not generated sufficient revenue to recover their operating expenses. Moreover, we found that DOD may not always appropriately consider the manufacturing arsenals as a source of manufacture in a given situation, because it does not have clear, step-by-step implementing guidance on how to conduct make-or-buy analyses to determine whether to procure an item from the arsenals or the private sector. 
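The “death spiral” dynamic has a simple arithmetic core: fixed installation costs, such as security and facility maintenance, do not fall when workload falls, so they must be spread over fewer direct labor hours. The sketch below uses hypothetical cost figures, not actual arsenal data:

```python
FIXED_COSTS = 60e6        # hypothetical annual fixed costs, in dollars
VARIABLE_PER_HOUR = 70.0  # hypothetical variable cost per direct labor hour

def hourly_rate(direct_labor_hours):
    """Fully burdened rate: fixed costs spread over the workload base,
    plus the variable cost of each hour worked."""
    return FIXED_COSTS / direct_labor_hours + VARIABLE_PER_HOUR

rate_at_full_workload = hourly_rate(1.0e6)  # 130.0 dollars/hour
rate_after_decline = hourly_rate(0.6e6)     # 170.0 dollars/hour

# In this illustration, a 40 percent drop in hours raises the rate by
# roughly 31 percent, which can drive away more customers and shrink
# the workload base further.
```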
In response to the manufacturing arsenals’ inability to generate sufficient revenue to recover their operating expenses, Congress appropriated funds in fiscal years 2014 and 2015 to help recover the arsenals’ operating expenses and allow them to maintain competitive rates. Since 2012, DOD has taken various actions to assign work to the manufacturing arsenals. For example, in December 2012 ASA (ALT) issued a memorandum directing Program Executive Officers, Program Managers, and Product Support Managers in the Army’s acquisition community to use a market research tool called the Materiel Enterprise Capabilities Database when conducting research to determine whether an item should be made at a manufacturing arsenal or bought from the private sector. The tool, available to all DOD components, provides access to information on the capabilities that are available at each of the manufacturing arsenals. According to Army officials, the intent of this effort is to make the Army and other DOD components more aware of the manufacturing arsenals’ capabilities, in the hope that such increased awareness will lead these organizations to send the arsenals more work. Additionally, in an effort to encourage Program Executive Officers and Program Managers to take advantage of the manufacturing arsenals’ capabilities, ASA (ALT) issued a memorandum in May 2013 directing Program Executive Officers to report annually on the work they have provided to the arsenals. The memo also stated that, when possible, decisions to use the manufacturing arsenals, contractor support, or some combination of arsenals and contractor support should occur early in the acquisition process, for example, during market research or when conducting make-or-buy analyses. Subsequently, in April 2014, the Secretary of the Army directed AMC to work directly with DLA to develop a plan and schedule to “make the manufacturing arsenals a DLA source of supply” for Army-related manufacturing requirements. 
The secretary’s memorandum noted that it was the Army’s position that all Army-related parts in the DLA inventory that the manufacturing arsenals were capable of manufacturing should first be obtained from the arsenals. In June 2014, DLA determined that it had statutory authority to order from the Army’s manufacturing arsenals when procuring supplies for the Army, but noted that the Army would still need to conduct make-or-buy analyses and provide DLA with a list of items that must be manufactured at the arsenals. In July 2014, AMC requested that DLA “make the manufacturing arsenals a primary source of supply” for a list of 133 items, such as mounting plates and brackets, that the arsenals identified as items they had manufactured previously. In fiscal year 2014—as part of its implementation of section 8141 of the Fiscal Year 2014 Consolidated Appropriations Act directing the Secretary of the Army to assign sufficient work to the manufacturing arsenals to maintain their critical capabilities—ASA (ALT) reviewed Program Executive Officers’ portfolios to identify work that could be directed to the arsenals in order to reduce the likelihood of a rate increase and help to maintain critical capabilities. ASA (ALT) directed that a program management review be conducted every 6 months on the status of the originally identified workload, which included 26 projects—such as the M320 grenade launcher—to be directed to the manufacturing arsenals during fiscal years 2015 and 2016. Finally, as part of its September 2014 Report on Manufacturing Arsenal Study, DOD stated that it would continue to encourage the arsenals to use public-private partnerships. These public-private partnerships include, for example, commercial tenants who either rent space or provide services in kind to the manufacturing arsenals. The intent of the partnerships is to reduce the arsenals’ overhead, maintenance, and product costs. 
As of October 2014, AMC had also developed a draft implementation guide for a business development and partnership program to coordinate partnering management, with the objective of helping to positively affect the net operating result of AMC activities. The various actions that DOD has taken to assign work to the manufacturing arsenals, as described above, have not generated sufficient revenue to recover the arsenals’ operating expenses and do not ensure that DOD is appropriately considering the arsenals as a source of manufacture. While DOD’s efforts to assign work have increased revenue, the increases have been small relative to the manufacturing arsenals’ operating expenses. For example, DLA officials told us that as of August 2014, DLA had provided approximately $10 million of work to the manufacturing arsenals in fiscal year 2014. This work generated enough revenue to recover approximately 2 percent of the arsenals’ total expenses in that time frame. AMC’s Assistant Deputy Chief of Staff for Logistics Integration explained that, as of August 2014, the manufacturing arsenals were engaged in 22 public-private partnerships that yielded a total of approximately $11 million in revenue for the arsenals. This income would recover approximately 3 percent of the manufacturing arsenals’ total expenses in fiscal year 2014. Further, multiple officials from OSD, the Army, and the manufacturing arsenals told us that while public-private partnerships are a good source of a small amount of revenue, they are not the long-term solution to the arsenals’ ongoing shortage of work. In addition, DOD’s actions to assign work to the manufacturing arsenals have not ensured that they are consistently considered as a source of manufacture. Specifically, the Army’s effort to obtain work from DLA involves make-or-buy analyses that are to be conducted to determine whether to purchase an item from DOD’s industrial facilities, such as the manufacturing arsenals, or from the private sector. 
However, based on our review of relevant documents and interviews with DOD and Army officials, we found that the Army does not have clear, step-by-step implementing guidance—such as an instruction or guidebook—on how to conduct make-or-buy analyses. Army Regulation 700-90 states that ASA (ALT) is responsible for determining the source from which an item should be procured and directs Program Managers or Program Executive Officers to conduct the analyses that inform these determinations. The Army’s regulation contains only broad descriptions of how to conduct make-or-buy analyses. It notes that the cost estimate for making the item at a manufacturing arsenal should include the direct costs and only those indirect costs that would change as a result of changes in the number of items manufactured. Additionally, while the Army has issued a Cost Benefit Analysis Guide that provides guidance on conducting cost-benefit analyses, this guide does not include specific, step-by-step information on the policies, responsibilities, or procedures for conducting make-or-buy analyses. According to AMC officials, no prescriptive guidance on how to conduct these analyses has been issued, so there is flexibility in how they are conducted. While the existing regulation may provide flexibility, officials responsible for conducting these analyses at two of the three manufacturing arsenals stated that the guidance is not clear and that, as a result, they have requested more detailed, step-by-step guidance to ensure that they conduct these analyses consistently. For example, officials at one manufacturing arsenal told us that in conducting make-or-buy analyses, they are supposed to remove sunk costs when developing their estimates. However, these officials, who are responsible for developing cost estimates, told us they did not know how to calculate their rates without including the sunk costs. 
Officials at another manufacturing arsenal told us that the process for conducting make-or-buy analyses is unclear and expressed their opinion that a joint DOD instruction is needed to better implement the process. Federal internal control standards emphasize the importance of establishing detailed policies, procedures, and practices to ensure that such guidance is an integral part of operations. According to Army officials, having clear implementing guidance to help ensure that make-or-buy analyses are consistently conducted would not guarantee that the manufacturing arsenals receive sufficient workload to recover their operating expenses, but it would ensure that they are appropriately considered. In the absence of clear, step-by-step implementing guidance, such as an instruction or guidebook, that outlines how to conduct make-or-buy analyses, DOD cannot provide reasonable assurance that it is appropriately considering the manufacturing arsenals as a potential source of manufacture, thereby potentially limiting the arsenals’ ability to generate revenues. Because DOD’s efforts to assign work to the manufacturing arsenals have not generated sufficient revenue, Congress appropriated $150 million to the arsenals in fiscal year 2014 to recover their operating expenses and maintain competitive rates. These amounts were to be made available in the Industrial Mobilization Capacity subaccount on the condition that the Secretary of the Army assign sufficient workload to the arsenals to sustain their critical manufacturing capabilities and ensure cost efficiency, among other goals. DOD analyzed the financial positions, projected rates, and future workloads of the three manufacturing arsenals and allocated the funding based on the relative need of each. 
AMC determined that, without this funding, the projected losses at Rock Island and Watervliet Arsenals would have placed those installations in a negative financial position by the end of fiscal year 2014 and that these manufacturing arsenals would need to raise their rates substantially to recover their operating losses. Then, in fiscal year 2015, Congress provided $225 million to the Army’s Working Capital Fund to help maintain competitive rates at the manufacturing arsenals. AMC allocated the funds Congress appropriated in fiscal years 2014 and 2015 to the arsenals as follows:

Rock Island Arsenal: $110 million in fiscal year 2014 and $135 million in fiscal year 2015.

Watervliet Arsenal: $30 million in fiscal year 2014 and $80 million in fiscal year 2015.

Pine Bluff Arsenal: $10 million in fiscal year 2014 and $10 million in fiscal year 2015.

According to DOD accounting reports and Army officials, the appropriated funds were directly applied to the manufacturing arsenals’ revenues to offset losses and reduce rates. However, these funds were not accompanied by any work. Officials at each of the three manufacturing arsenals told us that while the funds from Congress were helpful, they would prefer to receive additional work instead, to recover operating expenses, lower rates, and sustain the arsenals’ manufacturing capabilities.

DOD is not strategically positioned to sustain the manufacturing arsenals’ critical capabilities because, although it has a strategic plan that covers the manufacturing arsenals, it has not identified fundamental elements, such as time frames, necessary to implement this plan and achieve its goals and objectives. 
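As a quick arithmetic check, the arsenal-by-arsenal allocations listed above account for the full amounts Congress appropriated in each fiscal year:

```python
# AMC's reported allocations of the fiscal year 2014 and 2015
# appropriations, in millions of dollars.
allocations = {
    "Rock Island": {2014: 110, 2015: 135},
    "Watervliet": {2014: 30, 2015: 80},
    "Pine Bluff": {2014: 10, 2015: 10},
}

total_fy2014 = sum(a[2014] for a in allocations.values())  # 150
total_fy2015 = sum(a[2015] for a in allocations.values())  # 225
```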
Furthermore, because DOD has not established a process for identifying the manufacturing arsenals’ critical capabilities, developed a method for determining a minimum level of workload to sustain these capabilities at each of the arsenals, and identified and implemented this process and method via guidance—such as a DOD instruction—the department is not positioned to determine the minimum workloads or levels of manufacturing equipment and personnel needed to sustain these capabilities. In 2012, the Army issued its Organic Industrial Base Strategic Plan 2012-2022 (strategic plan), which identifies several goals and objectives related to the three manufacturing arsenals:

Institutionalize Army sustainment functions so that the Army’s priorities inform the manufacturing arsenals’ production schedules.

Assess which competencies and capabilities are essential to the organic industrial base.

Fund 100 percent of the minimum level of work the manufacturing arsenals must have in order to exercise their critical capabilities sufficiently to sustain them.

The strategic plan also outlines the following objectives related to the arsenals:

Identify and document critical manufacturing capabilities that the manufacturing arsenals should have.

Adjust equipment and personnel at the manufacturing arsenals to sustain these critical manufacturing capabilities.

Establish an integrated Human Capital Investment Plan that supports current and future capability requirements.

Continue to promote public-private partnerships.

However, the Army has not identified other fundamental elements associated with the achievement of the strategic plan’s goals and objectives. Standard practices for project management call for agencies to conceptualize, define, and document specific goals and objectives in their planning processes and to identify the appropriate steps, milestones, time frames, and resources they need to achieve those goals and objectives. 
The Army’s strategic plan does not contain any of these fundamental elements. Furthermore, Army officials informed us that there are no other documents, such as an implementation plan, that contain these fundamental elements and that they have no plan to document them. They explained that such documentation is not needed, because they fully understand the efforts being taken to implement this strategy, including hosting summits to share ideas on how to increase workload levels at the manufacturing arsenals, working directly with DLA to make the arsenals a DLA source of manufacture, and collaborating with other countries to potentially increase foreign military sales of arsenal goods and services. However, unless it identifies and documents these fundamental elements by including information that would be useful in determining DOD’s progress toward achieving its stated goals and objectives, the department is not strategically positioned to sustain the manufacturing arsenals’ critical capabilities. Achievement of the strategic plan’s goals and objectives is predicated on the identification of the manufacturing arsenals’ critical capabilities. However, these critical capabilities have not been identified and documented, as called for in the 2012 strategic plan. Further, DOD has not developed and implemented an agreed-upon process for determining these capabilities, although such efforts are under way. DOD has previously undertaken efforts to identify the manufacturing arsenals’ critical capabilities, but those earlier efforts had shortcomings. For example, in April and May 2013, each of the three manufacturing arsenals submitted studies to AMC that they had conducted to identify their critical capabilities and the minimum workload they would need in order to sustain those capabilities. 
These studies were completed in response to the Senate Armed Services Committee’s report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2013, which directed the Secretary of Defense to identify critical manufacturing capabilities and capacities that should be government-owned and -operated, as well as the level of work needed to sustain those capabilities. However, AMC officials responsible for overseeing these studies, as well as officials from OSD and the manufacturing arsenals, told us that the arsenals were not given a standardized, consistent method to follow in identifying their critical capabilities and minimum workloads. Rather, AMC officials told us, they relied on the manufacturing arsenals to each develop their own unique method. A senior OSD official described the resulting process as unsound: each manufacturing arsenal declared what it believed to be its own critical capabilities in an unstructured way and based its analysis on then-current personnel levels. A senior official at ASA (ALT) expressed a similar opinion, saying that the manufacturing arsenals had labeled everything they were doing at that time, including assembly work, as critical. Nonetheless, the results of each of the three manufacturing arsenals’ studies were consolidated into a single report that listed critical capabilities for each of the three arsenals and the estimated workloads necessary to sustain them; this report was submitted to Congress in August 2013. Recognizing the shortcomings of earlier efforts to identify critical capabilities, ODASD (MPP) commissioned a study in March 2013 to (1) establish a process for identifying critical manufacturing capabilities and (2) develop a method to identify the minimum workloads needed to sustain these capabilities. 
In mid-August 2015, an ODASD (MPP) official told us that OSD hoped to have this process for identifying critical capabilities completed by December 2015, more than 2 years after the effort had begun. This official explained that the working group focused on this effort did not initially include representation from all of the relevant stakeholders, including representatives from the manufacturing arsenals. The official told us that, as a result, the effort had been temporarily paused so that the working group conducting the study could incorporate the new stakeholders. According to ODASD (MPP) officials, the process being developed to identify critical capabilities and the method for determining the minimum workload needed to sustain them will form the basis of a DOD instruction applicable to all of the military services. ODASD (MPP) officials explained that they have not begun drafting the instruction, and they could not provide an estimated time frame for when the instruction would be issued. They noted that even once the related study is completed, it could take several years to finalize and issue the instruction. ODASD (MPP) officials stated that they expect the development of the instruction to be challenging, given the divergent views within Army leadership, for example, on what critical capabilities are needed at the manufacturing arsenals. Until the department completes the study it commissioned in March 2013 and issues its implementing instruction, DOD will continue to lack an agreed-upon process for identifying the manufacturing arsenals’ critical capabilities and a method for determining the minimum workload needed to sustain those capabilities. Because DOD has not identified the manufacturing arsenals’ critical capabilities or determined the minimum levels of workload needed to sustain these capabilities, as called for in the strategic plan, DOD is not positioned to achieve the strategic plan’s other goals and objectives. 
For example, it cannot determine the amount of resources that would be needed to assign 100 percent of the minimum level of work required to sustain the critical capabilities. Further, it is not able to adjust the equipment and personnel levels at the manufacturing arsenals to levels that would sustain these capabilities. Regarding equipment and personnel levels, there have been some efforts to make adjustments to reduce operating expenses. For example, in mid-2014, Watervliet Arsenal assessed its manufacturing equipment and identified several dozen machines that it could potentially lay away or excess to avoid some operating expenses. Watervliet projected that it could save over $250,000 within the first year as a result of these actions. Officials at Watervliet stated that they were moving forward with these actions. Additionally, in December 2013, AMC conducted an analysis of personnel levels at Rock Island Arsenal and recommended that Rock Island reduce its workforce from its approximately 1,200 personnel to between 500 and 600 personnel. According to AMC officials, this reduction has not been made. However, unless DOD first identifies the manufacturing arsenals’ critical capabilities, it will be unable to determine whether such adjustments of equipment and personnel levels will enable the arsenals to sustain those capabilities. DOD’s September 2014 Report on Manufacturing Arsenal Study met the statutory requirements to address seven different reporting elements. However, we found that additional information and coordination would have made the report more consistent with relevant generally accepted research presentation standards for a defense research study, and therefore would have helped decision makers to identify and evaluate the information presented. 
DOD submitted its report to Congress in response to the mandate in section 322 of the National Defense Authorization Act for Fiscal Year 2014, which required the Secretary of Defense to conduct a one-time review of the manufacturing arsenals, covering the seven elements specified in the statute. The report was to include the results of reviews of

1. current and expected manufacturing requirements across the military services and Defense Agencies, to identify critical manufacturing competencies and supplies, components, end items, parts, assemblies, and sub-assemblies for which there is no or limited domestic commercial source and which are appropriate for manufacturing within an arsenal owned by the United States in order to support critical manufacturing capabilities;

2. how DOD can more effectively use and manage public-private partnerships to preserve critical industrial capabilities at the manufacturing arsenals for future national security requirements, while providing the Department of the Army with a return on its investment;

3. the effectiveness of the strategy of DOD to assign work to be performed at each of the arsenals and the potential for alternative strategies that could better identify work to be performed at each arsenal;

4. the impact of the rate structure driven by the Department of the Army’s working capital funds on public-private partnerships at each arsenal;

5. the extent to which operations at each arsenal can be streamlined, improved, or enhanced;

6. the effectiveness of the implementation by the Department of the Army of cooperative agreements, authorized at manufacturing arsenals under section 4544 of title 10, United States Code; and

7. mechanisms within DOD for ensuring that appropriate consideration is given to the unique manufacturing capabilities of arsenals for manufacturing requirements of DOD for which there is no or limited domestic commercial capability. 
Based on our review of the report, we determined that DOD met the statutory requirements, because the report includes each of the seven elements and related content. The 5-page report contains subsections specific to each element, primarily within the context of the cost recovery requirements of the Working Capital Fund and the declining workload experienced by the arsenals. For example, in response to the second element regarding how DOD can more effectively use and manage public-private partnerships, DOD reported that, while it will continue to rely on existing legal authorities for public-private partnerships, more effective use of such partnerships requires the stabilization of arsenal rates and the use of alternate rate structures. The report states the arsenals have limited flexibility to adjust their rates and that the need for the arsenals to charge a fully burdened rate that recovers past operating losses causes them to lose potential opportunities for public-private partnerships. DOD also stated that it can more effectively utilize public-private partnerships by increasing the military departments’ and defense agencies’ knowledge of the arsenals’ capabilities through various forums, such as industry days and DOD maintenance symposiums. Additionally, for the fourth element, regarding the impact of the rate structure driven by the Department of the Army’s Working Capital Funds on the public-private partnerships at each arsenal, DOD reported that the arsenals must recover their full costs from customer charges, since they are working capital-funded activities. DOD explained that some of the costs borne by the arsenals are attributed to capacity required for surge operations and some are for workforce in excess of workload. The report states that, when the full costs are apportioned over a small workload base, the impact is higher rates, which discourages partners and customers from using the arsenals. 
DOD’s report reiterated that the need for arsenals to charge a fully burdened rate that recovers past operating losses causes them to lose potential opportunities for public-private partnerships. While DOD’s report met the statutory requirements, we determined that DOD could have taken actions that we believe would have made the report more consistent with relevant generally accepted research presentation standards for a defense research study and, therefore, would have made the presented results more useful to decision makers. Specifically, we found that DOD’s presentation of most of the reporting elements could have been more sound, complete, and clear, which would have facilitated decision makers’ evaluation of the information presented. Furthermore, DOD could have coordinated the results of its study with participants and stakeholders—and obtained and considered their comments—before finalizing its report, to better ensure its soundness, completeness, and clarity. Generally accepted research standards for a defense research study define a sound and complete defense research study as one that provides, among other things, timely, complete, and relevant information for the client and stakeholders. Of these, there is a subset of standards for presenting the results from such a study. The extent to which a report’s presentation of results is consistent with these relevant standards is an indication of the ease with which the evidence can be evaluated and of the soundness and completeness of the report and, thus, its usefulness in enabling decision makers to make fully informed decisions. These GAO-developed standards are consistent with current Office of Management and Budget and DOD guidelines on ensuring and maximizing the quality of information disseminated to the public. We determined that the following presentation standards are relevant, given our objectives and the content of DOD’s report:

1. Does the report present an assessment that is well documented and conclusions that are supported by the analyses?

2. Are the report’s conclusions sound and complete?

3. Are the study results presented in a clear manner?

4. Are study participants/stakeholders informed of the study results and recommendations?

In applying the first three standards on the presentation of results, which describe the soundness, completeness, and clarity of the information presented, we found that DOD’s report was consistent with those standards for two of the seven reporting elements. First, the report details how rate structures that are driven by the Army Working Capital Fund impact DOD’s ability to maintain public-private partnerships at the manufacturing arsenals. In doing so, it clearly describes how the manufacturing arsenals must charge a rate to their public partners and customers that is determined by the Army Working Capital Fund, and how that rate can increase when work to be performed at the arsenals decreases. According to DOD’s report, higher rates can lead to the loss of current and potential opportunities for public-private partnerships at the manufacturing arsenals. Second, the report discusses the extent to which operations at each manufacturing arsenal could be streamlined, improved, or enhanced. In doing so, the report describes in detail the limited flexibility the manufacturing arsenals have to streamline, improve, or enhance their operations. For example, the report notes that, given that the manufacturing arsenals are restricted in their ability to conduct reductions in force to adjust their personnel levels, they are limited to using hiring freezes and voluntary early retirements or separations to decrease personnel levels in times of decreased workloads. For the remaining five reporting elements, we found that DOD’s September 2014 report is not consistent with the relevant presentation standards for soundness, completeness, and clarity. 
For example, to address the seventh reporting element—that DOD identify mechanisms for ensuring that the manufacturing arsenals are considered as a source of manufacture—DOD’s report notes that the Secretary of the Army directed AMC to work with DLA to make the Army manufacturing arsenals a source of manufacture for the Army parts within the DLA inventory. DOD’s report, however, does not disclose to what extent this has resulted in the manufacturing arsenals actually being considered as a source and does not discuss any challenges in doing so, such as issues associated with guidance for conducting make-or-buy analyses, discussed earlier in this report. Furthermore, to address the first reporting element—to review the manufacturing arsenals’ current and expected requirements to support their critical capabilities—DOD’s report identified a list of current and expected manufacturing requirements for items that it designated as appropriate for production in one of the manufacturing arsenals, but the report does not provide a clear explanation of how DOD identified these items and does not identify which military services or defense agencies these requirements apply to. Additionally, the report does not disclose that, as previously discussed, DOD has not developed and implemented a process for identifying the manufacturing arsenals’ critical capabilities. Moreover, to address the second reporting element—that DOD discuss how it can more effectively use and manage public-private partnerships— DOD’s report explains how the department is currently using these partnerships but does not clearly explain how any suggested improvements would provide an additional return on investment to the Army or how the use of public-private partnerships would aid in preserving the manufacturing arsenals’ critical capabilities. Table 1 summarizes our assessment of what DOD included in its report that is consistent with relevant defense research presentation standards. 
For the five reporting elements for which we found that DOD was not consistent with these standards, the table provides examples of additional information that we determined could have been included to make the report more consistent with the relevant presentation standards for soundness, completeness, and clarity. In addition, we determined that DOD’s report was not consistent with the generally accepted research standard that participants or stakeholders be informed of the defense study’s results and recommendations. Officials from a DOD office mentioned in the September 2014 report told us that no one from their office either participated in the study or reviewed the report prior to its publication. They explained, after we provided them a copy of the report, that one of the analyses that was reportedly conducted by their office—and mentioned in DOD’s report—was not conducted in the manner described or for the purposes indicated in the report. They further explained that had they reviewed the information in the report about the analysis their office conducts, such a misstatement would not have occurred. Additionally, officials from relevant DOD components— including the three manufacturing arsenals and their higher headquarters—told us that they had not been given the opportunity to review or comment on the final version of the report before it was issued. For example, when we spoke with officials at the manufacturing arsenals and at some of the headquarters organizations, they told us that they had not seen the issued report until we showed it to them. Moreover, AMC officials told us that the manufacturing arsenals had not been given an opportunity to review the information used to support the report to confirm that the details in the final report were complete. 
When we discussed the results of our assessment of the September 2014 report with ODASD (MPP) officials who had the lead for developing the report, they disagreed with our assessment related to relevant generally accepted research presentation standards. In our June 2015 meeting, they explained that they had addressed the statutory requirements and that was sufficient, questioning the need to follow the generally accepted defense research presentation standards we determined to be relevant in assessing DOD’s report. We agree that DOD’s report met the statutory requirements by including a discussion of each of the seven reporting elements. However, we also believe that it is appropriate to apply the relevant generally accepted defense research standards for the presentation of results, because consistency with these standards helps to indicate the extent to which the results presented in the report are useful to decision makers. Moreover, as previously mentioned, the relevant generally accepted research presentation standards we used to assess DOD’s report are consistent with Office of Management and Budget guidelines and DOD guidance. ODASD (MPP) officials did not provide any examples where we had overlooked information in the report that our assessment determined could have been included to make it more sound, complete, and clear for use by decision makers. Further, these officials did not disagree with our assessment that the report had not been shared or coordinated with participants and stakeholders.

As a result of the decline in demand for materiel, DOD is facing challenges in assigning work to its three manufacturing arsenals. DOD has taken various actions in an effort to assign work to the manufacturing arsenals, but these actions collectively have not resulted in the arsenals generating sufficient revenue to recover their operating expenses.
The effectiveness of these actions has been limited in part by the fact that DOD has not developed clear, step-by-step implementing guidance on how to conduct make-or-buy analyses, which would help to ensure that the arsenals are appropriately considered as a source of manufacture. Because the arsenals are generating insufficient revenue, Congress has provided $375 million collectively in the prior and current fiscal years to help recover operating losses and maintain competitive rates. Unless the manufacturing arsenals are able to generate sufficient revenue to recover their operating expenses, it is likely that they will need continued funding or will need to make adjustments to personnel and equipment levels to reduce their operating expenses and maintain competitive rates. DOD is not strategically positioned to sustain the manufacturing arsenals’ critical capabilities. These critical capabilities help ensure that DOD is able to respond to national emergencies and obtain products and services that it could not otherwise acquire from private industry in an economical manner. While there is a strategic plan that covers the manufacturing arsenals and has established related goals and objectives, DOD has not identified or documented fundamental elements, such as time frames and resources, for implementing the plan. In not identifying and documenting these fundamental elements, DOD is inconsistent in applying standard practices for project management and, therefore, lacks information that would be useful in determining whether progress is being made in achieving the plan’s goals and objectives. More importantly, DOD cannot achieve the strategic plan’s goals and objectives until it has identified the manufacturing arsenals’ critical capabilities. 
After falling short in prior efforts, DOD has an effort under way to develop a process to identify critical capabilities and a method for determining the minimum workload needed to sustain them, but that effort has been delayed. As a result, it is not clear when DOD will be able to act on its intention to develop and issue guidance—such as a DOD instruction—to implement the process and method being developed. Until such an instruction is issued, DOD will continue to lack a process for identifying the manufacturing arsenals’ critical capabilities and will not be positioned to determine the minimum amount of work or the levels of equipment and personnel needed to sustain the arsenals’ capabilities. With the issuance of its September 2014 report on the manufacturing arsenals, DOD met statutory requirements. However, we determined that additional information would have made the report more consistent with relevant generally accepted research presentation standards for a defense study. Additionally, had DOD coordinated its results with participants and stakeholders, they could have provided comments or corrections to misstatements, as needed. Doing so would have enabled DOD to present a more sound, complete, and clear report that not only would have met statutory requirements, but would have been more useful to Congress in its oversight of DOD’s manufacturing arsenals. Because DOD’s report was prepared in response to a one-time, nonrecurring mandate, we are not making any recommendations to amend the report or provide additional detail. 
To help DOD ensure that it appropriately considers the manufacturing arsenals as a source of manufacture and is strategically positioned to sustain the manufacturing arsenals’ critical capabilities, we recommend that the Secretary of Defense direct

- the Secretary of the Army to issue clear, step-by-step implementing guidance, such as an instruction or guidebook, on the process for conducting make-or-buy analyses in a consistent manner, and to identify and document fundamental elements—such as steps, interim milestones, time frames, and resources—for implementing the Army’s Organic Industrial Base Strategic Plan 2012-2022; and
- the Office of the Deputy Assistant Secretary of Defense for Maintenance Policy and Programs—in coordination with the military services, as appropriate—to complete DOD’s ongoing effort to establish a process for identifying the manufacturing arsenals’ critical capabilities and a method for determining the minimum workload needed to sustain these capabilities, and to develop and issue guidance, such as a DOD instruction, to implement the process for identifying the manufacturing arsenals’ critical capabilities and the method for determining the minimum workload needed to sustain these capabilities.

We provided a draft of this report to DOD for review and comment. In written comments, DOD concurred with all of our recommendations. DOD’s comments are summarized below and reprinted in appendix II. DOD also provided technical comments, which we have incorporated into our report as appropriate. In addition to its overall concurrence with our recommendations, DOD stated that it does not agree with the implication that these steps will lead to the provision of sufficient revenue to cover all of the manufacturing arsenals’ expenses. However, we did not state or imply that implementation of these recommendations will increase revenue.
Rather, the recommendations’ stated intent is to ensure that DOD appropriately considers the manufacturing arsenals as a source of manufacture and is strategically positioned to sustain their critical capabilities. As explained in our report, until DOD determines the manufacturing arsenals’ critical capabilities, it will not be positioned to determine the minimum amount of work or the levels of equipment and personnel needed to sustain those capabilities. DOD concurred with our recommendation related to issuing implementing guidance on make-or-buy analyses but provided no details on how or when it would issue such guidance. Further, DOD explained that it did not agree with the implication that make-or-buy analyses would necessarily increase revenue provided to arsenals and noted that the process may result in reduced revenue. We did not state or imply that the issuance of implementing guidance on the process for conducting make-or-buy analyses would increase revenue. Rather, we believe that in the absence of clear, step-by-step implementing guidance on how to conduct make-or- buy analyses, DOD cannot provide reasonable assurance that it is appropriately considering the manufacturing arsenals as a potential source of manufacture. As stated in our report, although implementing such guidance would not guarantee that the manufacturing arsenals receive sufficient workload, it would ensure that they are appropriately considered. DOD concurred with our recommendation related to implementing its 2012 strategic plan but provided no details on how or when it would implement the recommendation. DOD also stated that, given the overall constraints placed on the department’s budget under the Budget Control Act of 2011, as amended, it cannot guarantee the availability of any resources that it would identify by implementing this recommendation. 
Our report discusses how standard practices for project management call for agencies to conceptualize, define, and document specific goals and objectives in their planning processes and to identify the appropriate steps, milestones, time frames, and resources they need to achieve those goals and objectives. However, DOD’s strategic plan does not contain any of these fundamental elements. Unless DOD identifies and documents fundamental elements by including information that would be useful in determining its progress toward achieving its stated goals and objectives, the department will not be strategically positioned to sustain the manufacturing arsenals’ critical capabilities in any budget environment. DOD concurred with our recommendation related to developing a process to identify the arsenals’ critical capabilities and a method to determine the minimum workload needed to sustain those capabilities. DOD stated that its effort to address this recommendation is ongoing, but added it does not agree that developing such a process will result in sufficient revenue to cover arsenal expenses. We did not state or imply that establishing a process for identifying the manufacturing arsenals’ critical capabilities and a method for determining the minimum workload needed to sustain these capabilities would result in sufficient revenue to cover arsenal expenses. Rather, we believe such a process is needed to help ensure that DOD is strategically positioned to sustain the manufacturing arsenals’ critical capabilities and achieve its 2012 strategic plan’s goals and objectives. Although DOD did not specifically comment on our recommendation related to issuing guidance to implement a process for identifying the arsenals’ critical capabilities and a method for determining the minimum workload needed to sustain these capabilities, it commented that it expects to issue an instruction incorporating such a process by the end of fiscal year 2016. 
We believe that, if fully implemented, these actions should address our recommendations and strategically position the department to sustain the manufacturing arsenals’ critical capabilities. In its comments, DOD also took issue with how we characterized Congress as “providing” additional funds to cover the excess of arsenal expenses over revenue. DOD stated that no additional funds were added to the DOD budget for this shortfall; rather, Congress “redirected funding” from other essential defense missions. We did not state or imply that amounts appropriated for arsenal expenses were additional to DOD’s overall budget authority for fiscal years 2014 and 2015. Our discussion of funding for Industrial Mobilization Capacity and the Arsenal Sustainment Initiative highlighted specific appropriations made by Congress to the Working Capital Fund for the express purposes of covering the costs of unutilized or underutilized plant capacity through the Working Capital Fund’s Industrial Mobilization Capacity subaccount in fiscal year 2014 and maintaining competitive rates at arsenals through the Arsenal Sustainment Initiative in fiscal year 2015. We did not discuss these appropriations in the context of DOD’s overall budget authority, or state or suggest that they were derived from a concurrent increase in budget authority. Finally, DOD did not agree with our assessment of DOD’s September 2014 report on the manufacturing arsenals against four generally accepted research presentation standards, questioning why the presentation standards are more relevant for this particular DOD report to Congress than they are for the many reports submitted each year. We do not believe that the presentation standards we applied are more relevant for the September 2014 report than for any other report DOD submits to Congress.
We believe that all reports submitted by federal agencies to Congress should not only comply with applicable statutory reporting requirements, as was the case with the September 2014 report, but should also follow applicable generally accepted research standards. As discussed in our methodology, we determined that the following four standards, which focus on the presentation of results, were applicable, given that DOD did not have a documented method for its study and given the contents of the September 2014 report: the report presents an assessment that is well documented and conclusions that are supported by the analyses; the report’s conclusions are sound and complete; the study results are presented in a clear manner; and study participants/stakeholders are informed of the study results and recommendations. In its comments, DOD did not provide any explanation as to why it believes these standards should not have been applied in our assessment of the September 2014 report. DOD further noted in its comments that the application of the generally accepted research presentation standards would not have altered the September 2014 report or its contents. We disagree. As discussed in the report, we identified multiple examples of additional information that we determined could have been included to make the report more consistent with the relevant presentation standards for soundness, completeness, and clarity. DOD also stated that we are not correct in our assertion that the report was not shared or coordinated with participants and stakeholders, explaining that the September 2014 report was fully staffed with the Army’s Assistant Secretary for Acquisition, Logistics and Technology; the Deputy Chief of Staff for Logistics; and the Army Materiel Command. We do not disagree that the report was coordinated with the organizations identified in DOD’s comments. However, we identified other key stakeholders with whom the report was not coordinated.
Most notably, we found that the manufacturing arsenals had not been given an opportunity to review the information used to support the report or to review the final version of the report. Had DOD adhered to relevant generally accepted research presentation standards for a defense study in preparing the September 2014 report, it could have better ensured the report’s soundness, completeness, and clarity and, therefore, its usefulness to decision makers.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretaries of the Army, Air Force, and Navy. The report also is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5741 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

This report assesses (1) actions, if any, that the Department of Defense (DOD) has taken to assign work to the manufacturing arsenals to generate sufficient revenue to recover their operating expenses; (2) the extent to which DOD is strategically positioned to sustain the manufacturing arsenals’ critical capabilities; and (3) the extent to which DOD’s September 2014 report meets the requirements to address the statutory reporting elements and is consistent with relevant generally accepted research presentation standards for a defense research study.
To address these reporting objectives, we visited or contacted knowledgeable officials with responsibilities related to arsenal operations from the following organizations:

- Office of the Under Secretary of Defense for Acquisition, Technology and Logistics
- Office of the Deputy Assistant Secretary of Defense for Manufacturing
- Defense Procurement and Acquisition Policy
- Office of the Assistant Secretary of Defense for Logistics and Materiel Readiness
- Office of the Principal Deputy Assistant Secretary of Defense
- Office of the Deputy Assistant Secretary of Defense for Maintenance Policy and Programs
- Defense Finance and Accounting Service
- Defense Manpower Data Center
- Deputy Assistant Secretary of the Army for Acquisition Policy and Logistics, Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology
- Department of the Army, Headquarters / G4, G8
- Army Materiel Command / G1, G3, G4, G8
- Joint Munitions Command Life Cycle Management Command
- Pine Bluff Arsenal
- TACOM Life Cycle Management Command
- Rock Island Arsenal Joint Manufacturing and Technology Center
- Watervliet Arsenal Joint Manufacturing and Technology Center
- Army Program Executive Office Ground Combat Systems
- Abrams Tank Program Office

We also obtained pertinent documents, including DOD directives, instructions, and reports; Army regulations and instructions, memorandums, strategic plans, and other guidance; and information on the organic defense industrial base and each of the three manufacturing arsenals (Pine Bluff, Rock Island, and Watervliet), such as the arsenals’ critical capabilities and current levels of workload. Additionally, we discussed personnel and workload data with subject matter experts at the Defense Manpower Data Center and the Defense Finance and Accounting Service.
To provide background information on recent trends in workload at the three manufacturing arsenals, we summarized direct labor hour data compiled by Army Materiel Command officials for civilian and contract personnel from fiscal years 2000 through 2014. We assessed the reliability of the direct labor hour data we obtained from Army Materiel Command through interviews with knowledgeable officials and determined that these data were sufficiently reliable to use in this report for this limited purpose. For our first objective, to assess the actions, if any, that DOD has taken to assign work to the manufacturing arsenals to generate sufficient revenue to recover their operating expenses, we also interviewed DOD officials who were involved in assessing and implementing efforts to improve or enhance operations at the arsenals. We compared existing guidance on the process used to consider manufacturing arsenals as a source of manufacture to federal internal control standards for control activities contained in GAO’s Standards for Internal Control in the Federal Government. For our second objective, to assess the extent to which DOD is strategically positioned to sustain the manufacturing arsenals’ critical capabilities, we interviewed DOD officials who contributed significantly to the department’s current strategy to assign work to be performed at the arsenals. We then compared DOD’s existing strategy for the manufacturing arsenals to standard practices for project management and identified discrepancies. We also reviewed two Army assessments related to the levels of equipment and personnel and determined that one of the assessments described an approach and findings that were reasonable, but we did not assess the accuracy or reliability of the underlying data because doing so was beyond the scope of this review. The other assessment, however, did not contain sufficient information for us to determine if the approach used to calculate its results was reasonable. 
In the absence of other reliable sources, we limited the use of these assessments in the report to noting that the arsenals had conducted assessments containing recommendations intended to guide subsequent decision making.

For our third objective, to determine the extent to which DOD’s September 2014 report meets the requirements to address the statutory reporting elements and is consistent with relevant generally accepted research presentation standards for a defense research study, we conducted a two-part assessment of DOD’s September 2014 report. First, to assess the extent to which DOD’s September 2014 report meets the statutory requirement to address the seven reporting elements, we compared the report to the elements listed in section 322 of the National Defense Authorization Act for Fiscal Year 2014. For each reporting element, we determined whether DOD’s report met the statutory requirement by including the element and providing related content. Second, we assessed the extent to which DOD’s September 2014 report is consistent with relevant generally accepted research presentation standards for a defense research study. To do so, we determined which generally accepted research presentation standards for a sound, complete, and clear defense research study were relevant to the contents of the report, given our objective. In 2006, we described these standards in a report on DOD transportation capabilities. In this 2006 report, we reviewed research literature and DOD guidance and identified frequently occurring, generally accepted research standards that are relevant for defense studies, including those related to the presentation of results. These GAO-developed generally accepted research presentation standards are consistent with Office of Management and Budget guidelines and DOD guidance on ensuring and maximizing the quality of information disseminated by federal agencies to the public.
We identified 36 generally accepted research standards for a defense research study in the areas of design, execution, and presentation of results. We determined that these standards are still current and relevant for the purposes of this report. Because DOD did not have a documented method for its study, we did not assess DOD’s September 2014 report against the generally accepted research standards for design (14 standards) and execution of the design (15 standards). Consequently, we confined our objective and assessment to the subset of 7 standards for presenting the results of a defense study. Of these 7 standards related to the presentation of results, we determined, based on the content of DOD’s report, that the following 4 were relevant:

1. Does the report present an assessment that is well documented and conclusions that are supported by the analyses?
2. Are the report’s conclusions sound and complete?
3. Are the study results presented in a clear manner?
4. Are study participants/stakeholders informed of the study results and recommendations?

We determined that the remaining three research presentation standards were not relevant, given our review’s objectives and based on the content of DOD’s report. For example, since we separately assessed the extent to which DOD’s report met the statutory requirements to address the statutory reporting elements, we did not assess whether DOD’s report addressed its objectives in the context of the generally accepted defense research presentation standards, as this would have been duplicative. Also, because the report did not include recommendations or provide options, we did not apply the generally accepted defense research presentation standards on whether recommendations were supported by analyses or whether a realistic range of options was provided.
After determining the relevant presentation standards, we then compared the contents of DOD’s September 2014 report and available supporting documentation—such as DOD policy, guidance, assessments, and briefings—to the 4 relevant research presentation standards. The extent to which the report’s presentation of results is consistent with these relevant standards is an indication of the ease with which the evidence can be evaluated and of the soundness and completeness of the report and, thus, its usefulness in enabling decision makers to make fully informed decisions. We considered DOD’s response to a statutory reporting element to be consistent with relevant generally accepted defense research presentation standards when the report explicitly addressed (e.g., included meaningful facts, figures, or clearly discussed) all aspects of the element and included sufficient specificity and detailed support. We considered DOD’s response to a reporting element to be inconsistent with these standards when the report neither explicitly addressed all aspects of the element nor included sufficient specificity and detailed support. In such cases, we provided examples of additional information that, although not statutorily required, we believe would have made the report more consistent with the 4 relevant generally accepted research presentation standards. In addition, we discussed the results of our assessment of the September 2014 report with ODASD (MPP) officials—who had the lead for developing the report—and obtained their perspectives regarding the approach they used to develop it, including their rationale for (1) choosing the level of detail they provided for particular reporting elements in the report and (2) not sharing the report with participants and stakeholders before it was issued. We conducted this performance audit from June 2014 to November 2015 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, the following staff members made key contributions to this report: Larry J. Junek, Assistant Director; Yecenia C. Camarillo, Timothy J. Carr, Patricia Farrell Donahue, Cynthia L. Grant, Joanne Landesman, Amie Lesser, Felicia M. Lopez, Oscar W. Mardis, Sabrina C. Streagle, and Matthew R. Young.

Regional Missile Defense: DOD’s 2014 Report Generally Addressed Required Reporting Elements, but Excluded Additional Key Details. GAO-15-32. Washington, D.C.: December 1, 2014.

Army Industrial Operations: Budgeting and Management of Carryover Could Be Improved. GAO-13-499. Washington, D.C.: June 27, 2013.

Defense Logistics: Oversight and a Coordinated Strategy Needed to Implement the Army Workload and Performance System. GAO-11-566R. Washington, D.C.: July 14, 2011.

Defense Infrastructure: The Army Needs to Establish Priorities, Goals, and Performance Measures for Its Arsenal Support Program Initiative. GAO-10-167R. Washington, D.C.: November 5, 2009.

Depot Maintenance: Improved Strategic Planning Needed to Ensure That Army and Marine Corps Depots Can Meet Future Maintenance Requirements. GAO-09-865. Washington, D.C.: September 17, 2009.

Depot Maintenance: DOD’s Report to Congress on Its Public-Private Partnerships at Its Centers of Industrial and Technical Excellence (CITEs) Is Not Complete and Additional Information Would Be Useful. GAO-08-902R. Washington, D.C.: July 1, 2008.

Defense Transportation: Study Limitations Raise Questions about the Adequacy and Completeness of the Mobility Capabilities Study and Report. GAO-06-938. Washington, D.C.: September 20, 2006.

Depot Maintenance: Public-Private Partnerships Have Increased, but Long-Term Growth and Results Are Uncertain. GAO-03-423. Washington, D.C.: April 10, 2003.

Army Industrial Facilities: Workforce Requirements and Related Issues Affecting Depots and Arsenals. GAO/NSIAD-99-31. Washington, D.C.: November 30, 1998.
DOD's three manufacturing arsenals provide manufacturing, supply, and technical support services for the military services and allies during national emergencies and contingency operations. The Fiscal Year 2014 NDAA required DOD to report to Congress on its arsenals and included a provision for GAO to review DOD's report. This report assesses (1) actions DOD has taken to assign work to the manufacturing arsenals to generate sufficient revenue to recover their operating expenses, (2) the extent to which DOD is strategically positioned to sustain the manufacturing arsenals' critical capabilities, and (3) the extent to which DOD's September 2014 report addresses statutory reporting elements and is consistent with relevant research presentation standards for a defense research study. To conduct this review, GAO analyzed documentation, visited the arsenals, and interviewed relevant DOD officials. GAO assessed DOD's September 2014 report against the statutory elements and generally accepted research standards. Since 2012, the Department of Defense (DOD) has taken various actions to assign work to its three manufacturing arsenals—Pine Bluff Arsenal, Rock Island Arsenal Joint Manufacturing and Technology Center, and Watervliet Arsenal Joint Manufacturing and Technology Center—in an attempt to generate sufficient revenue to recover operating expenses following a significant decline in demand for materiel, as well as to maintain manufacturing skills to sustain readiness. For example, the Army directed acquisition programs to assign work to the arsenals consistent with the arsenals' capabilities. While these actions have increased revenue, the increases have been small relative to operating expenses. 
Further, DOD may not always appropriately consider the arsenals as a source of manufacture, because it has not developed clear, step-by-step implementing guidance on conducting make-or-buy analyses to determine whether to purchase items from an arsenal or the private sector, which potentially limits the arsenals' ability to generate revenue. Because DOD's actions as of September 2014 did not generate sufficient revenue, Congress provided $375 million collectively in fiscal years 2014 and 2015 to help recover the arsenals' operating expenses. DOD is not strategically positioned to sustain the manufacturing arsenals' critical capabilities, as it has not identified fundamental elements for implementing its strategic plan or identified these capabilities. Such capabilities help ensure that DOD can respond to emergencies and obtain products and services it could not otherwise acquire from private industry in an economical manner. DOD has a strategic plan that includes goals and objectives related to sustaining the arsenals' critical capabilities; however, it has not identified fundamental elements, such as milestones and resources, needed to implement the plan. As a result, DOD lacks information that would be useful in determining progress in achieving the plan's stated goals and objectives for the arsenals. Moreover, DOD's past efforts to identify the arsenals' critical capabilities had shortcomings, such as each arsenal using a unique method to do so. DOD has an effort under way to develop a process for identifying these critical capabilities and determining a minimum level of workload needed to sustain them, but this effort has been delayed to allow for coordination with stakeholders. Until such a process is developed and implemented, for example through an instruction, DOD is not positioned to determine the minimum workload levels needed or to appropriately adjust the arsenals' equipment and personnel level to sustain these capabilities. 
DOD's September 2014 Report on Army Manufacturing Arsenal Study met the statutory requirements to address seven reporting elements within the National Defense Authorization Act (NDAA) for Fiscal Year 2014. However, GAO found that additional information would have made the report more consistent with relevant generally accepted research presentation standards for a defense research study and helped decision makers to identify and evaluate information presented in the report. For example, DOD did not disclose that it has not developed a process for identifying the arsenals' critical capabilities. Also, had stakeholders seen the report before it was issued, as called for by the standards, they would have been informed of its results and could have provided comments, as needed, to allow DOD to present a more sound, complete, and clear report. GAO recommends that DOD issue implementing guidance for make-or-buy analyses; identify fundamental elements for implementing its strategic plan; and develop and implement its process for identifying critical capabilities and the minimum workload level needed to sustain them. DOD concurred with the recommendations but disagreed with some statements in the report. GAO believes the statements are accurate, as discussed in the report.
Titles XVIII and XIX of the Social Security Act, as amended, establish minimum requirements nursing homes must meet to participate in the Medicare and Medicaid programs, respectively; key legislative provisions are described below. The Omnibus Budget Reconciliation Act of 1987 (OBRA ‘87) included wide-ranging reforms. For example, the law revised the care requirements providers must meet in order to participate in the Medicare or Medicaid programs, modified the survey process, introduced additional enforcement actions, and required nursing homes to periodically assess the health of nursing home residents. OBRA ‘87 is considered largely responsible for the quality environment under which nursing homes operate. In 2010, Title VI of the Patient Protection and Affordable Care Act (PPACA) added federal and state oversight and enforcement requirements. Specifically, PPACA requires CMS to establish a national system to collect and report payroll data on nurse staffing hours and develop a standardized complaint form. It also requires states to establish a complaint resolution process. The Protecting Access to Medicare Act of 2014 (PAMA) requires CMS to establish a value-based purchasing program, which will increase or reduce Medicare payments to nursing homes based on an assessment of their performance against quality measures related to rates of hospital readmissions. Under this program, lower-performing nursing homes will receive lower incentive payments compared to better-performing peers, or they may receive a reduction to their Medicare payment rate. CMS is required to implement the program starting in fiscal year 2019. Finally, the Improving Medicare Post-Acute Care Transformation Act of 2014 (the IMPACT Act) requires the standardization of certain types of Medicare data across multiple health care settings, including long-term care hospitals, home health agencies, inpatient rehabilitation facilities, and nursing homes. 
For example, the IMPACT Act requires the reporting to CMS of standardized patient assessment data so that information can be used to help facilitate coordinated care and improve Medicare beneficiary outcomes. Oversight of nursing homes is a shared federal-state responsibility, with specific activities occurring at the national, regional, and state levels performed by the entities listed below. CMS central office. At the national level, CMS central office oversees the federal quality standards nursing homes must meet to participate in the Medicare and Medicaid programs. The office also establishes the responsibilities of CMS’s regional offices and state survey agencies in ensuring that federal quality standards for nursing homes are met. For example, the office issues guidance on how regional and state entities should assess compliance with federal nursing home standards. CMS regional offices. CMS’s 10 regional offices oversee state activities and report the results of their efforts back to CMS central office. Specifically, each year regional offices are required to conduct federal monitoring surveys in at least five percent of each state’s nursing homes surveyed by the state to assess the adequacy of surveys conducted by state survey agencies. Regional offices also use the State Performance Standards System to evaluate state surveyors’ performance on factors such as the frequency and quality of state surveys. State survey agencies. Under agreement with CMS, a state survey agency in each state assesses whether nursing homes meet CMS’s standards, allowing them to participate in the Medicare and Medicaid programs. State survey agencies assess nursing homes using standard surveys; the statewide average interval between standard surveys may not exceed 12 months. State survey agencies also conduct complaint investigations as needed. These investigations generally focus on a specific allegation regarding resident care or safety made by residents, families, ombudsmen, or others. 
CMS collects data on nursing home quality through a number of sources, including annual standard surveys and complaint investigations, as well as other sources such as staffing data and clinical quality measures. The four key sources that we use in this report are described below. Standard surveys. By law, every nursing home receiving Medicare or Medicaid payment must undergo a standard survey not less frequently than once every 15 months, with a statewide average frequency of once every 12 months. During a standard survey, teams of state surveyors conduct a comprehensive on-site evaluation of compliance with federal quality standards. In 2005, CMS launched a new survey process called the Quality Indicator Survey (QIS), designed to improve the accuracy and consistency of standard surveys and the documentation of deficiencies. Though the QIS is similar to the traditional survey processes used for standard surveys, the QIS is electronic rather than paper-based and draws on a random sample of residents for closer analysis, as opposed to a sample hand-picked by the surveyor. As of late 2014, 23 states had transitioned completely to QIS, while 3 states were using a mixture of QIS and traditional surveys. Deficiencies in nursing home care identified during standard surveys are classified into 1 of 12 categories, each designated with a different letter, according to scope—the number of residents potentially affected—and severity—the potential for or occurrence of harm to residents. (See table 1.) For most deficiencies, a home is required to prepare a plan of correction, and, depending on the severity of the deficiency, surveyors may conduct a revisit to ensure that the nursing home has implemented its plan and corrected the deficiency. 
The scope and severity of a deficiency determine the enforcement actions that CMS may impose on a nursing home—such as requiring training for staff, imposing monetary fines, imposing temporary management, or terminating the home from the Medicare and Medicaid programs. Complaint investigations. Nursing homes are also surveyed on an as-needed basis through complaint investigations. Complaints can be filed with state survey agencies by residents, families, ombudsmen, or others acting on a resident’s behalf. During a complaint investigation, state surveyors conduct a focused evaluation of the nursing home’s compliance with a specific federal quality standard. CMS sets guidelines state survey agencies should follow when recording, investigating, and resolving complaints. Staffing data. Nurse staffing levels are considered a key component of nursing home quality. Higher nurse staffing levels—particularly registered nurse staffing levels—are typically linked with higher quality nursing home care. CMS currently tracks nurse staffing data in nursing homes. Clinical quality measures. Nursing homes are required to provide data on certain clinical quality measures—such as pressure ulcers—for all residents to CMS. CMS currently tracks data for 18 clinical quality measures. Nursing homes with consistently poor performance can be selected for the Special Focus Facility (SFF) program, which requires more frequent surveys. To select nursing homes for the SFF program, CMS scores the relative performance of nursing homes and identifies the poorest performing homes in each state as candidates. State survey agencies then work with CMS to choose some of the candidates to participate; homes that are selected receive more intensive oversight, including more frequent surveys. According to CMS guidance, SFF nursing homes that fail to significantly improve after three standard surveys, or about 18 months, may be involuntarily terminated from Medicare and Medicaid. 
Originally created by CMS in 1998, the SFF program is now statutorily required under PPACA; CMS is now mandated to conduct its SFF program for homes that have “substantially failed” to meet applicable requirements of the Social Security Act, and must conduct surveys of each facility in the program no less than once every six months. CMS publicly reports a summary of each nursing home’s quality data on its Nursing Home Compare website using a five-star quality rating. The Five-Star Quality Rating System assigns each nursing home an overall rating and three component ratings—surveys (standard and complaint), staffing, and quality measures—based on the extent to which the nursing home meets CMS’s quality standards and other measures. CMS also works to influence nursing home quality through specific quality improvement efforts—such as the agency’s effort to improve dementia care—and through Quality Improvement Organizations (QIOs). CMS contracts with QIOs to help nursing homes address quality problems such as pressure ulcers. Nursing homes’ participation in QIO efforts is voluntary. In recent years, trends in four key sets of data that give insight into nursing home quality show mixed results. Specifically, one of the four data sets suggests that consumers’ concerns over nursing home quality have increased, which may indicate a potential decrease in quality, while the other three sets of data may indicate potential improvement in nursing home quality. However, data issues complicate the ability to assess trends in nursing home quality over time. Nationally, in recent years, one of four data sets—number of consumer complaints—demonstrated a potential decrease in nursing home quality, while the other three data sets—serious deficiencies cited on standard surveys, staffing data, and selected clinical quality measures—demonstrated potential quality improvement. 
Consumer complaints: From 2005 through 2014, the average number of consumer complaints reported per nursing home increased nationally from 3.2 to 3.9, a 21 percent increase over the 10-year period. After an initial increase, the number of complaints decreased from 2008 through 2011 and then again increased through 2014. (See fig. 1.) Specifically, 52,411 complaints were reported in 2005 and 61,466 complaints were reported in 2014. At the state level, 30 states had increases in the number of complaints per home, with increases of more than 50 percent in 11 of those states, and 21 states had decreases in the number of complaints per home, with decreases of more than 50 percent in 4 of those states. (See Appendix II for data for all states.) Deficiencies cited on standard surveys: From 2005 through 2014, the number of serious deficiencies—deficiencies that at a minimum caused harm to the resident—cited per nursing home surveyed decreased nationally from 0.35 to 0.21, a 41 percent decline over the 10-year period. (See fig. 2.) Specifically, 4,840 serious deficiencies were cited during surveys for 13,800 nursing homes in 2005, and 2,660 serious deficiencies were cited during surveys for 12,759 nursing homes in 2014. At the state level, we also found a decreasing trend in 36 of the states, and an increasing trend in the remaining 15 states. Nurse staffing: From 2009 through 2014, the average total nurse hours per resident per day—a measure of registered nurse, licensed practical nurse, and nurse assistant hours—increased nationally from 4.2 to 4.6, a 9.0 percent increase over the 6-year period. (See fig. 3.) In addition, the average registered nurse hours per resident per day also increased over the same time period from 0.5 to 0.8, a 51.2 percent increase. Furthermore, the average total nurse hours per resident per day increased in all but one state, and the average registered nurse hours per resident per day increased in all states. 
Studies suggest that higher levels of nurse staffing—particularly registered nurse staffing—can result in higher quality of nursing home care. Selected quality measures: From 2011 through 2014, nursing homes’ scores nationwide on all eight of our selected quality measures improved, at least somewhat, showing decreases in the number of reported quality problems, such as falls resulting in major injury. The rate of decline varied greatly by quality measure. For example, the percentage of long-stay residents with too much weight loss decreased 1.3 percent over the 4-year period, while the percentage of short-stay residents with new or worsening pressure ulcers decreased 52.2 percent. (See fig. 4.) Similar trends were seen at the state level for most of the quality measures, although two of the quality measures—long-stay residents with too much weight loss and long-stay residents experiencing one or more falls with major injury—had more state-level differences in trends. In our analysis we also attempted to identify trends across the four data sets at the nursing home level. Specifically, we examined the data to determine whether there were nursing homes that consistently performed poorly across the four data sets over the time periods we reviewed. We identified 416 homes nationwide with consistently poor performance. These homes were located in 36 states; the remaining 15 states did not have any of the consistently poorly performing homes. Of the 416 homes, 71 (17 percent) were included in the SFF program at some point between 2005 and 2014. The number of consistently poorly performing homes is greater than the number of SFFs allotted in 2015—416 homes and 85 homes, respectively. As will be discussed, the number of nursing homes included in the SFF program is affected by budget resources, according to CMS. 
We also attempted to identify commonalities among homes that consistently performed poorly compared to homes that performed well across the four data sets and found that the poorest performing homes were more likely to be for-profit or large homes (greater than 100 beds) compared to homes that performed well; our analysis did not reveal a link between performance and urban or rural location. CMS’s ability to use available data to assess nursing home quality trends is complicated by various issues with these data. Specifically, each of the four key sets of nursing home data we analyzed has issues that make it difficult to determine whether observed trends reflect actual changes in quality, data issues, or a combination of both. (See table 2 for examples of these issues.) Under federal internal control standards, agencies should monitor performance data to assess the quality of performance over time, and CMS’s ability to do so is hindered by these data issues. Furthermore, according to GPRA leading practices identified by GAO, agencies should ensure that data are complete, accurate, and consistent enough to document performance and support decision making. In the discussion that follows table 2, we describe in more detail the data issues that exist in each of the four key data sets CMS uses to assess the quality of nursing home care. Consumer complaints: Although the average number of consumer complaints reported per nursing home increased between 2005 and 2014, it is unclear to what extent this can be attributed to a change in quality or to state variation in the recording of complaints. Officials from the two state survey agencies we interviewed in states with dramatic increases in the average number of consumer complaints per nursing home over the 10-year period—California and Michigan—both explained that changes in how they recorded complaints into CMS’s complaint tracking system could in part account for the jump in reported complaints. 
In addition, officials at one state survey agency explained that the increase in complaints could also reflect state-level efforts to provide consumers with more user-friendly options for filing complaints, such as via email. In April 2011, we found differences in how states record and track complaints and made recommendations to CMS to clarify guidance to states. CMS concurred with the recommendations. As of July 2015, CMS had not fully addressed these recommendations; however, the agency had taken some steps. For example, CMS officials reported that the agency was in the early stages of a planned multi-year review of its business practices, including those related to nursing home complaint investigations, and would provide clarification to states, as needed. Also in 2011, CMS created a standardized complaint form, as required by PPACA, and made it available to states and consumers on its website. Use of the form is voluntary, but it provides consistent information to consumers wishing to file complaints and facilitates their ability to compose and file complaints with appropriate supporting information. Deficiencies cited on standard surveys: Although the decline in the number of serious deficiencies cited on standard surveys between 2005 and 2014 may indicate an improvement in quality, it may also be attributed to inconsistencies in measurement. One reason these measurement inconsistencies occur is the use of both traditional paper-based surveys and QIS electronic surveys, which, for example, have different methodologies for selecting residents for closer analysis during the survey. This use of multiple survey types complicates the ability to compare the results of standard surveys nationally. As of late 2014, 23 states used QIS surveys, 25 states used traditional surveys, and 3 states used both. 
An internal CMS review that analyzed survey data from 2012 to 2014 found that states using traditional surveys cited a slightly higher rate of severe deficiencies than states using the QIS methodology. Some regional offices and state survey agencies we spoke with noted that QIS results in fewer deficiencies cited, especially for more serious deficiencies and deficiencies related to quality of care. As a result, the decreasing trend of serious deficiencies cited on standard surveys could be the result of an expanding use of QIS surveys over the same time period, rather than an improvement in the quality of nursing homes. Officials at one state survey agency suggested that this change in the number of deficiencies cited on QIS surveys could be attributed to the way that the QIS process guides surveyors through a structured investigation. Another reason for measurement inconsistencies is that state survey agencies face challenges in completing standard surveys, particularly in states where there are less experienced surveyors or surveyors with very heavy workloads, according to CMS and state survey agency officials. CMS officials said these challenges led to reduced state survey agency capacity to conduct surveys, which could contribute to the decrease in the number of deficiencies cited on standard surveys. According to CMS officials, the recession had the significant and lasting effect of reducing some state survey agencies’ ability to complete high quality standard surveys, in part because it caused them to rely on smaller and less experienced workforces to conduct surveys. Officials from one of the state survey agencies we interviewed said an increasingly heavy survey workload distributed among a limited number of surveyors could have contributed to the decrease in deficiencies cited on standard surveys in that state. 
In addition, CMS officials found that the number of hours surveyors spent completing standard surveys has increased as the number of deficiencies cited has decreased, which they said suggests that state survey agencies are relying on newer, less experienced staff to conduct surveys. Finally, in 2012 and 2013, CMS central office notified two state survey agencies that their performance was persistently substandard, and that if the state survey agencies did not improve, then CMS may terminate its agreement with them to oversee nursing home quality in their states. CMS has taken some steps to address the inconsistencies in measurement for deficiencies cited on standard surveys, and, according to CMS officials, continues to work on addressing inconsistencies. Regarding the different survey methodologies, CMS suspended further implementation of QIS in 2012 to address issues such as deficiency patterns, software compatibility, the time required to complete QIS, and surveyor training. States already using QIS continued to do so, but other states continued to do traditional paper-based surveys. In May 2015 CMS acknowledged the challenges created by operating two survey types. CMS officials told us they plan to develop a hybrid model of the QIS and traditional surveys, with the long-term goal of moving all states to this hybrid model. However, CMS officials said dates for developing and implementing the new hybrid model have not been set. CMS officials also commented on the challenges faced by state survey agencies in completing standard surveys, and have documented that some level of variation across states may always exist, but that its systems, such as national training and state performance standards, are intended to improve consistency and limit the variation. Information gathered from the five states we interviewed suggests how some of the data issues for complaints and deficiencies may be affecting the trends in quality data within these states. 
Specifically, figure 5 below illustrates this potential effect on the trends in the number of consumer complaints reported and the number of serious deficiencies cited on standard surveys. Nurse staffing: Although CMS data show that the average total nurse hours per resident day increased from 2009 through 2014, CMS does not have assurances that these data are accurate. CMS uses data on nurse staffing hours that are self-reported by the nursing homes, but the agency does not regularly audit these data to ensure their accuracy. CMS has conducted little auditing of staffing data outside of when state survey agency surveyors are on-site for inspections, and as a result may be less likely to identify intentional or unintentional inaccuracies in the self-reported data. Many of the regional office and state survey agency officials we spoke with expressed concern over the self-reported nature of these data, noting that it may be easy to misrepresent nurse staff hours. For instance, one state survey agency stated that nursing home residents would sometimes tell surveyors that the high numbers of staff on site during the survey were not normally present, and other regional office and state survey agency officials noted that some homes will “staff up” when expecting a standard survey in order to make their staffing levels look better. Although provisions in PPACA required nursing homes to submit staffing information based on payroll and other verifiable and auditable data in a uniform format by March 2012, CMS did not develop a system to begin collecting data by that date. According to CMS officials, CMS did not receive funding to develop the electronic payroll-based data system until the IMPACT Act, enacted in October 2014, provided the necessary multi-year funding. In April 2015 CMS issued a memo outlining a plan to begin collecting staffing data through its payroll-based system on a voluntary basis beginning October 2015 and on a mandatory basis beginning July 2016. 
In August 2015, CMS issued a final rule confirming this timetable for implementation. According to CMS, the new payroll-based staffing data system will allow homes to directly upload payroll data or to manually enter the required information. CMS indicated that the system will allow staffing and census information to be collected on a regular and more frequent basis than under the previous method. In addition, CMS expects the system to be auditable to check accuracy. However, as of August 2015, CMS had not developed an audit plan and said that it was too soon in the implementation of the new system to do so. While updating the method for collecting staffing data could improve data quality, it is still necessary to audit the data to ensure accuracy. Selected quality measures: Although nursing homes generally improved their performance on the eight selected quality measures we reviewed, it is unclear to what extent this can be attributed to a change in quality or possible inaccuracies in self-reported data. As previously noted, these improvements indicate a reduction in reported quality problems at nursing homes from 2011 through 2014. However, like the nurse staffing data used by CMS, data on nursing homes’ performance on these measures are self-reported by nursing homes, and until 2014 CMS conducted little to no auditing of these data to ensure their accuracy. As a result, CMS has no assurance that nursing homes’ reported improvements on these measures are accurate. Some regional office and state survey agency officials told us that public reporting may provide an incentive for nursing homes to make quality improvements on these measures. However, some officials noted that nursing homes may change how they collect and report data on the measures, leading to improvements in measures without corresponding improvements in actual quality. 
CMS has begun taking steps to help mitigate the problem with self-reported data by starting to audit the data through focused surveys. For the surveys, CMS selected a sample of nursing homes in each state for state survey agency surveyors to evaluate whether the self-reported quality data matches the residents’ medical records. CMS guidance states that data inaccuracies found during the focused surveys can result in deficiency citations to the nursing homes. These new surveys were piloted in 2014 for a sample of five homes in each of the five states, and the pilot found some inconsistencies between self-reported data and residents’ medical records. In 2015, CMS expanded the focused surveys to include some homes in each state. According to agency officials, the 2015 focused surveys will be completed by the end of the fiscal year. CMS officials stated that they intend to continue the focused surveys nationwide in 2016. The agency did not state firm plans after 2016, so it is uncertain whether the necessary auditing will continue. Collectively, these data issues have broader implications related to nursing home quality trends, including potential effects on the quality benchmarks CMS sets, consumers’ decisions about which nursing home to select, and Medicare payments to the homes. Specifically, CMS established benchmarks for some of its quality data through its Five-Star Rating System, which indicates the specific staffing levels and quality measure scores a home needs to receive each star rating. In addition, consumers can use the Five-Star ratings to help determine which nursing home to use. Therefore, underlying problems with the data may affect the benchmarks a nursing home uses to assess its quality performance, the ratings a home receives, and the home a consumer selects. 
Furthermore, data used by CMS to assess quality measures are also used when determining Medicare payments to nursing homes, so data issues—and CMS’s internal controls related to the data—could affect the accuracy of payments. Moreover, the use of quality data for payment purposes will expand in fiscal year 2019, when a nursing home value-based purchasing program will be implemented, which will increase or reduce Medicare payments to nursing homes based on certain quality measures. In recent years CMS has made numerous modifications to its nursing home oversight activities. Some of these modifications expanded or added new oversight activities. For example, as previously described, CMS has introduced, evaluated, and ultimately suspended further implementation of the QIS survey methodology in additional states; begun implementing the PPACA requirement to collect and report data on nurse staffing hours; and begun implementing a process for auditing quality measure data. In addition, CMS has also expanded the number of tools available to state surveyors when investigating medication-related adverse events, increased the amount of nursing home quality data available to the public, and created new trainings for surveyors on unnecessary medication usage. (A summary of key oversight modifications CMS has made can be found in Appendix III.) Other modifications have reduced existing oversight activities. For example, CMS has made modifications to the federal monitoring survey program and the Special Focus Facility program. Federal monitoring surveys: CMS has reduced the scope of the federal monitoring surveys regional offices use to evaluate state surveyors’ skills in assessing nursing home quality. CMS requires regional offices to complete federal monitoring surveys in at least 5 percent of nursing homes surveyed by the state each year. 
Before 2013, CMS required that 80 percent of these federal monitoring surveys be standard surveys—the most comprehensive type—which cover a broad range of quality issues within a nursing home. The remaining 20 percent of surveys were permitted to be either revisit or complaint surveys, which are narrower in scope. These surveys focus on a particular deficiency cited on a previous survey or a specific care issue for which a complaint was reported, respectively, and are also less resource-intensive, as they take less surveyor time to complete than standard surveys. Starting in 2013, CMS required fewer federal monitoring surveys to be standard surveys and allowed more monitoring surveys to be revisits and complaint investigations. Special Focus Facilities: CMS has reduced the number of nursing homes participating in the SFF program. Nursing homes placed in the SFF program receive additional oversight because of the homes’ history of poor performance. For example, instead of being surveyed at least once every 15 months, SFF homes are surveyed at least once every 6 months. If homes do not improve the quality of their care, CMS can terminate their participation in Medicare and Medicaid. In 2013, CMS began to reduce the number of homes in the program by instructing states to terminate homes that had been in the program for 18 months without improvement and not to select replacements for these homes or homes that left the program by improving their performance. As we have previously reported, between 2013 and 2014, the number of nursing homes in the SFF program dropped by more than half—from 152 to 62. In 2014, CMS began the process of rebuilding the number of facilities in the SFF program; however, according to CMS officials, the process will be slow (as of July 2015 there were 85 SFF homes). 
According to CMS officials, these reductions in the scope of CMS’s nursing home oversight activities were made to help the agency meet its increasing responsibilities with its limited resources. Specifically, CMS officials said that increasing oversight responsibilities, such as those required by PPACA, and a limited number of staff and financial resources at the central, regional, and state levels required the agency to evaluate its activities and reduce the scope of some activities. For example, CMS officials noted that reductions to the SFF program were made specifically as a result of the decrease in CMS’s budget under the Budget Control Act of 2011. The effect of CMS’s modifications in nursing home oversight activities is uncertain but could be significant, especially because the modifications included reductions to activities that CMS considers essential to oversight. For example, by reducing the scope of federal monitoring surveys, CMS may be decreasing its ability to monitor state survey agencies—which is essential because they are one of CMS’s primary tools for assessing nursing home quality, and a lack of effective state oversight could, for example, lead to understatement of care problems. Similarly, by reducing the number of nursing homes in the SFF program, CMS may be limiting its ability to monitor nursing homes with poor performance. As previously noted, we found—both in our analysis for this report and in a prior report—that the number of homes with poor performance exceeds the number of homes included in the SFF program, a difference made even greater by the reduction to the SFF program. CMS officials said a variety of factors, including a review of statutory requirements, were considered prior to making modifications; however, the agency is not monitoring how the modifications might affect CMS’s ability to assess nursing home quality. 
Therefore, the agency is not able to determine whether the modifications are the most effective use of its limited resources for assessing nursing home quality. Under federal internal control standards, ongoing monitoring should occur in the course of normal program operations. When discussing the potential effects of the modifications, CMS officials acknowledged the potential for adverse impacts on their ability to oversee nursing home quality. Just as CMS’s central office has made modifications to its nursing home oversight activities, regional offices and state survey agencies have made modifications to some of their own nursing home oversight activities—both expansions and reductions. For example, state survey agency officials we interviewed from one of the states indicated that, partly because of resource constraints, the state had reduced the number of standard surveys until the frequency between surveys for many nursing homes reached 36 months—instead of the required frequency of once every 15 months. Also, state survey agency officials from another state said that, in part due to political changes at the state level, their state survey agency modified its regulatory philosophy toward nursing homes; in discussing this shift, the officials noted that the modification resulted in state survey agency surveyors emphasizing more of a partner role with nursing homes rather than acting as strict regulators. Other officials described modifications that could be helpful to share with other regional offices and state survey agencies. For example, officials from one regional office described how they share staff with other regional offices in order to complete oversight activities—such as federal monitoring surveys—within required timelines. 
In addition, these regional office officials develop an annual report that includes oversight data for their region, which could be a useful template for other regions, particularly as officials from another regional office expressed the need for greater data analysis in their office. Given the tight resource environment, regional offices and state survey agencies could benefit from adopting strategies that other agencies have used to successfully meet their nursing home oversight requirements in an efficient and effective manner. However, while CMS’s central office has some ways of collecting information from regional offices and state survey agencies, the agency does not have a national approach for routinely collecting such information on modifications to nursing home oversight activities—whether positive or negative. CMS’s state performance standard system, which is intended to identify whether a state survey agency is generally compliant with CMS’s oversight requirements, may elicit isolated information on negative modifications when asking state survey agencies to explain poor performance. However, as currently designed, it does not routinely collect information on state survey agency modifications that could negatively impact nursing home oversight or provide examples of best practices. As a result, CMS does not have enough information to respond to state survey agency modifications—and make adjustments where needed—in an ongoing or timely manner. As we previously noted, under federal internal control standards, ongoing monitoring should occur in the course of normal program operations. CMS collects several types of data that give some insight into the quality of nursing homes, and these data show mixed results. However, these data could provide a clearer picture of nursing home quality if some underlying problems with the data are corrected. 
CMS is in the process of taking steps to address some of these problems—such as the rollout of focused surveys to evaluate the data used in quality measures and plans to use and audit payroll data rather than self-reported data to determine nursing home staffing levels. If properly implemented, completion of these steps—as well as pursuing other, longer-term plans such as the eventual standardization of the survey methodology across all states—has the potential to make nursing home quality data more comparable and accurate, allowing more effective tracking of nursing home quality trends. However, without specific timeframes, including milestones, to track implementation of a standardized survey methodology and clear ongoing audit plans, it is unclear whether these important steps will occur. Federal internal control standards require agencies to monitor performance data to assess the quality of performance over time, and CMS’s ability to do so is hindered by data issues. Timely completion of these actions is particularly important because Medicare payments to nursing homes will be dependent on quality data, through the implementation of the value-based purchasing program, starting in fiscal year 2019. In addition to problems with the data used to measure nursing home quality, according to CMS officials, the agency faces the challenge of conducting effective oversight of nursing home quality with its limited resources, while meeting all of its oversight requirements. CMS has made modifications to some activities it considered essential to its oversight, without knowing whether the modifications have affected the agency’s ability to assess nursing home quality. 
Further, some modifications made by CMS regional offices and state survey agencies to their own nursing home oversight activities could adversely affect the CMS central office’s ability to oversee nursing home quality, while other modifications could be effective strategies that could be adopted more widely among regional offices and state survey agencies. Consistent with federal internal control standards, establishing an effective process for monitoring modifications of essential oversight activities made at the CMS central office, CMS regional office, and state survey agency levels—whether positive or negative—could allow CMS to better understand the effects these modifications may have on nursing home quality and make improvements to its own oversight. To improve the measurement of nursing home quality, the Administrator of CMS should take the following two actions: Establish specific timeframes, including milestones to track progress, for the development and implementation of a standardized survey methodology across all states. Establish and implement a clear plan for ongoing auditing to ensure reliability of data self-reported by nursing homes, including payroll-based staffing data and data used to calculate clinical quality measures. To help ensure modifications of CMS’s oversight activities do not adversely affect the agency’s ability to assess nursing home quality and that effective modifications are adopted more widely, the Administrator of CMS should establish a process for monitoring modifications of essential oversight activities made at the CMS central office, CMS regional office, and state survey agency levels to better understand the effects on nursing home quality oversight. We provided a draft of this report to HHS for its review and comment. HHS provided written comments, which are reprinted in appendix IV. In its written comments, HHS described its efforts to improve nursing home quality. 
HHS also concurred with the report’s three recommendations. To address our first recommendation, HHS stated that it would set timeframes and milestones for the development and implementation of a standardized survey methodology. To address our second recommendation, HHS stated that it would continue to work to address the reliability of self-reported data by, for example, continuing through fiscal year 2017 the auditing of clinical quality measures data, which began in fiscal year 2015. As we describe in this report, ongoing auditing of self-reported data is important for ensuring data accuracy; as a result, whenever self-reported data are used for understanding nursing home quality—including the new electronic payroll system for collecting staffing data and data used to calculate clinical quality measures—our recommendation indicates that HHS should plan for and conduct audits in a continuing manner. To address our third recommendation, HHS stated that it would review its monitoring of key oversight activities and make adjustments as indicated. HHS also provided technical comments, which we incorporated into the final version of this report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its date. At that time, we will send copies to the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
This appendix describes our scope and methodology for examining the extent to which reported nursing home quality has changed in recent years and the factors that may have affected any observed changes. For this examination, we analyzed four sets of quality data from the Centers for Medicare & Medicaid Services (CMS). Each set of data provides an important perspective on quality, and together they can give a multidimensional view of potential changes in nursing home quality over time. We analyzed the four sets of data at both the national and state levels for the time periods identified below, which represent the most recent data available for a 10-year period or its closest equivalent. At the national level, we collected and analyzed data for all 50 states and Washington, D.C. At the state level, we selected five states to focus our review—California, Florida, Massachusetts, Michigan, and West Virginia—based on factors such as variation in geographic region, size (number of nursing homes), and state performance standard scores. Deficiencies cited on standard surveys. To identify trends in the number of serious deficiencies—deficiencies at the actual harm or immediate jeopardy levels—cited during nursing home standard surveys, we analyzed data from CMS’s Certification and Survey Provider Enhanced Reports system for years 2005 through 2014. Specifically, we calculated the number of serious deficiencies cited during standard surveys in each year. Consumer complaints. To identify trends in the number of consumer complaints regarding resident care or safety reported by residents, families, ombudsmen, or others, we analyzed data from CMS’s Automated Survey Processing Environment Complaint/Incident Tracking System. Specifically, we calculated the total number of complaints reported—not substantiated—for all nursing homes for years 2005 through 2014. Nurse staffing. 
To identify trends in nurse staffing data, specifically the number of nursing hours per resident day, we analyzed data from CMS’s Certification and Survey Provider Enhanced Reports. Specifically, we collected quarterly staffing data on the nursing hours per resident day for years 2009 through 2014, calculated an average nurse staffing level, and used CMS’s formula to create adjusted nurse staffing levels. Clinical quality measures. To identify trends in clinical quality measures, we analyzed data from CMS’s Minimum Data Set—the data set containing the standardized clinical assessments nursing homes complete for all residents and report to CMS—for years 2011 through 2014. We selected eight CMS quality measures to include in our analysis based on factors such as endorsement by the National Quality Forum and data reliability. Six of the eight measures are used by CMS for long-stay residents—the percentage of residents who report moderate to severe pain; the percentage of high-risk residents with pressure ulcers; the percentage of residents who lose too much weight; the percentage of residents who were physically restrained; the percentage of residents experiencing one or more falls with major injury; and the percentage of residents who received antipsychotic medication—and the remaining two measures are used for short-stay residents—the percentage of residents who report moderate to severe pain and the percentage of residents with pressure ulcers that are new or worsening. To create an annual score for each quality measure we averaged quarterly data. Analysis across four data sets. For each of the four data sets, we ranked nursing homes by quartile and identified those at the upper quartile (worst performing) and lower quartile (best performing) for each year. We then counted the number of years each home fell into the upper or lower quartile for each quality measure to identify homes with consistently poor or good performance. 
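The quartile screen described above can be sketched as a simple calculation. The snippet below is an illustrative sketch only: the home identifiers and score values are hypothetical, the function name `consistently_poor` is our own, and the actual analysis applied this kind of screen to each of the four CMS data sets rather than to a toy input.

```python
from statistics import quantiles

# Hypothetical per-home annual scores for one measure (higher = worse),
# standing in for one of the four data sets described above.
scores_by_year = {
    2013: {"home_a": 12, "home_b": 3, "home_c": 9, "home_d": 1},
    2014: {"home_a": 14, "home_b": 2, "home_c": 10, "home_d": 2},
}

def consistently_poor(scores_by_year, min_years):
    """Count the years each home falls in the worst (upper) quartile,
    then flag homes that meet the year threshold -- a rough sketch of
    the report's quartile-based screen for consistent poor performers."""
    counts = {}
    for year, scores in scores_by_year.items():
        # Upper-quartile cutoff for this year's distribution of scores.
        q3 = quantiles(scores.values(), n=4)[2]
        for home, value in scores.items():
            if value >= q3:
                counts[home] = counts.get(home, 0) + 1
    return {home for home, years in counts.items() if years >= min_years}
```

In the report's analysis, the homes flagged by such a screen in every data set were then compared against the SFF list and examined for commonalities such as bed size and ownership status.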
We then identified homes with poor or good performance across all data sets. We also received a list from CMS of all Special Focus Facilities (SFF) for 2005 through 2014 to identify how many of the poor performers were or had been in the SFF program. Finally, we attempted to identify any commonalities among homes that consistently performed poorly compared to homes that performed well across the four data sets; for example, using Certification and Survey Provider Enhanced Reports files for each home, we examined bed size, non-profit or for-profit status, and urban or rural location (using zip codes and the Health Resources and Services Administration’s Area Resource File). We assessed the reliability of each of the four sets of data through interviews with knowledgeable CMS officials, reviews of supporting documentation, and comparisons with other published data, and determined that the data were sufficiently reliable for purposes of describing trends. Tables 3 through 5 provide state-level data for each of the four data sets. Specifically, Table 3 provides deficiencies cited on standard surveys and consumer complaint data, Table 4 provides nurse staffing data, and Table 5 provides selected quality measure data. CMS divides its nursing home activities into six dimensions—with the agency considering four of these dimensions “essential” and two “highly advisable.” In recent years, CMS has made adjustments to oversight activities within all dimensions. In addition to the contact name above, Will Simerl, Assistant Director; Wesley Dunn, Julianne Flowers, Krister Friday, Q. Akbar Husain, Kathryn Richter, Helen Sauer, and Karin Wallestad made key contributions to the report.
To help ensure nursing home residents receive quality care, CMS, an agency within the Department of Health and Human Services (HHS), defines quality standards homes must meet to participate in the Medicare and Medicaid programs. To monitor compliance with these standards, CMS enters into agreements with state survey agencies to conduct on-site surveys of the state's homes and also collects other data on nursing home quality. CMS and others have reported some potential improvements in nursing home quality. GAO was asked to study these trends. This report examines (1) the extent to which reported nursing home quality has changed in recent years and the factors that may have affected any observed changes, and (2) how CMS oversight activities have changed in recent years. GAO analyzed four sets of CMS quality data—deficiencies cited on standard surveys (2005-2014), consumer complaints (2005-2014), staffing levels (2009-2014), and a subset of clinical quality measures (2011-2014)—at both the national and state levels. GAO also reviewed relevant documents, including CMS guidance and Standards for Internal Control in the Federal Government, and interviewed CMS and state agency officials in 5 states selected based on factors such as size. In recent years, trends in four key sets of data that give insight into nursing home quality show mixed results, and data issues complicate the ability to assess quality trends. Nationally, one of the four data sets—consumer complaints—suggests that consumers' concerns over quality have increased, while the other three data sets—deficiencies, staffing levels, and clinical quality measures—indicate potential improvement in nursing home quality. For example, the average number of consumer complaints reported per home increased by 21 percent from 2005-2014, indicating a potential decrease in quality. 
Conversely, the number of serious deficiencies identified per home with an on-site survey, referred to as a standard survey, decreased by 41 percent over the same period, indicating potential improvement. The Centers for Medicare & Medicaid Services' (CMS) ability to use available data to assess nursing home quality is complicated by various issues with these data, which make it difficult to determine whether observed trends reflect actual changes in quality, data issues, or both. For example, clinical quality measures use data that are self-reported by nursing homes, and while CMS has begun auditing the self-reported data, it does not have clear plans to continue doing so. Federal internal control standards require agencies to monitor performance data to assess the quality of performance over time. In recent years, CMS has made numerous modifications to its nursing home oversight activities, but it has not monitored the potential effect of these modifications on nursing home quality oversight. Some of the modifications have expanded or added new oversight activities, while others have reduced existing oversight activities. According to CMS, some of the reductions to oversight activities are in response to an increase in oversight responsibilities and a limited number of staff and financial resources. However, CMS has not monitored how the modifications might affect its ability to assess nursing home quality. For example, CMS reduced the number of nursing homes participating in the Special Focus Facility program—which provides additional oversight of homes with a history of poor performance—from 152 in 2013 to 62 in 2014. State survey agencies, which conduct surveys for CMS, also made modifications that could have either a positive or negative effect on oversight, but CMS does not have an effective mechanism for monitoring them. 
Federal internal control standards require ongoing monitoring as a part of normal program operations; without this monitoring, CMS cannot ensure that any modifications in oversight do not adversely affect its ability to assess nursing home quality. GAO recommends, among other things, that CMS implement a clear plan for ongoing auditing of self-reported data and establish a process for monitoring oversight modifications to better assess their effects. HHS agreed with GAO's recommendations.
The Rocky Mountain Arsenal, established in 1942, occupies 17,000 acres northeast of Denver, Colorado, and is contaminated from years of chemical and weapons activities. The Army manufactured chemical weapons, such as napalm bombs and mustard gas, and conventional munitions until the 1960s and destroyed weapons at the Arsenal through the early 1980s. In addition, it leased a portion of the Arsenal to Shell Oil Company from 1952 to 1987 to produce herbicides and pesticides. The Arsenal was placed on the Environmental Protection Agency’s (EPA) National Priorities List, the list of the nation’s most heavily contaminated sites, in July 1987. More than 300 species of birds, mammals, amphibians, reptiles, and fish can be found on the installation. Once the EPA certifies the cleanup is complete, the Arsenal is to become a national wildlife refuge managed by the Fish and Wildlife Service. Refuge management activities are already underway. (App. I shows the key physical features of the Arsenal.) Waste disposal practices used by the Army and Shell in the past have resulted in extensive soil and groundwater contamination. Some of the common contaminants include nerve agents, diisopropyl methylphosphonate (DIMP), and the pesticides dieldrin and aldrin. Other contaminants include heavy metals, such as arsenic, lead, chromium, and mercury, and volatile organic compounds, such as benzene, toluene, and xylene. The 209 contaminated sites on the Arsenal are divided into on-post and off-post segments. The on-post sites include all contaminated structures, water, and soil within the boundaries of the Arsenal. The off-post sites include a region north of the Arsenal requiring cleanup because of migrating groundwater contamination. Cleanup at the Arsenal is subject to the legal requirements of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980, as amended (42 U.S.C. 9601); the Resource Conservation and Recovery Act of 1976, as amended (42 U.S.C. 
6901); and state laws. (See app. II for a description of the CERCLA process.) The Army is in charge of the cleanup under a Federal Facility Agreement, which was signed in 1989. The signatories include the Army; Shell Oil Company; the EPA; and the Departments of Justice, the Interior, and Health and Human Services. The agreement established a framework for cleanup and a process to resolve formal disputes among the parties. However, the state of Colorado was not a party to the Federal Facility Agreement because of litigation with the Army and Shell. A court-appointed mediator facilitated negotiations between the parties over several years. The recent conceptual agreement for cleaning up Rocky Mountain Arsenal may mark a turning point in years of conflict that has slowed the implementation of permanent cleanup remedies and increased costs. According to Army, EPA, state of Colorado, and Shell officials, long-standing disagreements and extensive studies have diverted key staff and contractors away from the cleanup program and driven costs up. In the 20 years since the installation restoration program began, the Army and Shell have spent about $1 billion to study and control the environmental damage. The majority of the cost has been for studying the site and resolving disagreements. Totaling $354 million as of December 1994, the Arsenal’s study phase is the costliest in the history of DOD’s cleanup program. However, about $316 million was spent on interim remediation projects to cut off contamination pathways. These actions may contribute significantly to permanent solutions. (App. III contains a time line of the Arsenal’s installation restoration program.) The most recent delay in adopting a cleanup plan for the Arsenal was caused by disagreements over cost-effectiveness and alternative cleanup remedies. 
EPA’s and the state of Colorado’s initial cleanup proposals were estimated to cost about $2.7 billion; Shell Oil Company’s was $1.6 billion; and the Army’s was in the middle, at about $2.1 billion. According to officials from the Army, EPA, and the state of Colorado, the 2-year debate involved how to clean up contaminated soils on the Arsenal and contaminated water off the Arsenal. All parties agreed that soils should remain on-site, because moving them off-site would be prohibitively expensive. However, while the Army and Shell suggested that untreated soils be capped in place to prevent the spread of contaminants, EPA and the state suggested that contaminated soils should be treated to neutralize them, before they are capped or placed in a landfill. The key off-post issue involved groundwater quality standards for water contaminated with DIMP, a by-product of nerve agent production. In 1993, the state promulgated a drinking water standard of 8 parts per billion. The Army and Shell wanted to continue to pump and treat the water to meet EPA’s health advisory of 600 parts per billion, while the state wanted the Army to provide the residents with an alternative water supply. Largely due to the volume of lawsuits, formal disputes, and other disagreements, the Rocky Mountain Arsenal has experienced the costliest study phase in DOD’s history. According to DOD reports, the Arsenal’s study costs represent at least 16 percent of the Army’s total study costs for about 1,200 installations. The Arsenal’s study phase began more than 20 years ago and was completed recently, in October 1995, when the Army requested public comment on its preferred remedy. As of December 1994, Shell and the Army had spent approximately $354 million on studies, which represents about 37 percent of the total costs incurred by Shell and the Army at the Arsenal. Figure 1 shows shared cleanup costs by category. 
[Figure 1 categories include historical cost (1975-87) and the Shell contribution remaining in a special account; total: $961 million.] Over 400 studies have been conducted at the Arsenal since 1983. Approximately 14,000 samples were taken and 230 reports were produced during the study phase. Although the complexity of the site warranted study, according to Army, EPA, and state officials, the litigation and other disputes encouraged excessive and duplicative studies. For example, had the parties come to an earlier agreement on the installation’s future use and on levels of ecological standards, some of the studies might have been avoided. Relationships among the key parties have been strained by differences throughout the history of the cleanup program, but particularly since 1983 when two major lawsuits were filed. The Army sued Shell, and the state of Colorado sued the Army and Shell to recover compensation for natural resource damages and cleanup costs. The state sued the Army again in 1986 to enforce regulatory authority over parts of the cleanup. Although the Army and Shell settled their suit in 1988, the first Colorado case has not yet been resolved and the second case went to the U.S. Supreme Court. In January 1994, the Supreme Court refused to hear the case, letting stand the lower court’s decision in favor of Colorado’s jurisdiction. The key parties’ exhaustive efforts to resolve their legal disputes involved 7 years of assistance from a court-appointed mediator. (See app. IV for a detailed chronology of major legal actions involving Rocky Mountain Arsenal.) In addition to the lawsuits, more than 140 issues have been taken to formal dispute since 1987 under the Federal Facility Agreement, which allows the parties to dispute Army decisions. Disputes have been triggered by a variety of technical issues, often requiring further studies to resolve the controversy. For example, the parties disagreed about what level of dieldrin is considered safe in soil. 
The Army, EPA, and Shell have all conducted and evaluated studies on this issue, yielding different results and reaching different conclusions. This dispute was invoked in December 1987 and is still not resolved. According to Army, EPA, and state officials, study results are particularly sensitive because precedents set at the Arsenal could potentially have ramifications for Shell Oil Company at its other locations. Although final cleanup has not begun, the Army and Shell have made efforts to mitigate the most critical threats at the Arsenal. As of December 1994, they had spent about $316 million on source control and interim actions designed to provide immediate containment or treatment of some of the more highly contaminated areas. Early assessments, conducted between 1975 and 1985, identified ways to minimize the potential for exposure to and migration of contaminants. Resulting projects included the installation of three groundwater treatment systems at the Arsenal’s boundary, the closure of an abandoned well, and the removal of sewer lines known to be a source of soil and groundwater contamination. Building on earlier source control efforts, the Army began its interim actions in 1986 to control immediate problems while the final cleanup solutions were being determined. The resulting 14 interim actions were designed to be consistent with long-term comprehensive cleanup on and off the Arsenal. Two of these, the incineration of liquid waste from the Arsenal’s major disposal basin and the removal of asbestos, have permanently removed the hazardous materials. Table 1 shows, for each of the 14 actions, the start date, actual or estimated completion date, and the actual or estimated cost as of December 1994. If the parties are successful in adopting the on- and off-post cleanup plans as expected in 1996, the final cleanup can begin. The conceptual agreement reached in June 1995 resolved the major disputes and outlined a $2.1-billion cleanup to be completed in 2012. 
However, the current cost and completion targets may be overly optimistic given remaining uncertainties about the final details. In addition, costs have significantly increased over time at the Arsenal. According to the conceptual agreement, the parties are expected in 1996 to adopt a final cleanup plan or record of decision for a $2.1-billion cleanup effort. Although most of the cleanup is expected to be accomplished by 2012, groundwater treatment and monitoring will continue for at least 30 years. The conceptual agreement resolves the two most significant disputes among the parties, regarding contaminated soils on site and contaminated groundwater off site. The parties agreed that a portion of basin F, the most contaminated of the basins, will be solidified in place through a technique that binds the soil together to minimize the release of contaminants but does not destroy them. Contaminated soil excavated from the basin in 1988 will be removed from the basin area and contained, along with other highly contaminated portions of the Arsenal, in a hazardous waste landfill. The basin will then be capped. The parties also agreed on demolition and on-site disposal for buildings in the manufacturing areas. Structures with high levels of contamination, such as agent residues, may be treated to reduce the contamination before they are placed in the landfill. Structural debris that is uncontaminated or has low levels of contamination will not be disposed of in the landfill; it will be consolidated in the other major basin, basin A, and capped. Regarding off-site contaminated groundwater, the parties agreed to continue operating existing groundwater treatment systems at the Arsenal’s boundary, where the water will be treated to meet Colorado’s groundwater standard of 8 parts per billion of DIMP. The Army and Shell will also supply clean water to residents living near the Arsenal’s boundaries. 
The parties agreed in concept on a $2.1-billion cleanup, but until the record of decision is finalized, the cost and time frame estimates remain uncertain. The cleanup estimate reported to Congress just prior to the June settlement called for $2.3 billion in appropriated funds, in addition to Shell’s $500-million share, for a total of $2.8 billion. According to Army officials, the $2.8 billion represented a reduction from a $3.6-billion estimate prepared just 2 months earlier. The Army did not have a detailed analysis at the time of our fieldwork that explained how the conceptual agreement reduced the estimate to $2.1 billion. The Army expects to complete its analysis for the May 1996 record of decision. The Army’s projected cost estimates and cleanup dates have changed significantly since 1984. The $2.1 billion estimated for the conceptual agreement is 10 times greater than the best case estimate released a decade ago. The 1984 projections of a record of decision by 1990 and cleanup by 2000 are now estimated for 1996 and 2012, respectively. The cost and completion schedules recently established could be affected by numerous uncertainties. Budget limitations that reduce the scope or extend the life of the cleanup, cleanup complications, and evolving standards could drive up costs and extend time frames. In July 1994, we reported Army officials’ concern that stricter state standards could increase cleanup costs at the Rocky Mountain Arsenal by at least $1 billion. Although the conceptual agreement should make this less likely, Army officials noted continuing uncertainties regarding the scope of the state’s regulatory authority. In addition, the Army’s $2.1-billion cleanup estimate does not include an estimated $200 million for inflation, or costs of long-term operations and maintenance for the off-post treatment facility. 
Under the cost-sharing agreement between the Army and Shell, Shell’s share of cleanup costs decreases on a sliding scale from 50 percent to 20 percent as total costs increase. The agreement was reached in 1989, when the cost estimates were lower than now. According to officials from the Army, EPA, and the Department of Justice, the formula was based on the best available knowledge of risk and damages at the time. However, Shell’s share of total costs has dropped significantly as cleanup costs exceeded the early estimates; the current estimate is more than 3 times higher than estimated at the time of the settlement. According to Arsenal and Shell officials, the Army will pay about $1.6 billion, and Shell about $500 million toward the $2.1 billion cleanup. When the permanent cleanup begins, Shell’s 20 percent share of the costs will be significantly less than its share of remaining contaminants. Because its operations contributed to the contamination problem, Shell agreed to pay a portion of the cleanup costs. The cost-sharing formula divides cleanup costs equally between the Army and Shell for the first $500 million of allocable or shared costs, but then reduces Shell’s share to 35 percent of the next $200 million of these costs, and 20 percent of all allocable costs exceeding $700 million. Each party agreed to absorb its own program management costs. “Army-only” and “Shell-only” costs, for contamination solely attributed to each party, are also excluded from the allocable formula. When the Army and Shell adopted the cost-sharing formula, cleanup costs were expected to be less than $700 million, not the currently estimated $2.1 billion. Even though the permanent cleanup is not yet underway, the parties have already arrived at the second level of the cost-sharing formula; allocable costs reached $500 million in 1994. 
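The tiered formula above lends itself to a short worked calculation. The sketch below is an illustration, not part of the report: the bracket amounts and rates (50 percent of the first $500 million of allocable costs, 35 percent of the next $200 million, 20 percent above $700 million) come from the settlement as described, while the function name and the $1.5-billion sample total are hypothetical.

```python
def shell_share(allocable_costs):
    """Shell's cumulative share of allocable cleanup costs, in dollars,
    under the 1989 tiered cost-sharing formula described in the report."""
    tiers = [
        (500_000_000, 0.50),   # 50 percent of the first $500 million
        (200_000_000, 0.35),   # 35 percent of the next $200 million
        (float("inf"), 0.20),  # 20 percent of everything above $700 million
    ]
    share, remaining = 0.0, allocable_costs
    for width, rate in tiers:
        portion = min(remaining, width)
        share += portion * rate
        remaining -= portion
        if remaining <= 0:
            break
    return share

# At the first bracket boundary, the parties split costs evenly.
print(shell_share(500_000_000))    # 250000000.0
# A hypothetical $1.5 billion in allocable costs: Shell pays $480 million,
# an overall share of 32 percent, well below the initial 50 percent.
print(shell_share(1_500_000_000))  # 480000000.0
```

The sliding scale means Shell's overall percentage falls from 50 percent toward 20 percent as allocable costs grow, which is why cost growth beyond the original estimates shifted the burden toward the Army.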
According to Army, EPA, and state officials, Shell’s 20-percent share of the final costs has an inverse relationship to its share of remaining contaminants that are to be cleaned up. They stated that from a risk management perspective, the contaminants driving the majority of the final cleanup costs will be those related to Shell’s production activities. According to Army and EPA officials, the cost-sharing formula was negotiated when much less was known about the extent of Arsenal contaminants and associated risks. In addition, an Army attorney said that the decision to reduce Shell’s share as costs increased was an equitable way of recognizing that the Army owned the installation and the disposal systems that Shell used. In retrospect, these officials noted that a declining formula is probably not the best approach to use in allocating shares, particularly early in the study phase before the contaminants have been fully characterized. The Army and Shell have already spent nearly $1 billion of the current $2.1-billion estimate. As of December 1994, the Army had spent about $687 million of its estimated $1.6-billion share and Shell had contributed about $274 million of its expected $500-million share. The Army’s $687-million share breaks down into about $431 million in shared or allocable costs and $256 million in Army-only costs. Total allocable costs paid by both parties represent about $589 million of the total. Although Shell contributed about $274 million toward the allocable costs, the Army has not yet spent $80 million of this amount. Figure 2 shows Army ($687 million) and Shell ($274 million) expenditures as of December 1994. Shell pays its share of cleanup costs directly to a government account. As of December 1994, Shell had contributed about $274 million of the $500 million it is expected to pay. 
About $116 million of the $274 million was deposited into the Shell account, and the other $158 million represented costs Shell incurred directly at the Arsenal. Shell was credited, for example, for conducting one of the Arsenal’s costliest projects—the incineration of liquid waste. Legislation restricts use of Shell’s reimbursements to cleanup projects at the Arsenal. As of December 1994, the Army had spent approximately $36 million from the $116 million that Shell had deposited into the account, leaving about $80 million for future obligations. The funds are retained by the U.S. Treasury until they are requested. According to Army officials, the funds in the Shell account are generally not used to offset budget requirements. Rather, the funds are used to supplement appropriations from the Defense Environmental Restoration Account. The Arsenal’s annual work plans outline requirements for appropriated funds, and those requirements are rolled up and consolidated into a DOD budget request. Therefore, according to these officials, the Shell funds are not visible in the budgeting process as requests proceed from the Army to DOD and Congress and do not influence funding decisions. Officials said it is not feasible to use the Shell funds to offset budget requirements in most instances because they do not represent a steady fixed flow and they are not fiscal year specific. The Arsenal’s allocation for fiscal year 1995 was about $70 million, which is less than the balance available in the Shell account. In discussing a draft of our report, DOD officials agreed with the report’s findings and conclusions. Their comments have been incorporated where appropriate. We performed our work at the Rocky Mountain Arsenal, Commerce City, Colorado; EPA’s Region VIII headquarters; and the Colorado Department of Health, Denver. 
To determine the status of the cleanup work at the Rocky Mountain Arsenal, we attended public hearings and reviewed applicable documents and records maintained by DOD and EPA. We also interviewed officials from the Departments of the Army, the Interior, and Justice; EPA; and the state of Colorado. To assess plans for future cleanup at the Arsenal, we interviewed officials from the Army, EPA, the Fish and Wildlife Service, and the state of Colorado. We also reviewed the Federal Facility Agreement and the conceptual agreement for Arsenal cleanup. To understand the cost-sharing arrangement between the Army and Shell, we reviewed the settlement agreement, financial manual, and other pertinent documents. We also interviewed officials from the Army, EPA, and the Department of Justice. We conducted our review from October 1994 to January 1996 in accordance with generally accepted government auditing standards. Unless you publicly announce its contents earlier, we plan no further distribution of the report until 30 days after its issue date. At that time, we will send copies to appropriate congressional committees; the Secretaries of Defense and the Army; the Administrator, EPA; and the Director of the Office of Management and Budget. We will also make copies available to others upon request. Please contact me on (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix V. Located 9 miles northeast of downtown Denver, Rocky Mountain Arsenal is adjacent to the communities of Commerce City, Montbello, and rural Adams County. Key physical features of the Arsenal include the north and south chemical manufacturing complexes, numerous pits and trenches, and a series of man-made lakes and basins A through F. Liquid waste from the two manufacturing complexes was discharged into basins A, B, C, D, and E, a series of unlined waste evaporation ponds. 
In the mid-1950s, the Army discharged all liquid waste to basin F, a newly constructed asphalt-lined waste basin. Solid waste was disposed of in the trenches and pits. The man-made lakes were used to provide process and cooling water to facilities within the south plants area. (See fig. I.1.) The initial stage of the cleanup program is an installationwide study to determine if sites are present that pose hazards to public health or the environment. Available information is collected on the source, nature, extent, and magnitude of actual and potential hazardous substance releases at sites on the installation. The next step consists of sampling and analysis to determine the existence of actual site contamination. Information gathered is used to evaluate the site and determine the response action needed. Uncontaminated sites do not proceed to later stages of the process. Remedial investigation may include a variety of site investigative, sampling, and analytical activities to determine the nature, extent, and significance of the contamination. The focus of the evaluation is determining the risk to the general population posed by the contamination. Concurrent with the remedial investigations, feasibility studies are conducted to evaluate remedial action alternatives for the site to determine which would provide the protection required. Detailed design plans for the remedial action alternative chosen are prepared. The chosen remedial alternative is implemented. Remedial actions can be taken at any time during the cleanup process to protect public health or to control contaminant releases to the environment. Key legal events in the Arsenal’s history include the following:
- Memorandum of Agreement signed by the state of Colorado, the Army, Shell Oil Company, and the Environmental Protection Agency.
- U.S. Army litigation against Shell Oil Company for natural resource damages and cleanup costs.
- State of Colorado filed suit for damages to natural resources and state money spent responding to contamination.
- Memorandum of Agreement considered invalid.
- Colorado filed suit to enforce Army compliance with the Resource Conservation and Recovery Act at basin F.
- Army and Shell Oil Company settled the 1983 suit by signing a consent decree.
- State of Colorado won the 1986 suit and issued an administrative order requiring the Army to follow its closure plan at basin F; the Army filed suit disputing the administrative order.
- Court granted the Army’s motion and affirmed EPA’s role as final authority at Rocky Mountain Arsenal; the state appealed.
- 10th Circuit Court of Appeals ruled in favor of Colorado; the Army appealed to the U.S. Supreme Court, and certiorari was denied.
Major contributors to this report: Patricia Foley Hinnen, Maria Durant, Mark McClarie, and Stephen Gaty.
Pursuant to a congressional request, GAO reviewed the cleanup program at the Rocky Mountain Arsenal, focusing on the: (1) status of cleanup efforts; (2) completion plans for the cleanup; and (3) cost-sharing plans between the Army and Shell Oil Company, which leased a portion of the Arsenal. GAO found that: (1) permanent cleanup at Rocky Mountain Arsenal has been delayed for years due to lawsuits and numerous other disputes between the parties involved; (2) in June 1995, Colorado and five other key parties signed an agreement for a conceptual remedy to address the lawsuits and disputes; (3) although about $300 million of the nearly $1 billion spent to date has been for interim actions to mitigate the most urgent environmental threats, the majority has been spent on studies and other management activities; (4) the June 1995 conceptual agreement resolves the most significant issues and paves the way for a final settlement, or record of decision, in 1996; (5) based on the agreement, the Army currently estimates the cleanup will cost $2.1 billion and take until 2012; (6) prior to the agreement, the Army had estimated a $2.8-billion to $3.6-billion cleanup effort to be complete in about 2010; (7) although the agreement addresses many of the disputed issues, the final details are yet to be negotiated; (8) until the cleanup plan is detailed and finalized in the record of decision, the cost and completion estimates will be subject to change; (9) under a 1989 settlement, the Army and Shell are sharing cleanup costs, and the costs to correct damages attributable solely to either the Army or to Shell are to be financed by the responsible party; (10) however, most contamination was commingled, and these cleanup costs will be shared under a formula requiring each party to pay 50 percent of the first $500 million in cleanup costs, with Shell's share decreasing as total costs increase; (11) although the agreement does not limit total contributions, Shell estimated its total costs 
will be about $500 million and so far has contributed $274 million; (12) by the time the final phase of cleanup begins in May 1996, under an expected record of decision, the Army will be responsible for 80 percent of the costs for commingled contamination; and (13) these costs represent most of the remaining cleanup.
In our 2005 report, we found that facilities-related problems at the Smithsonian had resulted in a few building closures, access restrictions, and some damage to the collections. A few facilities had deteriorated to the point where access had to be denied or limited. For example, the 1881 Arts and Industries Building on the National Mall was closed to the public in 2004 for an indefinite period, pending repair of its weakened roof panels, renovation of its interior (which had been damaged by water intrusion), and replacement of aging systems such as heating and cooling. Currently, this building remains closed. Other facilities also faced problems. We found that water leaks caused by deteriorated piping and roofing elements, along with humidity and temperature problems in buildings with aging systems, posed perhaps the most pervasive threats to artifacts in the museums and storage facilities. For example, leaks have damaged two historic aircraft at the National Air and Space Museum. Additionally, Smithsonian Archives officials told us that they had had to address 19 “water emergencies” since June 2002. These problems were indicative of a broad decline in the Smithsonian’s aging facilities and systems that posed a serious long-term threat to the collections. We also found that the Smithsonian had taken steps to maximize the effectiveness of its resources for facilities. These changes resulted from an internal review and a 2001 report by the National Academy of Public Administration, which recommended that the Smithsonian centralize its then highly decentralized approach to facilities management and budgeting in order to promote uniform policies and procedures, improve accountability, and avoid duplication. The Smithsonian created the Office of Facilities Engineering and Operations in 2003 to assume responsibility for all facilities-related programs and budgets. 
At the time of our 2005 review, this office was adopting a variety of recognized industry best practices for managing facilities projects, such as the use of benchmarking and metrics recommended by the Construction Industry Institute and leading capital decision-making practices. Preliminary results from our ongoing work show that as of March 30, 2007, the Smithsonian estimates it will need about $2.5 billion for revitalization, construction, and maintenance projects identified from fiscal year 2005 through fiscal year 2013, an increase of about $200 million from its 2005 estimate of about $2.3 billion for the same time period. Smithsonian officials stated that to update this estimate, they identified changes that had occurred to project cost figures used in the 2005 estimate and then subtracted from the new total the appropriations the Smithsonian had received for facilities revitalization, construction, and maintenance projects for fiscal years 2005-2007. According to Smithsonian officials, this estimate includes only costs for which the Smithsonian expects to receive federal funds. Projects that have been or are expected to be funded through the Smithsonian’s private trust funds were not included as part of the estimate, although the Smithsonian has used these trust funds to support some facilities projects. For example, the Steven F. Udvar-Hazy Center was funded largely through trust funds. According to Smithsonian officials, maintenance and capital repair projects are not generally funded through trust funds. At the time of our 2005 report, Smithsonian officials told us that the Smithsonian’s estimate of about $2.3 billion could increase for a variety of reasons. For example, the estimate was largely based on preliminary assessments. Moreover, in our previous report, we found that recent additions to the Smithsonian’s building inventory—the National Museum of the American Indian and the Steven F. Udvar-Hazy Center—and the reopening of the revitalized Donald W. 
Reynolds Center for American Art and Portraiture on July 1, 2006, would add to the Smithsonian’s annual maintenance costs. According to Smithsonian officials, the increase in its estimated revitalization, construction, and maintenance costs through fiscal year 2013 from about $2.3 billion in our 2005 report to about $2.5 billion as of March 30, 2007, was due to several factors. For example, Smithsonian officials said that major increases had occurred in projects for the National Zoo and the National Museum of American History because the two facilities had recently had master plans developed that identified additional requirements. In addition, estimates for anti-terrorism projects had increased due to adjustments for higher costs experienced and expected for security-related projects at the National Air and Space Museum. Officials also attributed part of the increase to the effect of delaying corrective work, in terms of both additional damage and escalation in construction costs. According to Smithsonian officials, the March 30, 2007, estimate of about $2.5 billion could itself increase because, like the earlier $2.3 billion figure, it was largely based on preliminary assessments; as the Smithsonian completes more master plans, more items needing work will be identified. Moreover, this estimate does not include the estimated cost of constructing the National Museum of African American History and Culture, which was authorized by Congress and which the Smithsonian notionally estimates may cost about $500 million, half of which is to be funded by congressional appropriations. The Smithsonian’s annual operating and capital program revenues come from its own private trust fund assets and its federal appropriation. 
According to Smithsonian officials, the Smithsonian’s federal appropriation totaled nearly $635 million in fiscal year 2007, with about $99 million for facilities capital and about $536 million for salaries and expenses, of which about $51 million was for facilities maintenance. In our previous work, we found that the facilities projects planned for the next 9 years exceeded funding at this level. As a result, we recommended that the Secretary of the Smithsonian establish a process for exploring options for funding its facilities needs and engaging the key stakeholders—the Smithsonian Board of Regents, the Administration, and Congress—in the development and implementation of a strategic funding plan to address the revitalization, construction, and maintenance projects identified by the Smithsonian. Smithsonian officials told us during our current review that the Smithsonian Board of Regents—the Smithsonian’s governing body, which is composed of both private citizens and members of all three branches of the federal government—has taken some steps to address our recommendation. In June 2005, the Smithsonian Board of Regents established the ad-hoc Committee on Facilities Revitalization to explore options to address the approximately $2.3 billion the Smithsonian estimated it needed for facilities revitalization, construction, and maintenance projects through fiscal year 2013. In September 2005, the ad-hoc committee held its first meeting, at which it reviewed nine funding options that had been prepared by Smithsonian management for addressing these projects. The options included the following:
- Federal income tax check-off contribution, in which federal income tax returns would include a check-off box to allow taxpayers to designate some of their tax liability to a special fund for the Smithsonian’s facilities.
- Heritage treasures excise tax, in which an excise tax would be created, and possibly levied on local hotel bills, to generate funds for the Smithsonian’s facilities.
- National fundraising campaign, in which the Smithsonian would launch a national campaign to raise funds for its facilities.
- General admission fee program, in which the Smithsonian would institute a general admission charge to raise funds for critical but unfunded requirements.
- Special exhibition fee program, in which the Smithsonian would charge visitors to attend a select number of special exhibitions to raise funds for critical but unfunded requirements.
- Smithsonian treasures pass program, in which visitors could purchase a Smithsonian treasures pass with special benefits, such as no-wait entry into facilities or behind-the-scenes tours.
- Facilities revitalization bond, in which the Smithsonian would borrow funds, such as through a private or public debt bond.
- Closing Smithsonian museums, in which the Smithsonian would permanently or temporarily close museums to the public in order to generate savings to help fund its facilities.
- Increasing Smithsonian appropriations, in which the Board of Regents and other friends of the Smithsonian would approach the Administration about a dramatic appropriations increase.
According to Smithsonian officials, after considering these nine proposed options, the ad-hoc committee decided to request an increase in the Smithsonian’s annual federal appropriations, specifically an additional $100 million over the Smithsonian’s current appropriation annually for 10 years, starting in fiscal year 2008, for a total of an additional $1 billion. 
In September 2006, according to Smithsonian officials, several members of the Board of Regents and the Secretary of the Smithsonian met with the President of the United States to discuss the issue of increased federal funding for the Smithsonian’s facilities. According to Smithsonian officials, during the meeting, among other things, the Regents discussed the problem of aging facilities and the need for an additional $100 million in federal funds annually for 10 years to address the institution’s facilities revitalization, maintenance, and construction needs. According to Smithsonian officials, the representatives of the Smithsonian at the meeting told the President that they had no other options to obtain this $100 million except the Smithsonian’s federal appropriation. According to Smithsonian officials, these representatives said the Smithsonian had made considerable expense cuts and raised substantial private funds, but donors are unwilling to donate money to repair and maintain facilities. The President’s fiscal year 2008 budget proposal, published in February 2007, proposed an increase of about $44 million over the Smithsonian’s fiscal year 2007 appropriation. The Smithsonian’s appropriation is divided into two categories. The about $44 million increase in the President’s budget proposal represented an increase of about $9 million for facilities capital and an increase of about $35 million for salaries and expenses, which includes facilities maintenance. However, funds in the salaries and expenses category also support many other activities, such as research, collections, and exhibitions, and it is not clear how much of the $35 million increase the Smithsonian would use for facilities maintenance. Moreover, Congress may choose to adopt or modify the President’s budget proposal when funds are appropriated for the fiscal year. 
As part of our ongoing work, we are reviewing the Smithsonian’s analysis of each funding option, including its potential for addressing its revitalization, construction, and maintenance needs. We plan to report on these issues later in the year. The Smithsonian’s estimate for revitalization, construction, and maintenance needs has increased at an average of about $100 million a year over the past 2 years. Therefore, the Smithsonian’s request for an additional $100 million a year may not actually reduce the Smithsonian’s estimated revitalization, construction, and maintenance needs but only offset the increase in this estimate. Absent significant changes in the Smithsonian’s funding strategy or significant increases in funding from Congress, the Smithsonian faces greater risk to its facilities and collections over time. Because our work is still ongoing, we have not yet determined why the Smithsonian has pursued only one of its nine options for increasing funds to support its significant facilities needs. At this time, we still believe our recommendation that the Smithsonian explore a variety of funding options is important to reducing risks to the Smithsonian’s facilities and collections. Madam Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. We conducted our work for this testimony in March 2007 in accordance with generally accepted government auditing standards. Our work is based on our past report on the Smithsonian’s facilities management and funding, our review of Smithsonian documents, and interviews with Smithsonian officials. Specifically, we reviewed the Smithsonian’s revised estimated costs for major revitalization projects from fiscal year 2005 through fiscal year 2013 and documents from the Board of Regents. We also reviewed the President’s fiscal year 2008 proposed budget and the Smithsonian’s federal appropriations for fiscal years 2005-2007. 
We are continuing to evaluate the Smithsonian’s efforts to strategically manage, fund, and secure its real property. Our objectives include assessing (1) the extent to which the Smithsonian is strategically managing its real property portfolio, (2) the extent to which the Smithsonian has developed and implemented strategies to fund its revitalization, construction, and maintenance needs, and (3) the Smithsonian’s security cost trends and challenges, including the extent to which the Smithsonian has followed key security practices to protect its assets. We are also examining how similar institutions, such as other museums and university systems, strategically manage, fund, and secure their real property. We expect to report on these issues later this year. In addition to those named above, Colin Fallon, Brandon Haller, Carol Henn, Susan Michal-Smith, Dave Sausville, Gary Stofko, Alwynne Wilbur, Carrie Wilks, and Adam Yu made key contributions to this report. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Smithsonian Institution (Smithsonian) is the world's largest museum complex and research organization. The age of the Smithsonian's structures, past inattention to maintenance needs, and high visitation levels have left its facilities in need of revitalization and repair. This testimony discusses our prior work on some effects of the condition of the Smithsonian's facilities and whether the Smithsonian has taken steps to maximize facility resources. It also discusses the current estimated costs of the Smithsonian's needed facilities projects. In addition, it describes preliminary results of GAO's ongoing work on the extent to which the Smithsonian developed and implemented strategies to fund these projects, as GAO recommended in prior work. The work for this testimony is based on GAO's 2005 report, Smithsonian Institution: Facilities Management Reorganization Is Progressing, but Funding Remains a Challenge; GAO's review of Smithsonian documents and other pertinent information; and interviews with Smithsonian officials. In 2005, GAO reported that facilities-related problems at the Smithsonian had resulted in a few building closures and posed a serious long-term threat to the collections. For example, the 1881 Arts and Industries Building on the National Mall was closed to the public in 2004 for an indefinite period over concern about its deteriorating roof structure. Currently, this building remains closed. GAO also found that the Smithsonian had taken steps to maximize the effectiveness of its existing resources for facilities. Preliminary results of GAO's ongoing work indicate that as of March 30, 2007, the Smithsonian estimated it would need about $2.5 billion for its revitalization, construction, and maintenance projects from fiscal year 2005 through fiscal year 2013, up from an estimate of $2.3 billion in 2005. In 2005, GAO recommended that the Smithsonian develop and implement a strategic funding plan to address its facilities needs. 
The Smithsonian Board of Regents--the Smithsonian's governing body--has taken some steps to address GAO's recommendation regarding a strategic funding plan. The board created an ad-hoc committee, which, after reviewing nine options, such as establishing a special exhibition fee, decided to request an additional $100 million annually in federal funds for facilities for the next 10 years, for a total of an additional $1 billion. The President's fiscal year 2008 budget, however, proposes an increase of about $44 million over the Smithsonian's fiscal year 2007 appropriation. It is not clear how much of this proposed increase would be used to support facilities or how Congress will respond to the President's budget request. Absent significant changes in the Smithsonian's funding strategy or significant increases in funding from Congress, the Smithsonian faces greater risk to its facilities and collections over time. GAO is continuing to evaluate the Smithsonian's efforts to strategically manage, fund, and secure its real property. We expect to publish a report on these issues later this year.
Motor vehicle crashes are complex events resulting from several factors, including driver behavior, the driving environment, and the vehicle. Vehicle design can affect safety through crashworthiness—that is, by providing occupants protection during a crash—and through crash avoidance—that is, by helping the driver to avoid a crash or recover from a driving error. Vehicle characteristics such as size, weight, and the type of restraint system affect crashworthiness because they play a large role in determining the likelihood and extent of occupant injury from a crash. Vehicle characteristics such as vehicle stability and braking performance are examples of crash avoidance features in that they aid the driver in preventing a crash from occurring. The New Car Assessment Program (NCAP) was established in response to a requirement in the Motor Vehicle Information and Cost Savings Act of 1972 to provide consumers with a measure of the relative crashworthiness of passenger vehicles. NCAP’s goals are to improve occupant safety by providing market incentives for vehicle manufacturers to voluntarily design vehicles with improved crashworthiness and to provide independent safety information to aid consumers in making comparative vehicle purchase decisions. NHTSA has pursued these goals by conducting frontal and side crash tests and a rollover test, assigning star ratings, and reporting the results to the public. In fiscal year 2004, NCAP conducted 85 crash tests and 36 rollover tests, with a budget of $7.7 million. NHTSA also administers the Federal Motor Vehicle Safety Standards. All motor vehicles sold in the United States for use on the nation’s highways must meet the minimum safety requirements set by the standards. The standards prescribe a minimum performance level for crashworthiness that vehicles must meet in a number of different crash tests. Auto manufacturers self-certify that their vehicles meet these minimum standards. 
To test compliance with some of these standards, NHTSA conducts 30 miles per hour (mph) frontal impact tests and 33.5 mph side impact tests for belted occupants. The Federal Motor Vehicle Safety Standards tests serve as a foundation for NCAP testing. The test protocols for NCAP’s frontal and side crash tests are the same as those for the safety standards, except that the NCAP tests are conducted 5 mph faster. NHTSA’s policy, although not required by law, has been to make changes to the safety standards before considering changes to NCAP. When considering changes to NCAP, NHTSA generally follows the informal rulemaking process, which includes seeking comments on proposed changes. NCAP provides consumers with information regarding the crashworthiness of new cars beyond the applicable Federal Motor Vehicle Safety Standards with which all vehicles sold in the United States must comply. There are no minimum performance levels for the NCAP tests. NHTSA tests as many vehicles as possible under NCAP to provide consumers with sufficient independent information to make vehicle comparisons. In contrast, NHTSA relies on auto manufacturers to self-certify compliance with the Federal Motor Vehicle Safety Standards and only conducts a limited number of tests to ensure manufacturer compliance. NHTSA conducted the first NCAP crash tests in 1978 on model year 1979 vehicles, measuring only the crashworthiness of passenger cars in frontal crashes. Since then, a number of vehicle tests have been added to NCAP, as shown in figure 3. For model year 1983, NHTSA expanded NCAP to include light trucks, vans, and SUVs. In 1996, NHTSA began the side-impact NCAP test for model year 1997 vehicles. NHTSA expanded the side-impact NCAP test to include light trucks, vans, and SUVs for model year 1999. NHTSA began to rate vehicles for their rollover risk beginning with the 2001 model year. 
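The fixed speed relationship between the safety-standard tests and NCAP can be written down directly. A trivial sketch in Python using the figures from the text; the dictionary and function names are illustrative:

```python
# Belted-occupant compliance test speeds from the Federal Motor Vehicle
# Safety Standards, as described in the report.
FMVSS_SPEEDS_MPH = {"full frontal": 30.0, "angled side": 33.5}
NCAP_DELTA_MPH = 5.0  # NCAP runs the same protocols 5 mph faster

def ncap_speed(test_name):
    """Return the NCAP test speed for a given safety-standard test."""
    return FMVSS_SPEEDS_MPH[test_name] + NCAP_DELTA_MPH

print(ncap_speed("full frontal"))  # 35.0
print(ncap_speed("angled side"))   # 38.5
```

These results match the 35 mph frontal and 38.5 mph side NCAP speeds described later in the report.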
NHTSA initially rated the risk of vehicle rollover by measuring the top-heaviness of a vehicle and comparing this measurement to the top-heaviness of vehicles involved in single-vehicle crashes, as reflected in crash data. As required by the November 2000 Transportation, Recall Enhancement, Accountability and Documentation (TREAD) Act, NHTSA began dynamic rollover testing on model year 2004 vehicles to supplement the measurement of a vehicle’s top-heaviness in determining a vehicle’s rollover risk. NHTSA conducts three types of tests in NCAP: a full frontal crash test, an angled side crash test, and a rollover test. NCAP ratings, designed to aid consumers in deciding which vehicle to purchase, are available to the public on the Internet and through NHTSA’s Buying a Safer Car brochure. NCAP crash results are also used in developing vehicle safety ratings by other organizations, such as Consumer Reports and The Car Book. Every year NHTSA tests new vehicles that are predicted to have high sales volume, have been redesigned with structural changes, or have improved safety equipment. NHTSA purchases vehicles—the base model with standard equipment—for frontal and side crash tests directly from dealerships across the country, just as the consumer would. The vehicles are provided to five contractors that conduct the crash tests. NCAP crash- test ratings only apply to belted occupants, as the crash test dummies used in NCAP tests are secured with the vehicle’s safety belts. According to NHTSA officials, NCAP crash-test ratings are available on about 85 percent of the new vehicles sold because ratings for some models that have had no significant safety or structural changes are carried over from year to year. For the rollover tests, which are nondestructive, NHTSA leases new vehicles, which are tested at one contractor location. Rollover risk ratings are available for about 75 percent of new vehicles sold, according to NHTSA officials. 
The full frontal crash test is the equivalent of two identical vehicles, both traveling at 35 mph, crashing into each other head-on. The test vehicle is attached to a cable and towed along a track at 35 mph so that the entire front end of the vehicle engages a fixed rigid barrier, as shown in figure 4. This type of crash test produces high levels of occupant deceleration, making this test demanding of the vehicle’s restraint system. Click the following link to watch a video of a full frontal crash test conducted by NHTSA NCAP at 35 mph: http://www.gao.gov/media/video/d05370v1.mpg Because the full frontal crash test is equivalent to two identical vehicles moving toward each other at 35 mph, the crash test results can only be compared to other vehicles in the same class whose weight is within 250 pounds of the test vehicle’s. The test protocols for the full frontal NCAP test are the same as the full frontal belted test in the Federal Motor Vehicle Safety Standards, with the exception of the test speed—the NCAP test is conducted at 35 mph, 5 mph faster than the standard test. The angled side crash test simulates an intersection collision in which one moving vehicle strikes another moving vehicle. The test vehicle is positioned such that the driver’s side forms a 63 degree angle with the test track. On the other end of the test track is a chassis with a barrier also turned at a 63 degree angle. The barrier is made of a deformable material to replicate the front of another vehicle and is attached to a cable that tows it down a track into the test vehicle at 38.5 mph. Both the barrier face and the driver’s side of the vehicle are parallel, so that the entire face of the barrier impacts the side of the vehicle, as shown in figure 5. 
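The weight-comparability rule for the frontal test above lends itself to a short check. A minimal sketch, with the function name and inputs chosen for illustration:

```python
def frontal_ratings_comparable(class_a, weight_a_lb, class_b, weight_b_lb):
    """Frontal NCAP results are comparable only between vehicles in the
    same class whose weights differ by no more than 250 pounds, because
    the test simulates a head-on crash between two identical vehicles."""
    return class_a == class_b and abs(weight_a_lb - weight_b_lb) <= 250

# A 3,400 lb SUV and a 3,600 lb SUV can be compared; an SUV and a
# similar-weight passenger car cannot.
print(frontal_ratings_comparable("SUV", 3400, "SUV", 3600))            # True
print(frontal_ratings_comparable("SUV", 3400, "passenger car", 3450))  # False
```

The side-test ratings, by contrast, can be compared across weight classes, since every vehicle is struck by the same moving barrier.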
Click the following link to watch a video of an angled side crash test conducted by NHTSA NCAP at 38.5 mph: http://www.gao.gov/media/video/d05370v2.mpg Because all vehicles are hit with the same force by the same moving barrier, test results can be compared across weight classes. The barrier used in this test weighs approximately 3,015 pounds, and the top of the deformable face is approximately 32 inches from the ground. The side NCAP test is similar to the Federal Motor Vehicle Safety Standards test, with the exception that the side NCAP test is conducted at 38.5 mph, or 5 mph faster than the safety standard test. The dynamic rollover test simulates a driver making a high-speed collision avoidance maneuver—steering sharply in one direction, then sharply in the other direction—within about 1 second. NHTSA has focused its rollover test primarily on pickups and SUVs because cars are not susceptible to tipping up in this test. The rollover test is actually a series of four runs, two left/right tests and two right/left tests, at two different steering wheel angles and different speeds. Before the test, the vehicle is loaded to represent five passengers and a full tank of gas. During the test, the steering wheel is turned sharply in one direction at a high speed and then turned sharply in the opposite direction at a greater steering angle. The first run of each test is conducted at 35 mph. Subsequent runs are conducted at about 40 mph, 45 mph, 47.5 mph, and 50 mph until the vehicle “tips up” as defined by the test procedures or completes the final 50 mph run without tipping up. Tipping up is defined as both wheels on one side of the vehicle lifting off the ground more than 2 inches simultaneously, which most commonly occurs during the second turn, as shown in figure 6. Outriggers are attached to the vehicle to prevent it from tipping all the way over and injuring the test driver. 
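The run sequence described above can be expressed as a small loop. A sketch using the speeds and the 2-inch tip-up threshold given in the text; the data structure representing measured wheel lift is illustrative:

```python
RUN_SPEEDS_MPH = [35.0, 40.0, 45.0, 47.5, 50.0]   # per-run target speeds
TIP_UP_THRESHOLD_IN = 2.0  # both same-side wheels lifting more than 2 inches

def run_rollover_series(wheel_lift_in_by_speed):
    """Step through the runs at increasing speed; stop at the first tip-up,
    or report completion if the 50 mph run finishes without one."""
    for speed in RUN_SPEEDS_MPH:
        if wheel_lift_in_by_speed.get(speed, 0.0) > TIP_UP_THRESHOLD_IN:
            return ("tip-up", speed)
    return ("no tip-up", RUN_SPEEDS_MPH[-1])

# A vehicle that lifts both same-side wheels 3 inches at 47.5 mph
# ends the series there; a vehicle with no lift completes all runs.
print(run_rollover_series({47.5: 3.0}))
print(run_rollover_series({}))
```

In the actual test each of the four runs (two left/right, two right/left) follows this pattern; the sketch shows a single series.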
NHTSA separately rates the frontal, side, and rollover tests. It assigns one (worst) to five (best) stars to communicate the results of the three tests and to aid consumers in their vehicle purchase decisions. Each star in the frontal and side ratings corresponds to a diminishing probability of a potentially life-threatening injury, whereas each star in the rollover rating corresponds to a reduced likelihood of vehicle rollover. The rollover rating does not represent the chance of a potentially life-threatening injury should a rollover crash occur. Frontal and side star ratings represent the chances of a person wearing a safety belt incurring an injury serious enough to require immediate hospitalization or to be life threatening in the event of a crash. Frontal star ratings indicate the combined chance of a serious head and chest injury to the driver and right front seat passenger, as shown in figure 7. Side star ratings indicate the chance of a serious chest injury to the driver and the rear seat driver’s side passenger, as shown in figure 8. NHTSA reports two separate star ratings for the frontal and side crash test, according to the occupant position. In the side and frontal test, NHTSA uses crash test dummies that represent an average-sized adult male. Each dummy is secured with the vehicle’s safety belts prior to the crash test. The dummies are fitted with instruments that measure the force of impact experienced in different parts of the body during the crash. While only forces to the head and chest are used to calculate the frontal star ratings, impacts to each dummy's neck, pelvis, legs, and feet are also measured. For the frontal rating, NHTSA calculates the chance of serious injury to the head and chest by linking measured forces on the dummies’ heads and chests during the crash test to information about human injury. 
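The mapping from injury probability to stars can be sketched as a threshold table. The specific cutoffs below are assumptions chosen for illustration, not NHTSA's published values, which accompany the actual ratings:

```python
# Illustrative injury-probability bands (as fractions). Each pair is
# (upper bound of the band, stars awarded); lower probability earns
# more stars. These exact cutoffs are an assumption for this sketch.
FRONTAL_BANDS = [(0.10, 5), (0.20, 4), (0.35, 3), (0.45, 2)]

def frontal_stars(p_serious_injury):
    """Map a combined head/chest serious-injury probability to a star
    count under the assumed bands above."""
    for ceiling, stars in FRONTAL_BANDS:
        if p_serious_injury <= ceiling:
            return stars
    return 1  # anything above the last band gets the lowest rating

print(frontal_stars(0.08))  # 5
print(frontal_stars(0.30))  # 3
print(frontal_stars(0.50))  # 1
```

The point of the sketch is the structure: stars are a coarse, monotone summary of a continuous injury probability, which is why each additional star corresponds to a diminishing chance of serious injury.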
For the side rating, NHTSA calculates the chance of serious injury to the chest by linking measured forces on the dummies’ ribs and lower spine during the crash test to information about human injury. Forces to the head and pelvis are also measured but are not included in side star ratings. NHTSA’s rollover star ratings represent the propensity of a vehicle to roll over but do not address the probability of a severe injury in a rollover crash. Knowing a vehicle’s propensity to roll is important because rollovers are the deadliest type of crash. While totaling just over 2 percent of police-reported crashes, rollovers account for almost one-third of all passenger vehicle occupant fatalities. The crash avoidance rollover rating is based primarily on the measure of a vehicle’s top-heaviness, as shown in figure 9, and, to a lesser extent, the results of the dynamic test. NHTSA uses the measure of a vehicle’s top-heaviness to predict the likelihood of a vehicle rolling over under the circumstances that occur most often—when a vehicle leaves the roadway and the vehicle’s wheels hit a curb, soft shoulder, or other roadway object, causing it to roll over. These “tripped” rollovers account for about 95 percent of all rollover crashes. NHTSA’s dynamic rollover test does not correspond to these types of rollovers because it does not involve the vehicle hitting a tripping mechanism, such as a curb or soft shoulder. As such, NHTSA’s dynamic rollover test does not affect the star rating significantly, resulting in no more than a half-star difference in a vehicle’s rollover rating. NHTSA primarily selects top-heavy vehicles, such as light trucks, small vans, and SUVs, for the rollover test. NHTSA assigns one to five stars to reflect the chance of rollover, as shown in figure 10. NHTSA distributes NCAP safety ratings and information about a vehicle’s safety features through its Web site, press releases, and the Buying a Safer Car brochure. 
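As background on the top-heaviness measure discussed above: NHTSA's metric is its static stability factor, half the track width divided by the height of the center of gravity. The report does not name the metric, so this is stated here as background, and the example dimensions are illustrative:

```python
def static_stability_factor(track_width_in, cg_height_in):
    """SSF = T / (2 * H): track width over twice the center-of-gravity
    height. Lower values indicate a more top-heavy, rollover-prone
    vehicle, which is why the rating targets light trucks and SUVs."""
    return track_width_in / (2.0 * cg_height_in)

# Illustrative dimensions: a tall SUV vs. a low sedan.
print(round(static_stability_factor(61.0, 27.0), 2))  # 1.13 (more top-heavy)
print(round(static_stability_factor(62.0, 21.0), 2))  # 1.48 (more stable)
```

A higher factor means the vehicle must tilt further before its center of gravity passes over the outside wheels, consistent with the report's point that cars are not susceptible to tipping up in the dynamic test.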
NHTSA primarily relies on the Web site to educate consumers about vehicle safety; in 2004 there were about 4.3 million visits to the NCAP Web site. The Web site was last redesigned in August 2004 and provides information about crash test ratings from model year 1990 to the present. To view a vehicle’s ratings, users can search using parameters such as vehicle class, year, make, and model. Once a vehicle class and year are selected, the list of vehicles comes up with the star rating information, as shown in figure 11. Users can get more detailed information about the vehicle’s star rating by selecting a specific vehicle, as shown in figure 12. In addition to the Web site, NCAP’s star ratings and a list of vehicles’ safety features are available in the Buying a Safer Car brochure. The American Automobile Association primarily distributes the brochure, and it is also available at NHTSA’s regional offices, state highway safety offices, and libraries. For vehicle model year 2004, NHTSA published 25,000 copies of the Buying a Safer Car brochure. For vehicle model year 2005, NHTSA published a first printing of the brochure in December 2004. In addition, it plans to print a second brochure in spring 2005. While the 2004 edition does not have all the test results for model year 2005, it has a large number of carryover vehicles from model year 2004 plus some early 2005 tests. Other sources of vehicle safety information that use data from NCAP crash tests include Consumer Reports and The Car Book. Consumer Reports takes into consideration a vehicle’s performance in NHTSA NCAP tests and tests conducted by the Insurance Institute for Highway Safety (Insurance Institute) to determine an overall crash-protection rating. Instead of printing stars, Consumer Reports uses a circle rating scheme. 
Consumer Reports publishes this crash-protection rating, as well as individual NHTSA and Insurance Institute front and side crash test results, in its monthly magazine, in all of its newsstand-only new-car publications, and on its Web site. Consumer Reports magazine has about 4 million subscribers, but representatives told us they reach more than 13.5 million people monthly through pass-along readership. The Web site has an additional 1.8 million subscribers. Published annually, The Car Book provides consumers with a broad range of information about new vehicles, listed alphabetically by model. Information such as fuel economy, repair costs, and front and side crash test results is included in the book. The Car Book takes the NCAP raw test results and converts them into a numerical rating scheme, 10 being best and 1 being worst. In addition to the information by vehicle model, The Car Book also presents detailed safety information based on the safety features of each car and the government’s rollover ratings. Since first being published privately for the 1983 vehicle model year, The Car Book has sold over 1.5 million copies. We identified four other programs that crash test vehicles and report the results to the public—the Insurance Institute for Highway Safety program in the United States and NCAP programs in Australia, Europe, and Japan. All of the programs shared the U.S. NCAP’s goals of providing manufacturers with an incentive to produce safer vehicles and providing consumers with comparative safety information on the vehicles they plan to purchase. We found differences in the types of tests conducted, how the crash tests were evaluated, and how the test results were shared with the public. In addition, we found that each program had varied levels of government and industry involvement. Each of the organizations we examined conducts a variety of frontal, side, and other tests designed to measure various elements of vehicle safety. 
Figure 13 shows the tests performed across the U.S. NCAP and the four other programs. (See appendixes II through VIII for additional discussion on each program and the tests conducted.) The five programs we examined use two crash tests to represent frontal crashes—full frontal and offset crash tests. The U.S. and Japan NCAPs conduct full frontal tests, which involve crashing the test vehicle’s entire front end into a solid barrier. The offset frontal test involves crashing the test vehicle traveling at 40 mph (64 kilometers per hour—km/h) into a deformable barrier with about 40 percent of the vehicle’s overall width on the driver’s side actually impacting the barrier, as shown in figure 14. All programs, except the U.S. NCAP, conduct the offset frontal test. Click the following link to watch a video of an offset frontal crash test conducted by Australia NCAP at 40 mph: http://www.gao.gov/media/video/d05370v4.mpg The full frontal and offset frontal tests measure different characteristics of vehicle crashworthiness. The full frontal test focuses on measuring the ability of the vehicles’ restraint systems to protect the occupants. The offset frontal test assesses a vehicle’s structural integrity and its ability to manage the crash energy generated from a crash entirely on one side of the vehicle. Officials from the programs using the offset test told us they believe it is more representative of real-world crashes because most frontal crashes involve vehicles hitting only a portion of their front ends. Three types of side-impact tests are conducted among the programs we examined—the angled side test, the perpendicular side test, and the pole side test. Only the U.S. NCAP performs the angled side test. All of the other testing programs conduct a perpendicular side test. This test involves crashing a moving deformable barrier traveling at about 31 mph (50 km/h) into a stationary vehicle at a 90 degree angle centered on the driver’s seating position. 
Figure 15 illustrates how the perpendicular test is performed. Click the following link to watch a video of a perpendicular side impact crash test conducted by Euro NCAP at 31 mph: http://www.gao.gov/media/video/d05370v5.mpg Other differences between the side tests were the height, shape, and weight of the barriers and the crash dummies used. For example, the U.S. NCAP and the three foreign programs performed their side tests using a moving deformable barrier with a front end simulating a passenger car, while the Insurance Institute’s barrier simulates the front end of a typical pickup truck or SUV. In addition, the Insurance Institute barrier weighs about 3,300 pounds (1,500 kilograms—kg) compared to 3,015 pounds (1,367 kg) for the U.S. barrier and 2,095 pounds (950 kg) for the Australian, European, and Japanese barriers. Also, the Australia, Europe, Japan, and U.S. side tests used 50th percentile adult male dummies and the Insurance Institute used 5th percentile adult female dummies. Insurance Institute officials told us they found that in serious real-world side-impact collisions, occupants’ heads are often struck by intruding vehicles, especially in the side collisions involving pickup trucks or SUVs with high front hoods. As a result, in 2003 when they began their side impact test, they developed the barrier to simulate these types of vehicles, while using dummies that represented smaller occupants. They said that the test challenges the automobile industry to provide additional occupant protection specifically for the head region. Figure 16 shows the difference in the size and height of the barriers, while figure 17 shows the crash test. Click the following link to watch a video of a side-impact crash test with an SUV-like barrier conducted by the Insurance Institute for Highway Safety at 31 mph: http://www.gao.gov/media/video/d05370v6.mpg The Australia NCAP and European NCAP (Euro NCAP) also include optional pole side tests. 
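The pound/kilogram pairs quoted for the three barriers are consistent under a standard unit conversion. A quick check in Python; the values are taken from the text:

```python
LB_PER_KG = 2.20462  # standard pounds-per-kilogram conversion factor

def kg_to_lb(kg):
    """Convert kilograms to pounds."""
    return kg * LB_PER_KG

# Barrier weights from the programs: Insurance Institute, U.S. NCAP,
# and the Australian/European/Japanese NCAPs, respectively.
for kg, reported_lb in [(1500, 3300), (1367, 3015), (950, 2095)]:
    print(f"{kg} kg -> {kg_to_lb(kg):.0f} lb (reported as about {reported_lb} lb)")
```

Each computed value lands within a few pounds of the rounded figure the report gives.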
The pole side test involves a side impact to a vehicle placed on a platform and propelled at about 29 km/h (about 18 mph) into a stationary cylindrical pole. The pole test is an optional extra test, available at the manufacturer’s cost. This option is only available if a vehicle has head-protecting side air bags and receives the highest score in the side-impact test. If the vehicle performs well in the pole test, the vehicle can receive a higher overall score. Officials in Europe said this test is important, for example, because in Germany over half of the serious to fatal highway injuries occur when a vehicle crashes into a pole or a tree. The test is designed to encourage auto manufacturers to equip vehicles with head protection devices. Officials in Australia stated they are considering replacing the perpendicular side test with a pole side test to better test the increasing number of SUVs on their roadways. They said that SUVs are higher off the ground and heavier than most passenger cars. As a result, SUVs would always score higher under the current side-impact test because the barrier often impacts below the hip point on the dummy and would register little injury data. The pole test will impact all vehicles, including SUVs, the same way regardless of height and weight. NHTSA officials told us that while they have no plans at this time to include this test in NCAP, they plan to investigate revisions to the side NCAP once the pole test requirements for the Federal Motor Vehicle Safety Standards are resolved and finalized. Figure 18 illustrates how the pole test is performed. In addition to the frontal and side crash tests, other safety tests are conducted in the various programs. These include vehicle rollover, pedestrian protection, and child restraint tests. The U.S. NCAP is the only program to conduct a vehicle rollover test. 
Officials of the other NCAPs told us they do not conduct this test because rollover has not been a major problem in their countries due to their smaller vehicle fleets. However, Australian NCAP officials told us they have noted a growth in the size of their vehicle fleet, and they will be evaluating the usefulness of adding a rollover test to their program. The NCAPs in Australia, Europe, and Japan also conduct pedestrian tests, which are used to assess the risk to pedestrians if struck by the front of a car. The pedestrian test involves projecting adult and child-sized dummy parts (such as heads) at specified areas of the front of a vehicle to replicate a car-to-pedestrian collision. Officials in these programs said they included this test because pedestrian fatalities in some of their countries were quite high. For example, in 2003 pedestrians accounted for nearly 30 percent of the annual traffic fatalities in Japan, 20 percent in Europe (nearly 30 percent in the United Kingdom alone), and 14 percent in Australia. In contrast, in the United States, approximately 5,000 pedestrians were killed in motor vehicle crashes in 2003, accounting for 13 percent of the annual traffic fatalities. Figure 19 illustrates how the pedestrian protection test is performed. Click the following link to watch a video of a pedestrian test, where a head form is propelled into a vehicle hood, conducted by Euro NCAP: http://www.gao.gov/media/video/d05370v8.mpg The NCAPs in Europe and Japan also conduct child restraint tests to evaluate child protection, although these tests are not directly related to crashworthiness. In Europe, two different child-size dummies are placed in child seats of the auto manufacturer’s choice during the frontal and side crash tests, as shown in figure 20. In Japan, two child-size dummies are placed in child seats installed in the rear passenger seats of a test vehicle that has been stripped down to its body frame. 
The test vehicle is placed on a sled and subjected to a shock equivalent to the speed used in the full frontal crash test. Japan NCAP also separately assesses the ease of correctly using child seats. NHTSA officials told us that the U.S. NCAP is conducting a pilot test to determine whether adding child safety seats to the frontal NCAP test would provide meaningful consumer information. NHTSA also provides ratings on child safety seat ease of use. Each vehicle testing organization used crash dummy readings as a principal part of its rating process. However, we found some differences in other aspects of the organizations’ rating processes. For example, all programs except NHTSA supplement the dummy measures with inspector observations or measurements of the post-crash vehicles. In addition, in Europe and Australia, rating scores can be modified depending on the existence or absence of certain safety features. Further, each program except the Insurance Institute uses stars to convey the test results, and some programs combine individual ratings into summary ratings in an effort to make it easier for the public to understand crash test results. The four organizations we reviewed used more dummy measures in calculating a vehicle’s safety rating than the U.S. NCAP does. The U.S. NCAP uses head and chest crash dummy readings in frontal crashes and chest and lower spine readings for side crashes, then converts them to a probability of serious injury, which in turn is converted into a star rating. NHTSA officials said they use these measures because they are the most important indicators of serious or fatal injury in frontal and side crashes. In addition to the U.S. NCAP measures, the Insurance Institute uses measurements of the neck, left leg and foot, and right leg and foot for its frontal crash analysis and measurements of the head, neck, pelvis, and left leg for its side crash analysis. 
Australia and Euro NCAP use the neck, knee, femur, pelvis, and leg and foot for frontal tests and head, abdomen, and pelvis for side tests. Japan uses neck, femur, and tibia measurements for its frontal crash analysis and head, abdomen, and pelvis measurements for its side crash analysis. The other organizations use some of these additional measures to capture what in some cases may not necessarily be life-threatening injuries, such as those to the victim’s legs. As discussed earlier, the U.S. NCAP measures the impact of crashes on many of the same body regions but does not use them to calculate safety ratings. In addition to differences in the body areas being measured, some programs use different dummies in their side-impact tests. For the frontal tests, the U.S. NCAP and other organizations use dummies that represent an average-size adult male who is 5 feet 9 inches tall and weighs about 170 pounds. While this size dummy is used by most programs for the side-impact tests, there are differences in the dummy types and the instrumentation they contain. In addition, in its side-impact tests, the Insurance Institute uses a smaller female dummy (about 5 feet tall and weighing about 110 pounds). Insurance Institute officials said they chose this dummy because there is evidence that females are more at risk in side collisions. The Insurance Institute hopes this test will encourage manufacturers to install side curtain air bags that are designed to extend low enough to protect smaller passengers. Although NHTSA’s proposed changes to the Federal Motor Vehicle Safety Standards would add a side-impact pole test using the average-size male and the smaller female dummies, NHTSA officials said that at this time they have no plans to alter the sizes or types of crash dummies they use but plan to investigate revisions to the side NCAP once the pole test requirements for the safety standards are resolved and finalized. Another distinction between the U.S. 
program and other programs is the use of observations to modify test results. All programs except the U.S. NCAP observe or measure changes to various parts of the occupant compartment after the frontal crash test to identify potential safety concerns. For example, the Euro NCAP measures the intrusion of the steering column and lower leg area into the occupant compartment. Euro NCAP officials noted that while an intrusion may not have affected the dummy in the test, the potential for serious injury to vehicle occupants in real-world crashes causes them to lower the safety rating. Japan’s NCAP also measures intrusion into the passenger compartment, but rather than relying on observation, Japan has established fixed measures that if exceeded will result in a lower score in a particular area. The U.S. NCAP does not use observations to modify test scores. According to a NHTSA official, these observations add subjectivity to the rating assessments and are not based on criteria that can be repeated and substantiated. Many of the automobile manufacturers we contacted stated that using observations adds a subjective element to the test that is difficult for them to replicate. Additionally, some pointed out that in some cases different inspectors could reach different conclusions. Another basic difference in scoring vehicles is the use of a modifier system in Europe and Australia. This system adjusts the score generated from the dummy injury data where injuries to occupants can be expected to be worse than indicated by the dummy readings or the vehicle deformation data alone. For example, a frontal test modifier might result in points being deducted if the dummy’s head hit the steering wheel in a vehicle without an air bag. The system in Europe and Australia also adjusts points based on the existence or absence of various safety features on the test vehicles. 
For example, a test vehicle can get extra points if it has a safety belt reminder system that meets their NCAP specifications. Officials said they use this approach to encourage manufacturers to install new safety features sooner than might otherwise occur. Officials from several organizations and automobile manufacturers operating under the Europe and Australia programs expressed concerns that some of the modifiers might not have a direct impact on occupant safety and could artificially increase scores. They noted, for example, that in some countries safety belt usage exceeds 90 percent and that giving extra points for a feature to encourage safety belt use may not really add to safety. In addition, some automobile manufacturers identified concerns with how items included in the modifier system are developed and measured. They said that in some cases they have received just 6 months’ notice of changes. They said that such changes can be expensive and that they need to be notified sooner, so they have time to make changes to comply with new measures. Except for the Insurance Institute, all programs used stars to convey test results. Officials from the NCAPs noted that star ratings are well understood by the public. For example, NHTSA officials said they used focus groups in 1993 to examine various options to communicate crash test results to the public, and the five-star rating was found preferable. In addition, officials in the other programs told us they followed the U.S. NCAP’s use of star ratings. None of the programs has plans to change its rating measures. There have been some concerns expressed about the use of stars. For example, a 1996 study by the National Academy of Sciences noted that stars are inherently positive symbols and the public may not understand the distinctions between the different levels of stars. 
In addition, officials of a consumer group noted that most people would associate the star rating with hotels and that staying in a three-star hotel would be quite acceptable to most people. In discussing its use of Good, Acceptable, Marginal, and Poor, the Insurance Institute said it considered these types of qualitative measures as being clearer to the general public. Australia, Europe, and Japan NCAPs provide summary ratings, while the U.S. NCAP provides only individual ratings for each seating position that is included in the test for the frontal and side crash tests. For example, Australia and Euro NCAPs provide overall ratings that combine the frontal and side crash tests. Japan’s NCAP combines frontal and side crash tests to provide overall ratings for the driver and passenger of a vehicle. Australian and European officials explained that they believed potential vehicle purchasers can be confused by the large amount of detail available on the test results and that summarizing results makes the ratings more useful. They noted they make the actual injury readings available for those interested in that level of detail. In addition, while the Insurance Institute does not combine individual ratings, it does identify “Best Pick – Frontal” and “Best Pick – Side” to assist consumers. Similarly, officials with publications like Consumer Reports and The Car Book told us they have found it helpful to provide consumers with summarized rating information. NHTSA officials noted that overall or summary ratings might hide or mask deficiencies in some areas of the tests. For example, they said that if a vehicle were to get a very high frontal rating and a very low side rating, merging the results could give consumers a misleading impression of the overall safety of that vehicle. The crash testing programs we examined used a variety of approaches to share safety results with the public. 
Across all the programs, the Internet was the most relied-upon source for getting information to consumers, with each organization providing details of its test results. Safety pamphlets were used by all programs to supplement the safety information presented on their Web sites. Some programs also work with the news media to increase awareness of test results. Each organization made the results of its testing program available to the public on the Internet. In general, the public can access the results of individual tests, including the actual numeric dummy readings. To help the public understand these results, each Web site uses charts, tables, and graphics. For example, in addition to providing star ratings, the Euro NCAP also uses color-coded dummy injury diagrams to display how the specific body regions perform in the frontal, side, pole, and pedestrian tests. The color-coded indicators are: Good (Green), Adequate (Yellow), Marginal (Orange), Weak (Red), and Poor (Brown). The color used is based on the points awarded for that body region, as shown in figure 21. Each testing organization publishes the results of its testing programs. The U.S. NCAP publishes the Buying a Safer Car booklet, which provides new and carryover crash test ratings. The Insurance Institute publishes a Status Report newsletter about 10 times a year, which contains new crash test ratings as well as other highway safety information. It can be obtained in hard copy through subscription, as well as downloaded from the Insurance Institute’s Web site. Australia publishes a Crash Test Update brochure twice a year, which provides new crash test results. According to Euro NCAP officials, Euro NCAP divides its tests into two test phases and releases the results twice a year—in November and June. The results are also published by What Car? (a British car magazine), Which? 
Car (a magazine owned and produced by British consumer associations), and the General German Automobile Association (ADAC) magazine. Other consumer magazines in Europe also provide crash test information. Lastly, Japan annually publishes the Choosing a Safer Car booklet, which provides new and carryover crash test results. The Japan NCAP also publishes summary brochures of test results. Like the U.S. NCAP, the Insurance Institute and the Australia and Euro NCAPs worked with the news media to inform consumers about the results of the vehicle safety tests. For example, each program issued press releases to convey the results of safety research and crash tests. In addition, the Insurance Institute has worked with television broadcasts, such as the prime time news magazine program Dateline NBC, to raise the public’s awareness of how vehicles perform in the program’s crash tests. Insurance Institute officials grant interviews explaining the results of the tests and use broadcast-quality film and lighting to record the crash tests and make them available for television broadcasts. According to Japan NCAP officials, they work with television shows to help produce news segments that highlight changes in test procedures and recent test results. Further, according to Euro NCAP officials, in addition to other activities, Europe promotes consumer education by using crashed vehicles as public displays in prominent places in Europe during press conferences. The events are designed to attract news media and public attention in an attempt to increase public interest in and knowledge about car safety. The level of government and industry involvement varies among the crash test programs. For example, the U.S. NCAP, which is operated and funded solely by the U.S. DOT, has traditionally based its U.S. 
NCAP on the Federal Motor Vehicle Safety Standards as a matter of agency policy and follows an informal rulemaking process where industry and other interested parties can submit comments once NHTSA issues a notice of proposed rulemaking. The Insurance Institute, which is funded by private insurance companies, has no such process and can make an internal decision to modify tests at any time. For example, according to Insurance Institute officials, when they began their side-impact tests, they developed a crash test barrier to represent the risk of severe head injuries in side impacts by SUVs and pickups. The Insurance Institute officials said they did not involve automobile manufacturers in the decision-making process but informed them as well as NHTSA before implementing the change. The Australian NCAP was developed by and is managed mainly by private motor clubs but includes government transportation departments in six Australian states and territories, the New Zealand government, and consumer groups. The national Australian government sets minimum safety standards for vehicles but is not involved in funding or managing NCAP. Similarly, the Euro NCAP is sponsored by the governments of Great Britain, Sweden, Germany, France, and the Netherlands, as well as a number of motor clubs and consumer organizations. According to Euro NCAP officials, each sponsoring member agrees to perform or sponsor a number of crash tests and participates in making the decisions related to the program. In Australia and Europe, NCAP officials told us that by not being exclusively controlled by government, they have flexibility when modifying their programs. They said that as a result they can make changes more quickly because they do not have to follow governmental procedures. According to NCAP officials, the decision processes for Australia and Europe involve the use of committees and working groups to examine issues and make recommendations for change. 
The automobile industry and public safety organizations may be involved in providing research or opinions, but the committees are free to make decisions they believe are appropriate. When these committees make recommendations, the full governing body votes to accept or reject the changes. The government partners have a vote in the process but cannot veto the result. In Australia, according to NCAP and government officials, automobile manufacturers were initially reluctant to engage in meaningful dialogue with the officials of the Australia NCAP. However, more recently, Australia NCAP officials have consulted with manufacturers prior to making changes in the program and have received positive responses. On the other hand, the Euro NCAP allows industry representatives to participate in the discussions of the subgroups of its two technical working groups—primary safety and secondary safety. Also, the technical working groups and automobile manufacturers engage in direct dialogue in industry liaison meetings. According to NCAP officials, Japan’s NCAP is funded by the government but administered by an independent, government-appointed committee. The committee includes members who are experts from automobile research institutes, academics, journalists, and representatives of the Japanese automobile industry and the automobile importers association. This government/industry committee manages the program and must approve changes submitted by program officials. The committee reaches its decisions through consensus. Although the government ministry that oversees the program may override the committee’s decisions, this has never occurred. NCAP has been successful in encouraging manufacturers to produce safer vehicles and providing consumers with comparative safety information. However, the program is at a crossroads where it will need to change to maintain its relevance. 
The usefulness of the current tests has been eroded by changes in the vehicle fleet that have occurred since the program began. Today there are many more large pickups, minivans, and SUVs than existed 27 years ago and new safety hazards have resulted from the incompatibility between large and small vehicles and rollover crashes, which are not fully addressed by current NCAP tests. In addition, because most vehicles now receive four- or five-star ratings, the NCAP tests provide little incentive for automakers to continue to improve vehicle safety and little differentiation among vehicle ratings for consumers. Lastly, NHTSA is upgrading its frontal and side crash tests in the Federal Motor Vehicle Safety Standards, which will make current NCAP tests less meaningful. Opportunities to enhance the program include developing approaches to better measure the effects of crashes between large and small vehicles and occupant protection in rollovers, rating technologies that help prevent crashes from occurring, and using different measures to rate the crash results. NHTSA also has opportunities to enhance the presentation and timeliness of information provided to consumers. NCAP testing has contributed to more crashworthy passenger vehicles and NHTSA has informed the public of test results. As shown in figure 22, there has been a substantial increase in the average star rating of vehicles since testing began. In 2004, tested vehicles averaged about 4.6 stars for the driver in frontal crash tests, about 4.4 stars for the passenger in frontal crash tests, about 4.4 stars for the driver in side crash tests, and about 4.3 stars for the rear passenger in side crash tests. The improved ratings indicate that manufacturers have taken NCAP seriously and designed and built vehicles that do well on NCAP tests. Automakers told us that vehicle safety and NCAP test results have become an important marketing tool. 
As a result, many auto manufacturers advertise five-star ratings in government crash tests in their television, radio, and print ads. NHTSA has informed the public of the NCAP test results through its Web site and by publishing a safety brochure. In addition, according to NHTSA officials, the NCAP Web site has been redesigned in an effort to make it more user-friendly. More importantly, NCAP crash test results are used by popular publications that influence large segments of the car-buying public. Both Consumer Reports and The Car Book use NCAP test results as part of their vehicle safety ratings. While NCAP has been successful in encouraging manufacturers to make safer vehicles, it will need to change to remain relevant. There have been significant changes in the makeup of the nation’s vehicle fleet, a growing similarity of crash test ratings, and upgrades in the safety standard tests for frontal and side crashworthiness. Without addressing these changes, NCAP provides little incentive to manufacturers to continue to improve safety and may provide consumers with only limited comparative information on vehicle safety. Since NHTSA began NCAP testing in 1979, there have been dramatic changes in the vehicle fleet. Vehicles such as pickups, minivans, and SUVs have transformed the fleet once dominated by passenger cars. There are now more than 85 million pickups, minivans, and SUVs on the road, representing about 37 percent of the vehicle fleet. The change in vehicle fleet presents new safety challenges that NCAP’s testing does not fully address—vehicle incompatibility and rollover. The issue of incompatibility emerges when a large vehicle such as a pickup, minivan, or SUV crashes into a smaller, lighter vehicle because the larger vehicle can inflict serious damage that is particularly dangerous to the occupants of the smaller vehicle. The current NCAP frontal and side tests do not account for vehicles of different size, weight, and geometry crashing into one another. 
Significant differences in ratings can result when tests are designed to address these vehicle differences, as evidenced by comparing the Insurance Institute side test results with NCAP results. The Insurance Institute, which uses a higher SUV-like barrier, gave 27 vehicles its lowest rating (Poor) in side-impact tests, primarily because there were no side air bags in the vehicle. NHTSA, which uses a low barrier and, unlike the Insurance Institute, does not include head measures in its star calculations, gave 21 of these same 27 vehicles (77 percent) four- or five-star safety ratings. Also, with the increase in pickups, minivans, and SUVs in the nation’s fleet, vehicle rollover has become a more important issue; in 2003, rollovers accounted for over 10,000 fatalities, or more than 30 percent of all passenger vehicle occupant fatalities. However, the NCAP rollover test only measures the likelihood that a vehicle will roll over and does not assess the safety afforded to occupants should a rollover occur. NCAP frontal and side crash test results have improved to a point where there is little difference among most vehicles’ ratings. In 2004, NHTSA provided the public with NCAP rating information for 234 vehicles. Most of these vehicle ratings were four or five stars for drivers and passengers in frontal and side crash tests, as shown in figure 23. The vehicles crash tested more recently have done even better. Of the 49 frontal and 18 side crash tests conducted in 2004, over 95 percent received a four- or five-star rating. As a result, NCAP’s ability to challenge auto manufacturers to continue improving vehicle safety has eroded. Also, with almost all scores being about the same, consumers do not have comparative safety information that differentiates significantly among vehicles. Lastly, NHTSA is upgrading the frontal and side tests under the Federal Motor Vehicle Safety Standards, which will make current NCAP testing less meaningful. 
For frontal tests, safety standards will require that for vehicles built after September 1, 2007, manufacturers must certify the crashworthiness of their vehicles at 35 mph (instead of the current 30 mph). This change will eliminate the speed difference between the frontal NCAP and the frontal belted safety standard tests. Because of this change, NHTSA has begun to examine alternatives to its current frontal crash test program and hopes to finalize any changes to the NCAP frontal test in 2006. Similarly, NHTSA announced in May 2004 that it is proposing to add a 20 mph side pole crash test to the Federal Motor Vehicle Safety Standards. This test will use a more technically advanced average-size male dummy than is currently used in the NCAP tests and a dummy that represents a small female. According to NHTSA officials, the new test and advanced dummy will enable them to confidently measure compliance with head injury standards and challenge automakers to provide adequate head protection to vehicle occupants in side impact crashes. However, neither this test nor the new dummies are currently part of NCAP. NHTSA officials said they plan to begin examining alternatives to the side crash test at the end of 2005. NHTSA could explore several opportunities to enhance NCAP and ensure its relevance. These opportunities include (1) addressing changes to the vehicle fleet, particularly as it relates to vehicle incompatibility and rollover; (2) developing approaches for NCAP to encourage improved safety from emerging technology that helps drivers avoid crashes; and (3) examining the various testing procedures and measures that are available and in use by other organizations and determining their applicability to NCAP. When pickups, minivans, and SUVs collide with smaller passenger cars, the mismatch of the vehicles’ weight, height, and geometry is considerable, as shown in figure 24. 
In terms of the weight differences, subcompact cars may weigh as little as 1,500 pounds while a large SUV may exceed 6,000 pounds. Because of the higher ground clearance of large pickups and SUVs, their bumpers may skip over the crash structures of passenger cars, raising the likelihood that an occupant of the car will be killed or seriously injured. A 2003 NHTSA study found that in frontal collisions involving a car and a light truck or van, there were almost four times as many fatalities in the car as in the light truck or van. The success of NCAP and the other testing programs may have indirectly contributed to this problem. According to some experts, to improve crashworthiness scores of large vehicles, vehicle manufacturers have increased the rigidity of the structure that absorbs and manages the substantial forces in the crash tests. As a result, the structure of large vehicles has had to become more substantial and stiffer than that of smaller vehicles because the larger vehicles must absorb more energy in the crash test due to their greater weight. NHTSA’s NCAP frontal tests could potentially be modified to measure and rate vehicle incompatibility. Some experts, NHTSA officials, and vehicle manufacturers told us that there are a number of approaches being investigated that could help to address vehicle incompatibility. For example, some researchers are examining the use of sensors in test walls; crashing a moving deformable barrier into the front of the test vehicles, instead of propelling the test vehicle into a solid wall; or crashing test vehicles into a solid wall at varying speeds, depending on the size of the vehicle, to equate the crash to hitting a standardized vehicle. The hypothesis is that information obtained by measuring how vehicles strike the crash test barrier could be used to estimate the relative damage that a vehicle would cause in collisions with another vehicle and could be used to rate the aggressiveness of vehicles. 
Using a moving barrier for frontal crash tests would make test results comparable across weight classes, as is the case with the current side-impact rating, because all vehicles would be struck by the same size barrier. Using variable speeds based on vehicle weight would also allow ratings of small and large vehicles to be compared. Each of these alternatives requires further development and testing to assess the overall safety implications, including the potential for reducing fatalities in passenger cars when struck by larger vehicles, the potential for diminished occupant protection for large vehicles in single vehicle crashes, and consideration of potential costs. Ratings based on these tests could provide manufacturers with incentives to address incompatibilities between large and small vehicles and provide consumers with information on the potential safety hazards associated with vehicle incompatibility. The problem of vehicle incompatibility is even worse in side crashes. When a large vehicle like an SUV crashes into the side of a small vehicle, the larger vehicle may miss the door sill of the vehicle, causing most of the energy to be directed to the door and window areas, as shown in figure 25. In such cases, the injuries can be exacerbated when there is no side head protection, leaving the window as the only barrier between the occupant’s head and the impacting vehicle. Head injuries are a major cause of fatalities in side collisions, particularly in crashes where a single vehicle strikes a tree or utility pole and in intersection crashes where smaller, lighter vehicles are hit in the side by larger, heavier vehicles. NHTSA has estimated that in serious side-impact crashes involving one or more fatalities in 2002, nearly 60 percent of those killed suffered brain injuries. There are also possibilities for modifying the NCAP side test to help address vehicle incompatibility. 
For example, NHTSA could examine the barrier that is being used to ensure that it best represents today’s vehicles. NHTSA’s current side-impact barrier is about the size and weight of a compact car. As a result, when this barrier hits the test vehicle, it will almost always hit the bottom sill of the door, which is designed to manage much of the crash energy. To address the disparity in height between passenger cars and SUVs, the Insurance Institute uses a side-impact test barrier that is larger and higher than NCAP’s barrier, as shown in figure 26. According to Insurance Institute officials, they designed this barrier to represent an SUV so their test could more accurately reflect the increased risk for occupants in smaller vehicles. They said that it has encouraged manufacturers to install side curtain air bags. Using this higher barrier has resulted in different scores than those of NHTSA’s NCAP. For example, the Insurance Institute has given 27 vehicles its lowest rating (Poor) in side-impact tests, while NHTSA, which uses a low barrier and does not include head measures in its star calculations, gave 21 of these 27 vehicles (77 percent) four- or five-star safety ratings. Click the following link to watch a video of an interior view of the side impact crash test with an SUV-like barrier conducted by the Insurance Institute for Highway Safety at 31 mph: http://www.gao.gov/media/video/d05370v9.mpg
Officials from a number of automobile makers told us that vehicle compatibility is an important safety issue, and they are working to enhance occupant protection in front and side crashes, outside of NHTSA safety standards or NCAP testing. Several automakers voluntarily entered into an agreement with the Insurance Institute to work collaboratively to have all of their vehicles meet new safety criteria that require large vehicles to match the height of the fronts of small vehicles by September 2009, as shown in figure 27. 
According to Alliance of Automobile Manufacturers members, better matching of structural components may enhance the ability to absorb crash forces, thereby reducing occupant fatalities by an estimated 16 to 28 percent. The agreement also specified that by September 1, 2007, at least 50 percent of these automakers’ vehicles offered in the United States will meet enhanced side-impact protection with features such as side air bags, air curtain bags, and revised side-impact structures. By September 2009 all vehicles of these manufacturers are to meet the new side criteria. In commenting on a draft of this report, NHTSA officials noted that in order for 50 percent of the vehicles to meet the voluntary side requirements by September 1, 2007, manufacturers can certify by using either the existing Federal Motor Vehicle Safety Standard pole test or the Insurance Institute’s side-impact test. They noted that in September 2009, the pole test will no longer be an option and that, therefore, it is very possible that large vehicles, such as pickups, minivans, and SUVs, would be able to pass the test without incorporation of enhanced side-impact features such as side air bags or curtains for the following reasons: (1) manufacturers may not need to subject large vehicles to the pole test by September 1, 2007, if 50 percent of their fleets are composed of smaller passenger cars; (2) larger vehicles will sustain a lower velocity change than smaller vehicles when struck by the Insurance Institute barrier; and (3) the higher ride height of large vehicles could keep the dummy’s head from striking the top of the Insurance Institute barrier. Given the changes in the vehicle fleet, fatalities due to rollover crashes have continued to increase. Rollovers are dangerous incidents and have a higher fatality rate than other kinds of crashes. 
Just over 2 percent of all police-reported crashes that occurred in 2003 were rollovers, but they accounted for over 10,000 highway fatalities, or more than 30 percent of all passenger vehicle occupant deaths. All types of vehicles can roll over. However, taller, narrower vehicles such as pickups, minivans, and SUVs have higher centers of gravity and thus are more susceptible to roll over if involved in a single-vehicle crash. NHTSA reported that 61 percent of fatalities in SUVs and 45 percent of fatalities in pickups in 2002 were the result of rollover crashes. NCAP’s rollover testing does not rate the chance of a potentially life-threatening injury should a rollover crash occur; it only measures the risk of rollover. Although NHTSA has not incorporated occupant protection in rollovers into NCAP, officials said they have been examining occupant protection in rollover crashes, focusing on reducing occupant ejection and increasing roof strength through regulation. According to NHTSA officials, the most deadly rollovers occur when unbelted occupants are completely ejected from the vehicle through doors, windows, and sun roofs and when the roof crushes into the occupant compartment, causing serious, if not deadly, head, neck, and spinal cord injuries. NHTSA has proposed changes to the Federal Motor Vehicle Safety Standards that would upgrade the door lock requirements to help prevent vehicle occupant ejection and increase roof strength. The agency is also considering other ways to prevent ejection, specifically looking at the potential of side curtain air bags to prevent ejection through vehicle windows. NHTSA’s NCAP rollover testing could be modified to better measure and rate the risks of serious injury associated with a rollover crash. NHTSA officials and others said that they have not been able to develop a repeatable crash test in which the vehicle rolls over and dummies would be used to measure injuries. 
However, in the absence of such a rollover crash test, NCAP could examine various aspects of the vehicle that are known to affect occupant safety in a rollover, such as rating the roof strength of vehicles. For example, officials from a consumer group told us that NHTSA could conduct dynamic tests on roof strength and pointed to a 2002 Society of Automotive Engineers paper attesting that such drop tests for roof strength are repeatable. They also said that there has been other promising research that would measure roof crush in dynamic tests. However, including such tests in NCAP would require further development and funding considerations. NCAP also has an opportunity to begin assessing new technology that could help prevent crashes. Vehicle manufacturers and others have been developing and testing new active safety systems that hold promise for reducing traffic fatalities by helping drivers avoid crashes altogether. These active safety systems include improving vehicle handling and braking in emergency situations, providing warning alerts for potential collisions or straying out of roadway lanes, and providing distance alerts when driving too close to another vehicle. A 2004 NHTSA study estimated that the incorporation of electronic stability control systems could reduce certain crashes by about 67 percent. Similarly, the Insurance Institute reported that electronic stability control can reduce the risk of involvement in single-vehicle crashes by more than 50 percent. Some experts suggested that NCAP might be used to encourage and speed the adoption of active safety systems into the vehicle fleet. Some elements of active safety systems are included in some current tests. While the rollover test is not designed to measure the effectiveness of electronic stability control systems, vehicles equipped with this technology would be expected to perform better in the rollover test because the vehicle would be less likely to tip up. 
In addition, brake tests are conducted as part of Japan’s NCAP, with the results provided as a separate safety rating. The Euro NCAP has also established committees to identify potential active safety systems to include in their program, as well as the testing protocols that would be used. While using NCAP to further test and rate active safety systems could encourage their adoption in the marketplace, there are challenges to overcome. According to NHTSA officials, NHTSA would first need to identify those active safety systems that could be effective in preventing crashes. They said this would be difficult because they would have to determine how well a system helps drivers avoid crashes. Also, determining the testing methodology would be challenging because the effectiveness of some active systems could be affected by factors such as driver behavior and the physical characteristics of the road, such as the dampness of the pavement. Officials from various automobile manufacturers told us that they are developing many new active safety systems with the objective of helping drivers avoid crashes. They pointed out that while NCAP could be used to encourage them to market such systems, they would have concerns regarding which systems to include in NHTSA’s program and how the system would be rated. In addition, they noted that because of competitive forces, active safety advances could be available sooner than NHTSA is capable of deciding to include them and developing an acceptable approach for testing and rating them. Officials from automakers said they are willing to share their research and work in cooperation with NHTSA to develop tests or measurements that could help NCAP address these issues. NHTSA could provide consumers with more safety information by using additional test measures and different crash dummies. All of the other organizations we contacted used more dummy measures to calculate vehicles’ safety ratings than U.S. NCAP used. 
To determine the star ratings, NHTSA uses head and chest readings from the frontal NCAP test and chest and lower spine readings for side-impact tests. Other organizations use measurements that included such areas as the head, neck, chest, leg, and foot for frontal test ratings and the head, neck, chest, pelvis, and leg for side test ratings. The concern with using few dummy readings is that the safety rating might not include important safety considerations. While NHTSA uses head and chest readings for frontal ratings and chest and lower spine readings for the side ratings, it measures other items during crash tests and may identify them as “Safety Concerns” on its Web site if they exceed certain values. We identified over 140 Safety Concerns on NHTSA’s Web site since vehicle model year 1990—36 of these were for vehicles that received four- or five-star ratings. The Safety Concerns included high femur readings in frontal crashes, which could mean there was a high likelihood of thigh injury; high head acceleration readings in side crashes, which could indicate a high likelihood of serious head trauma; and doors opening during side crash tests, which could increase the likelihood of occupant ejection. Having a Safety Concern noted for vehicles with a four- or five-star rating presents conflicting information that could be confusing to consumers. As NHTSA makes changes to its testing program, it has the opportunity to reexamine the size and type of dummies it uses in crashes in addition to the body areas of the dummies being measured. At present, NHTSA’s dummies equate to an average-size adult male who is about 5 feet 9 inches tall and weighs about 170 pounds. Most of the other organizations use this size dummy in their crash tests, and vehicle manufacturers work to maximize the safety systems for an occupant with these characteristics. 
However, not all vehicle occupants are the same size, and optimizing the restraint system for the average male would not necessarily be optimal for others who may be smaller, shorter, taller, or heavier. Also, children and the elderly may react differently to crash forces than the average-size male. Recognizing this, the Insurance Institute uses a smaller female dummy (about 5 feet tall and weighing about 110 pounds) in the driver and rear seats of the side-impact test. Insurance Institute officials said they made this change to encourage manufacturers to install side curtain air bags that would extend low enough to protect the heads of smaller passengers. In addition, in its proposed side-impact pole standards test, NHTSA specifies using a 50th percentile male and a 5th percentile female to address the issue of different size drivers and passengers. U.S. NCAP officials said that they are awaiting resolution of the proposed safety standard changes that would add a side pole test before deciding whether to alter the size or type of crash dummies they use. While altering the size of the dummies in the NCAP tests would generate additional information on which to base safety ratings, it could pose challenges for automobile manufacturers, who would have to conduct more internal tests. Officials from many vehicle manufacturers said they must already conduct hundreds of crash tests each year to ensure that they meet the variety of tests and dummies used in NHTSA’s standards, U.S. NCAP, and tests conducted by the other testing organizations. NCAP has the opportunity to enhance its program by changing the way it reports test results. Specifically, it could provide summary ratings, present information in a comparative manner, increase public awareness, and make results available earlier in the model year. 
According to some safety experts, NHTSA could improve its program by developing an overall safety rating rather than reporting four separate ratings for crash tests. Consumer Reports, The Car Book, the Insurance Institute, and all of the other NCAPs provide more summary information for consumers than NHTSA. Further, a 1996 National Academy of Sciences study that examined NCAP recommended that NHTSA provide an overall rating to provide consumers with an overview of a vehicle’s safety. However, the study also recommended that NHTSA make the detailed test results available for those consumers who wish to examine them more fully. NHTSA and Insurance Institute officials said they did not develop an overall crashworthiness rating because combining ratings is technically difficult and could obscure a low rating in one test area that would be revealed if test results were reported separately. Insurance Institute officials added that consumers can evaluate the different ratings to determine those that are most applicable to their situations. They said a person who is primarily the sole occupant of a vehicle might not be as concerned with the passenger safety rating as someone who routinely carries passengers. NHTSA officials said that they will continue investigating the feasibility of creating an overall safety rating for vehicles. However, they said that they would like to incorporate additional elements into such a rating. For example, they said that it is important to develop a rating that considers more than just the frontal and side-impact test results, such as the rollover results and vehicle compatibility, which can have a large bearing on the overall safety of vehicles. In their view, without the elements that address rollover and compatibility, consumers might get the wrong impression of the relative safety of vehicles. 
Officials said they have not yet developed a method to incorporate the rollover rating into an overall rating and have not identified measures to reflect vehicle compatibility, although they have long recognized compatibility as an issue. They could not estimate how long it would take to address the problem of adding the rollover rating to a combined rating but said they would pursue developing a summary safety rating for vehicles after they decide how to measure vehicle compatibility. Each testing organization uses a different presentation approach for reporting its test results, with some providing additional information to the public. The U.S. NCAP provides separate star ratings for the four dummy positions in the two crash tests and the rollover test. The only ratings the U.S. NCAP presents in a comparative manner are the rollover ratings, which compare vehicle performance within a class of vehicles, such as pickup trucks. In contrast, Australia’s and Japan’s NCAPs provide more comparative information by supplementing their star ratings with bar charts that show how well the vehicle performed in the tests, as shown in figures 28 and 29. The Australian publication shows that although two vehicles received three stars, one of them performed better than the other. The Japan NCAP rating shows that the vehicle received five stars for overall driver safety but that the passenger score was higher than that of the driver. Similarly, Consumer Reports provides summary safety ratings for accident avoidance and crash protection and uses a bar chart to present its overall safety score. Consumer Reports also lists vehicles in ranked order rather than alphabetically, provides comments to highlight particular aspects of each vehicle’s performance, and uses qualitative descriptions--Excellent, Very Good, Good, Fair, and Poor--to help inform its readers of safety results. 
Consumer Reports officials said that the overall rating provides an overview of the vehicle’s safety, and the two summary categories of accident avoidance and crash protection provide additional information that consumers may want. NHTSA recently began using a rating system for its rollover assessment that indicates, along with the star rating, the percentage likelihood that a vehicle may roll over. NHTSA’s rollover information thus provides a greater level of detail on vehicle performance than the information provided for the frontal and side collision tests. The rollover results are ranked according to performance and, as illustrated in figure 30, show how well each vehicle performed within the range of performance of its vehicle class, such as passenger cars, pickups, vans, and SUVs. NHTSA could look to other programs for innovative ways to garner more interest in crash test results. Like other testing organizations, NHTSA uses the Internet, brochures, and press releases to inform the public of NCAP ratings. However, other organizations use additional approaches to inform the public of their program and test results. For example, the Japan Automobile Federation creates public awareness of the program with a portable sled in which the general public can experience a simulated collision at 5 kilometers per hour and have a protective air bag deploy. The Euro NCAP also stages public displays of crash-tested vehicles, selecting locations where media and public interest would be high. Recent events were held in Wenceslas Square, Prague; Athens; and London. Figure 31 shows two events, one in London and another in Prague. There have also been proposals to increase public awareness of NCAP results by requiring their inclusion on new car stickers. For example, S. 
1072, a bill introduced in the 108th Congress to reauthorize funds for federal aid highways, highway safety programs, and transit programs, included a provision that would require automakers to include NCAP test results on new car stickers. Officials from consumer advocate groups told us that they support such an approach because consumers would have information available at the time of their purchase decisions. Officials from automakers said that there are a number of challenges that would need to be overcome if such an approach were taken, including scheduling tests to ensure that results are available in time for the information to be included on new car stickers. NHTSA could conduct vehicle tests earlier and release NCAP ratings sooner in the model year, which would make the results more useful for consumers. NCAP ratings are often released late in the model year, after many of the vehicles have already been purchased. In May 2003, long after the beginning of model year 2003, NHTSA released the results of some model year 2002 vehicle tests. NHTSA published its Buying a Safer Car brochure for 2004 in February 2004, about 6 months after the vehicles were available for sale and before all of the tests were completed for the 2004 models. To the extent that test results are available sooner, more car buyers could have safety information to help make their purchase decisions. For example, by the time NHTSA released the Buying a Safer Car brochure in February 2004, according to industry sales statistics, about 7.7 million, or over 46 percent, of new cars and trucks had been purchased in the United States. For model year 2005, NHTSA attempted to address the issue of getting timely information to consumers by publishing an early edition of its Buying a Safer Car brochure in December 2004. This publication included test results for some 2005 models. 
In addition, toward the end of 2004, NHTSA began posting the results to its Web site as soon as the quality control process was completed. NHTSA officials plan to publish an updated version in spring 2005, after additional testing has been completed. There are several factors that affect the timing of the testing and the release of NCAP ratings. First, NHTSA obtains vehicles for NCAP testing directly from the dealerships and leasing companies to ensure that each vehicle is representative of that make and model. Under this approach, testing cannot begin until after vehicles are available for purchase by the public—the model year begins in September for many companies. In addition, NHTSA does not receive its funding until after the fiscal year begins on October 1st of each year. Further, due to the number of vehicles to be included, vehicle testing is spread out over a period of months. As a result, testing can extend from October through April. Until recently, NHTSA did not make ratings available to the public as soon as the results were known but waited until all testing of a vehicle category was finished before issuing a press release announcing the test results. Beginning with model year 2005 tests, NHTSA began posting the test results to its Web site after the quality control process was complete. Press releases continue to be generated after each batch of tests is completed. NHTSA officials said that by releasing the results this way, consumers have comparative information on all vehicles of one type at the same time. One testing organization has addressed some of the timeliness issues. Euro NCAP obtains some vehicles directly from the manufacturers prior to distribution to dealerships. This enables them to begin testing before the vehicles are available to the public. In addition, the Euro NCAP divides its program into two testing and information releases each year—one in November and one in June—to speed the information to the public. 
While NHTSA’s New Car Assessment Program has contributed to making safer vehicles, it is at a crossroads where it will need to change to remain relevant. The usefulness of the current testing has been eroded by changes in the vehicle fleet that have occurred since the program began. The growing number of large pickups, minivans, and SUVs in the nation’s vehicle fleet is creating different safety risks, particularly with regard to the incompatibility of large and small vehicles and vehicle rollover, which NCAP does not fully address. In addition, the very success of the program has brought it to a point where it is not clear that the program’s goals can continue to be met. Because almost all vehicles today receive four- and five-star frontal and side-impact safety ratings, NCAP provides little incentive for manufacturers to further improve the safety of their vehicles and does not provide consumers with information that differentiates the safety of one vehicle compared to another. Further, the planned changes to the safety standards for frontal and side crashworthiness may make current NCAP tests less meaningful. While we believe there are opportunities to enhance NCAP by developing approaches to better measure the interaction of large and small vehicles and occupant protection in rollovers, rating technologies that help prevent crashes from occurring, and using different injury measures to rate the crash results, there are challenges that must be considered and addressed before changes can be implemented. However, without changing its testing, NCAP provides little incentive for manufacturers to improve vehicle safety. In addition, NHTSA will need to enhance the timeliness of testing and presentation of the New Car Assessment Program information. For example, by the time NHTSA finished its testing and published the test results for model year 2004 vehicles, about 7.7 million, or over 46 percent of new vehicles had already been purchased. 
To enhance the information available to consumers, NHTSA can provide summary ratings, present information in a comparative manner, increase public awareness, and conduct tests earlier in the car model year. Given the substantial numbers of traffic deaths and injuries suffered on the nation’s roads each year, efforts to improve vehicle safety seem warranted. We recommend that the Secretary of Transportation direct the Administrator, National Highway Traffic Safety Administration, to examine the future direction of the New Car Assessment Program to maximize its value in providing an incentive for manufacturers to improve vehicle safety and informing the public about the relative safety of vehicles. This examination should include identifying and evaluating NCAP tests that could help prevent fatalities on the nation’s roadways, which should include developing measures for rating vehicle incompatibility in front and side-impact tests and occupant protection in rollover crashes; developing approaches to incorporate ratings of active safety systems as a part of NCAP; and analyzing alternative testing methodologies and dummies to provide a robust and accurate measure of the likelihood of serious injuries to a wide range of vehicle occupants. In addition, we recommend that steps be taken to provide the public with improved NCAP safety information in a more timely manner. In doing so it may be necessary to examine how other organizations inform the public and develop summary ratings, whether vehicles could be obtained more efficiently for testing, how budgeted funds are managed during the year, and how efficiently NCAP times the crash tests conducted by its contractors. We provided a copy of the draft report to the Department of Transportation for its review and comment. 
In commenting on the report, the Senior Associate Administrator for Vehicle Safety commented that NHTSA was pleased that the report concluded that NCAP has been successful in encouraging manufacturers to make safer vehicles and providing vehicle safety information to consumers. While NHTSA generally agreed with the report findings, including recognition that there are opportunities to enhance NCAP, the official emphasized that NCAP was just one of the many interrelated methods, including Federal Motor Vehicle Safety Standards and traffic injury control programs, the agency uses to achieve its mission of saving lives, preventing injuries, and reducing vehicle-related crashes. The official said that NHTSA has been consistently working to address the challenges associated with enhancing this complex technical program while ensuring that the testing and results reported to consumers are accurate and reliable. The official explained that this requires NHTSA to ensure that any changes to NCAP, or for that matter to the Federal Motor Vehicle Safety Standards, are based on sound science and careful analysis of supporting data. The official cited a number of recent efforts that NHTSA said demonstrate the careful and systematic approach the agency uses when considering changes to the program. These include pilot studies with child restraint systems to determine the feasibility of incorporating them into NCAP, seeking public comments for revising frontal NCAP collision testing, and working to ensure that advanced safety technologies are publicized so that consumers can factor them into the vehicle purchase decision-making process. The NHTSA official also said that the agency recognizes that vehicle rollover and compatibility issues cause a significant portion of the fatal and serious motor vehicle occupant injuries on our nation’s highways, and NHTSA has made these areas two of its highest priorities. 
In June 2003, NHTSA published initiatives for public comment to address both of these areas. The NHTSA official said the agency is continuing its efforts to identify effective vehicle metrics and countermeasures to address these issues, since they are necessary in order for NCAP to provide meaningful consumer information that can be linked to safety improvements in the vehicle. We recognize that NCAP is one of a number of efforts that NHTSA uses in an attempt to reduce highway crashes, serious injuries, and fatalities. In addition, we support NHTSA’s view that changes to the NCAP program should be based on sound science and careful analysis of supporting data. We encourage NHTSA to take timely action to address the issues raised in this report. NCAP has helped make vehicles safer, but there are opportunities to improve the program and ultimately help save more lives. The risks associated with vehicle incompatibility and rollover and the potential benefits to be gained from active safety systems heighten the importance of addressing these issues as promptly as possible. In addition, analyzing alternative testing methodologies and dummies could lead to more robust and accurate measures of the likelihood of serious injury to a wide range of vehicle occupants. Lastly, NHTSA has the opportunity to improve the timeliness and presentation of the NCAP results, which could help consumers make informed decisions when they purchase cars. NHTSA also provided technical clarifications to our report, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees and the Secretary of Transportation. We will also make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. We are also making available a version of this report that includes video clips of some of the crash tests conducted by NHTSA and others. 
If you or your staffs have any questions regarding the contents of this report, please contact me at (202) 512-2834 or [email protected]. Individuals making key contributions to this report are listed in appendix IX. To determine how NHTSA’s New Car Assessment Program tests vehicles, rates their safety, and reports the results to the public, we reviewed Federal Motor Vehicle Safety Standards (CFR Title 49: Chapter V, Part 571); the Motor Vehicle Information and Cost Savings Act of 1972 (Public Law 92-513); the Transportation, Recall Enhancement, Accountability and Documentation (TREAD) Act; and other documents pertaining to NCAP regulations. We also searched NHTSA’s docket and NCAP documentation. In addition, we conducted interviews with NHTSA officials responsible for operating the Federal Motor Vehicle Safety Standards regulatory program and the New Car Assessment Program. We visited and interviewed officials from the Federal Highway Administration and the National Crash Analysis Center. During visits to all five of the contractors that perform regulatory and NCAP crash tests—including Karco Engineering, LLC, in Adelanto, California; MGA Research Corporation in Burlington, Wisconsin; Medical College of Wisconsin in Milwaukee, Wisconsin; General Dynamics Advanced Information Systems in Buffalo, New York; and the Transportation Research Center, Inc., in East Liberty, Ohio—we interviewed officials and engineers performing tests and observed various crash tests. We documented how test data were obtained, how results were recorded, and how the results were converted into star ratings. We determined that NCAP data were sufficiently reliable for the purpose of this report. In addition, we reviewed literature pertaining to vehicle safety issues and documents published by the Transportation Research Board. 
To compare NHTSA’s New Car Assessment Program with other programs that test vehicles and report vehicle safety results to the public, we researched literature and interviewed NHTSA officials to identify three foreign New Car Assessment Programs (in Australia, Europe, and Japan) and the Insurance Institute for Highway Safety as a domestic program. We also identified publishers of Consumer Reports and The Car Book as organizations that used NHTSA’s NCAP data to derive their own vehicle safety ratings. We identified a program in Korea but did not include this program in our review because it began operating in 1999 and had not tested a significant number of vehicles. We obtained information on these programs by reviewing their literature and their Web sites. We also interviewed officials and visited the test facilities of the Insurance Institute and the NCAPs in Australia, Europe, and Japan. We visited the Insurance Institute for Highway Safety’s Vehicle Research Center and observed a crash test. We also examined international crash test and rating programs, including the Australia, Euro, and Japan NCAPs. For Australia’s NCAP, we visited Australia and conducted interviews with government officials associated with the respective New Car Assessment Program and vehicle safety policy. For Euro NCAP, we visited Belgium, Germany, Sweden, and the United Kingdom, where we conducted interviews with the European Commission, and the government officials associated with the respective New Car Assessment Programs and vehicle safety policies in Germany, Sweden, and the United Kingdom. For Japan’s NCAP, we visited Japan and interviewed government officials associated with the respective New Car Assessment Program and vehicle safety policy. While in these countries, we also interviewed auto associations, consumer advocacy groups, and vehicle safety experts. 
We identified and selected these auto associations, consumer advocacy groups, and vehicle safety experts by reviewing studies and conference papers, talking to program officials and other experts, and reviewing materials on Web sites. We interviewed auto manufacturers in these countries, including BMW, Honda, Mercedes, Nissan, Toyota, and Volvo. We reviewed New Car Assessment Program regulations, testing protocols, and program documentation. See table 1 for a list of domestic and international organizations contacted. To determine whether opportunities exist for NCAP to enhance its vehicle safety testing and reporting, we obtained views from experts in vehicle safety and the auto and insurance industries. In selecting vehicle safety experts, we examined studies and conference papers, considered referrals from other experts, and consulted the National Academy of Sciences. We interviewed officials of the Association for the Advancement of Automotive Medicine and Applied Research Associates. We visited and interviewed automobile manufacturers in the United States, including General Motors, Ford Motor Company, DaimlerChrysler, and American Honda Motor Company. We interviewed trade associations, including the Alliance of Automobile Manufacturers and the Association of International Automobile Manufacturers. We interviewed consumer advocacy groups, including Consumers Union, Public Citizen, the AAA Foundation for Traffic Safety, Advocates for Highway and Auto Safety, and the National Safety Council. We reviewed relevant research on consumer information regarding vehicle safety from the Transportation Research Board. We conducted our work from March 2004 through April 2005 in accordance with generally accepted government auditing standards. To rate a vehicle’s crashworthiness, NHTSA combines information about (1) the forces that would injure a human during a crash and (2) the effects of those forces on areas of the human body. 
The forces that would injure a human during a crash are measured by anthropomorphic test devices, commonly referred to as crash test dummies, which serve as proxies for human vehicle occupants. These dummies are fitted with accelerometers and load sensors that measure the forces of impact on particular areas of the body, as shown in figure 32. Because current dummy technology cannot replicate a human’s biological matter or physiology, dummies cannot exhibit injuries following a crash as a human would. Therefore, researchers have estimated the effects of the forces on particular areas of the human body, as measured by the dummies, by applying varying forces to biological specimens and by using a scale developed by the Association for the Advancement of Automotive Medicine (AAAM). This scale, the Abbreviated Injury Scale (AIS), ranks injuries, from minor through currently untreatable, for particular areas of the body and assigns a number from 1 through 6 to each rank, as shown in table 2. The AIS provides a simple numerical method for ranking and comparing injuries by severity. In NCAP, injury probability values are derived from measurements of dummy responses; the dummies’ physical characteristics (e.g., size, shape, mass, stiffness, and energy dissipation) are designed to produce responses (e.g., acceleration, velocity, or articulation) that simulate those of a human. These dummy responses are correlated with both experimental biomechanical research and real-world crash injury investigations. Researchers have used a statistical procedure to relate the levels of injury to the forces that caused them. This procedure produces theoretical injury curves, which NHTSA uses as the basis for safety ratings. NHTSA develops crashworthiness ratings, expressed in stars, for both frontal and side crashes. To develop the NCAP ratings for frontal crashes, NHTSA measures forces to the head and chest. 
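The AIS ranking described above can be represented as a simple lookup. A minimal sketch: the labels for levels 4 through 6 follow the report's wording ("severe, critical, or currently untreatable"); the labels for levels 1 through 3 are the standard AIS descriptors and are our assumption, since the report's table 2 is not reproduced here.

```python
# Abbreviated Injury Scale severity levels, 1 (least severe) through 6.
# Levels 1-3 labels are the standard AIS terms (our assumption for the
# report's table 2); levels 4-6 follow the report's wording.
AIS_LEVELS = {
    1: "minor",
    2: "moderate",
    3: "serious",
    4: "severe",
    5: "critical",
    6: "currently untreatable",
}


def counts_toward_ncap_rating(ais_level: int) -> bool:
    """NCAP's frontal and side injury risk curves are fixed at AIS level
    4 or greater, so only severe, critical, or currently untreatable
    injuries enter the star-rating probabilities."""
    return ais_level >= 4
```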
Specifically, the injury criteria for the frontal star rating are the head, as measured by a composite of acceleration values known as the Head Injury Criterion (HIC), and the chest, as measured by a chest deceleration value known as chest Gs. Each of these two measures has its own injury risk curve that has been fixed at AIS level 4 or greater—that is, a severe, critical, or currently untreatable injury, as shown in figures 33 and 34. Using the mathematical functions that describe each of these injury risk curves, NHTSA transforms the HIC and chest G measures from the frontal NCAP test into probabilities of head and chest injuries of AIS level 4 or greater. The lower the HIC and chest G measures, the less risk of receiving a severe, critical, or currently untreatable injury to the head and chest in a full frontal crash. To convert the probability of severe injury for particular HIC and chest G scores into a star rating for the frontal NCAP test, NHTSA adds the probabilities of severe injury to the head and chest and then subtracts their product, as shown in figure 35. NHTSA concluded that a combined effect of injury to the head and chest should be used since it is well documented that an individual who suffers multiple injuries has a higher risk of death. NHTSA calculates the probability of severe injury to the head and chest for both the driver and the front passenger dummies in the frontal NCAP test.

Prob(Combined) = Prob(HIC) + Prob(Chest) - (Prob(HIC) * Prob(Chest))

To develop the NCAP ratings for side crashes, NHTSA measures forces to the ribs and lower spine. Specifically, the injury criteria for the side star rating are the greater acceleration of the upper or lower ribs and the acceleration of the lower spine. NHTSA averages these accelerations to generate a measurement known as the Thoracic Trauma Index (TTI). The TTI also has an injury curve that has been fixed at the AIS level of 4 or greater, as shown in figure 36. 
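The two computations described above, the combined head and chest injury probability for the frontal rating and the Thoracic Trauma Index for the side rating, can be sketched in Python. This is a minimal sketch: the function names are ours, and the injury risk curves that convert HIC, chest G, and TTI values into probabilities are omitted, so the probabilities and accelerations are taken as inputs.

```python
def combined_injury_probability(p_head: float, p_chest: float) -> float:
    """Combined probability of severe injury (AIS 4 or greater) used for
    the frontal NCAP rating: the chance of at least one of the two
    injuries occurring, mirroring NHTSA's formula
    Prob(Combined) = Prob(HIC) + Prob(Chest) - Prob(HIC) * Prob(Chest).
    """
    return p_head + p_chest - p_head * p_chest


def thoracic_trauma_index(upper_rib_g: float, lower_rib_g: float,
                          lower_spine_g: float) -> float:
    """TTI for the side NCAP rating: the average of the greater of the
    upper or lower rib acceleration and the lower spine acceleration
    (all accelerations in Gs)."""
    return (max(upper_rib_g, lower_rib_g) + lower_spine_g) / 2.0
```

For example, head and chest injury probabilities of 0.06 and 0.05 combine to 0.107 rather than 0.11, reflecting the overlap when both injuries occur; rib acceleration peaks of 85 and 90 Gs with a lower spine peak of 80 Gs yield a TTI of 85.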
The lower the TTI measure, the lower the risk of receiving a severe, critical, or currently untreatable injury to the thorax and upper abdomen in a side crash. The Thoracic Trauma Index score and its associated probability of receiving an AIS level 4 or greater injury are the sole basis for the side NCAP star rating. NHTSA calculates the probability of severe injury to the thorax and upper abdomen for both the front and rear dummies on the driver’s side. Using the probability of injury calculated from the frontal and side NCAP tests, NHTSA assigns a vehicle a rating of one (the worst) to five (the best) stars for each of the dummy occupants in each of the crashworthiness tests. The star ratings for the frontal and side tests correspond to the percentage chance of serious injury for each of these tests. The numerical boundaries between each star rating are determined by NHTSA. The frontal NCAP star boundaries are roughly twice as large as the side NCAP star boundaries because NHTSA uses a combined probability of injury to generate star ratings for the frontal NCAP test and uses only one probability of injury to generate star ratings for the side NCAP test. In addition, the forces and associated probabilities at the boundary between two and three stars for both the frontal and side NCAP tests are roughly equal to the relevant force thresholds for compliance with two Federal Motor Vehicle Safety Standards—numbers 208 and 214, respectively. To indicate the likelihood of a vehicle’s rolling over in a single-vehicle crash, NHTSA combines a measure of the vehicle’s top-heaviness, called the Static Stability Factor (SSF), with the results of a dynamic rollover test to produce a star rollover rating. 
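The conversion from injury probability to stars described above amounts to a threshold lookup. The cut points below are illustrative assumptions on our part (NHTSA determines the actual numerical boundaries); they are chosen only to show the roughly two-to-one relationship between the frontal and side boundaries.

```python
# Illustrative star boundaries: (upper bound on probability of severe
# injury, star rating). These cut points are assumptions for
# illustration only; NHTSA sets the actual boundaries. The frontal
# bounds are roughly twice the side bounds because the frontal rating
# uses a combined head-and-chest probability while the side rating
# uses a single probability.
FRONTAL_BOUNDS = [(0.10, 5), (0.20, 4), (0.35, 3), (0.45, 2)]
SIDE_BOUNDS = [(0.05, 5), (0.10, 4), (0.20, 3), (0.25, 2)]


def stars(probability: float, bounds) -> int:
    """Map a probability of severe injury to a one- to five-star rating."""
    for upper, rating in bounds:
        if probability <= upper:
            return rating
    return 1  # worst rating when the probability exceeds every boundary
```

Under these illustrative boundaries, the same 15 percent probability of severe injury would rate four stars in the frontal test but three stars in the side test.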
The SSF is an indicator for the most frequent type of rollover, called a “tripped rollover,” which occurs when a vehicle leaves the roadway and its wheels are tripped by a curb, soft shoulder, or other roadway object, causing the vehicle to roll over. About 95 percent of rollovers are tripped. Because the SSF is an indicator of the most frequent type of rollover, it plays a significantly larger role in a vehicle’s star rating than do the results of the dynamic rollover test. The dynamic rollover test determines how susceptible a vehicle is to an on-road “untripped” rollover—a type that accounts for less than 5 percent of rollovers. Because untripped rollovers are so infrequent, the rollover test does not affect the vehicle’s star rating significantly, resulting in a difference of no more than half a star in the rating. The SSF is a calculation of a vehicle’s top-heaviness, defined as one-half of the vehicle’s track width divided by the height of the center of gravity (c.g.). A higher SSF value equates to a more stable, less top-heavy vehicle. SSF values across all vehicle types range from around 1.0 to 1.5. Most passenger cars have values in the 1.3 to 1.5 range, as shown in figure 37. Higher riding SUVs, pickups, and vans usually have values in the 1.0 to 1.3 range, also shown in figure 37. Many of the higher riding vehicles of previous model years are being redesigned to ride lower on a wider track to improve their rollover resistance and obtain a higher SSF rating. After determining the SSF, NHTSA selects certain vehicles for the dynamic rollover test. Not all passenger cars selected for NCAP testing undergo the dynamic test. Thus far, for most passenger cars, NHTSA has imputed or assigned a no-tip result for the dynamic test based on the testing of other passenger cars that are more top heavy (according to the SSF score) but did not tip up during the dynamic test. NHTSA periodically tests passenger cars to validate the imputed results. 
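The SSF definition above translates directly into code. A minimal sketch, with variable names ours:

```python
def static_stability_factor(track_width: float, cg_height: float) -> float:
    """SSF: one-half the vehicle's track width divided by the height of
    its center of gravity (both in the same length units). Higher values
    indicate a less top-heavy vehicle; passenger cars typically score
    about 1.3 to 1.5, while higher riding SUVs, pickups, and vans
    usually score about 1.0 to 1.3."""
    return (track_width / 2.0) / cg_height
```

For example, with hypothetical dimensions, a car with a 60-inch track width and a 21-inch center-of-gravity height has an SSF of about 1.43, while an SUV with a 62-inch track and a 27-inch center-of-gravity height has an SSF of about 1.15, which is why redesigning a vehicle to ride lower on a wider track raises its SSF.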
In the dynamic rollover test, a driver sits in the vehicle and conducts the test by applying the accelerator and initiating commands for the programmable steering controller, which actually maneuvers the vehicle, as shown in figure 38. The general steering parameters are 270 degrees (about a three-quarters turn) for the initial turn and 540 degrees (about one and one-half turns) for the correction turn, as shown in figure 39. Outriggers are attached to the vehicle to prevent the vehicle from tipping all the way over. The result of the dynamic rollover test is either “tip-up” or “no tip-up.” To receive a “no tip-up” result, a vehicle must reach a speed of 50 miles per hour (mph) on four dynamic test runs—two from left to right and two from right to left—without the inside wheels on either side of the vehicle simultaneously lifting at least 2 inches off the surface, and it must do this at two different steering wheel angles. Sensors are used to detect wheel-lift, as shown in figure 40. For the first run of each test, the speed is 35 mph, and subsequent runs are conducted at about 40 mph, 45 mph, 47.5 mph, and 50 mph, until the vehicle tips up or attains an entrance speed of 50 mph on the last run of each test without tipping up. The same series of tests is repeated at a different steering wheel angle. NHTSA first began to rate vehicles’ rollover avoidance in model year 2001, using the SSF alone to determine the star rating. At that time, NHTSA used a statistical procedure to determine how the SSF affects the risk of rollover. Physics theory would suggest that vehicles with a low SSF—vehicles that are more top-heavy—are more likely to roll over than vehicles with a high SSF. NHTSA’s empirical model confirmed this theory, showing that the lower the SSF, the more likely a vehicle is to roll over in a single-vehicle crash. For the first 3 years that NHTSA rated rollover risk, it used a linear model that examined accident report data at the state level. 
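The escalating run sequence described above can be sketched as a simple loop. The `tips_up_at` argument is a hypothetical stand-in for the physical determination, in which sensors check for simultaneous 2-inch lift of both inside wheels.

```python
# Entrance speeds for successive runs, as described in the text.
RUN_SPEEDS_MPH = [35, 40, 45, 47.5, 50]

def run_series(tips_up_at):
    """Escalate entrance speeds until the vehicle tips up or completes
    the final 50 mph run without wheel lift."""
    for speed in RUN_SPEEDS_MPH:
        if tips_up_at(speed):
            return ("tip-up", speed)
    return ("no tip-up", RUN_SPEEDS_MPH[-1])

# A hypothetical vehicle that would lift its wheels at 47.5 mph:
print(run_series(lambda mph: mph >= 47.5))  # ('tip-up', 47.5)
# A vehicle completing all runs without wheel lift:
print(run_series(lambda mph: False))        # ('no tip-up', 50)
```

In the actual test, this series is run in both directions and repeated at a second steering wheel angle; the sketch covers a single series only.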
Following the passage of the TREAD Act, which required NHTSA to include a dynamic rollover test in NCAP, and the publication of a National Academy of Sciences report, which recommended that NHTSA use a nonlinear model to predict rollover risk, NHTSA altered its method of calculating rollover risk. NHTSA now links the SSF and the risk of rollover using a nonlinear model. In addition, NHTSA includes the results of the dynamic test—that is, whether a vehicle tips or not—in this new model, as shown in figure 41. A vehicle’s rollover rating is an estimate of its risk of rolling over in a single-vehicle crash, not a prediction of the likelihood of a rollover crash. The Insurance Institute for Highway Safety is a nonprofit research and communications organization funded by the U.S. auto insurance industry. The Insurance Institute has been conducting vehicle safety research since 1969, and in 1992 it opened the Vehicle Research Center to conduct vehicle crash tests. The Insurance Institute began crash testing and rating vehicles for frontal collisions in 1995 and for side collisions in 2003. The center conducts the Insurance Institute’s vehicle-related research, which includes controlled tests of vehicles and their components using instrumented crash tests, as well as studies of real collisions. Insurance Institute officials told us that scrutinizing the outcomes of both controlled tests and on-the-road crashes gives researchers—and ultimately the public—a better idea of how and why vehicle occupants are injured in crashes. This research, in turn, leads to vehicle designs that reduce injuries. The Insurance Institute buys the vehicles for crash tests directly from dealers. It also chooses vehicles for testing to represent both a range of manufacturers and the largest portions of new car sales, in an effort to cover as much of the marketplace as possible. The Insurance Institute tests vehicles in categories, such as small cars, minivans, and midsize SUVs. 
The Insurance Institute conducts two types of crash tests—an offset frontal test and a perpendicular side test. The offset frontal test is conducted at about 40 mph to simulate a typical head-on collision of two vehicles. The offset frontal test evaluates the potential for injuries caused to occupants by intrusion into the occupant compartment. The Insurance Institute uses a frontal impact dummy, called the 50th percentile Hybrid III dummy, in its frontal crash tests. This dummy represents a man of average size, 5 feet 9 inches tall and weighing about 170 pounds. Such dummies were designed to measure the risk of injury to the head, neck, chest, and lower extremities in a frontal crash. The Insurance Institute’s perpendicular side test measures the impact of a moving deformable barrier striking the driver’s side of a passenger vehicle at 31 mph. The barrier weighs 3,300 pounds and has a front end shaped to simulate the typical front end of a pickup truck or SUV. Two instrumented 5th percentile side-impact dummies (SID-IIs), representing small females or 12-year-old adolescents who are 5 feet tall and weigh about 110 pounds, are positioned in the driver’s seat and in the rear seat behind the driver to measure the impact of the vehicle crash. The SID-IIs dummies were designed to measure acceleration of the spine and ribs plus compression of the rib cage in a side crash. They are also equipped with unique load cells, which measure the force of the impact applied to the dummies during the crash. 
To evaluate a vehicle’s performance in the frontal crash test and develop an overall rating for the frontal test, the Insurance Institute uses three types of measures: (1) structural performance, the amount and pattern of intrusion into the occupant compartment during the offset test; (2) injuries measured by a Hybrid III dummy positioned in the driver’s seat; and (3) dummy kinematics, or the dummy’s movements during the test, as determined through an analysis of a slow-motion film. The structural performance assessment indicates how well the front-end crush zone managed the crash energy and how well the safety cage limited intrusion into the driver space. Figure 42 shows the intrusion levels on which a vehicle’s structural performance is rated. Injury measures are used to determine the likelihood of injury to various regions of the driver’s body. The measures recorded from the head, neck, chest, legs, and feet of the dummy indicate the level of stress and strain on that part of the body. Thus, greater numbers mean larger stresses and strains and a greater risk of injury. Because undesirable dummy kinematics, such as partial ejection from the occupant compartment through a window, can pose a significant risk of injury even in the absence of high injury measures, a slow-motion film is recorded during the crash test. An analysis of this slow-motion film helps evaluate how the restraint system’s components—including the safety belts, air bags, steering column, head restraints, and other components—interact to control the dummy’s movement. A vehicle’s overall frontal rating depends on the effectiveness of its structure, or safety cage, in protecting the occupant compartment, the risk of injury measured for an average-size male, and the effectiveness of the restraint system in controlling occupants’ movements. 
The structural performance and injury assessments are the major components of each vehicle's overall frontal rating; the dummy kinematics (movement) contributes less to the rating. A vehicle’s side crash test performance and overall rating are based on (1) the injury measures recorded on the two instrumented SID-IIs dummies positioned in the driver’s seat and in the rear seat behind the driver, (2) an assessment of head-protection countermeasures, and (3) the vehicle's structural performance during the impact. The injury measures are used to determine the likelihood that the driver, the passenger, or both would have sustained serious injury to various body regions. Measures are recorded from the head, neck, chest, abdomen, pelvis, and leg. These injury measures, especially from the head and neck and from the torso (chest and abdomen), are the major components of the vehicle's overall rating. To supplement head injury measures, the movements and contacts of the dummies' heads during the crash are evaluated. High head injury measures typically are recorded when the moving deformable barrier hits a dummy's head during impact. Moreover, a “near miss” or a grazing contact also indicates a potential for serious injury in a real-world crash because small differences in an occupant’s height or seating position, compared with a dummy’s, could result in a hard contact and high risk of serious head injury. The vehicle’s structural performance is based on measurements of intrusion into the occupant compartment around the B-pillar (between the doors). This assessment indicates how well the vehicle's side structure resisted intrusion into the driver’s and rear-seat passenger space. Some intrusion into the occupant compartment is inevitable in serious side crashes. 
The overall side rating depends on the risk of injury measured for small female occupants, mainly to the head, neck, and torso (chest and abdomen); the effectiveness of the occupant compartment in protecting the head; and the vehicle’s structural performance during the impact. The overall side rating for any body region, based on the injury measures recorded on the two SID-IIs dummies, is the lowest rating scored for any injury within that region. The Insurance Institute’s rating system provides qualitative ratings of Good, Acceptable, Marginal, and Poor. The Insurance Institute provides one rating for the frontal test and one rating for the side test. Vehicle rating information is available on the Insurance Institute’s Web site, through press releases, and through television coverage. Figure 43 shows how the Insurance Institute communicated its ratings to consumers on the Internet. In addition to the ratings for frontal and side crashes, the Insurance Institute provided the results of various tests, such as those of the vehicle’s structural performance and of injuries to various body regions. Figure 44 shows how the Insurance Institute presented its ratings to consumers in its Status Report. The print version is available only to subscribers, and some of the publications can be downloaded from the Insurance Institute’s Web site. News magazine television shows, such as Dateline NBC, periodically use Insurance Institute crash test results and interviews with its representatives, including the president or chief operating officer, in report segments for their programs. The Australian New Car Assessment Program (NCAP) provides information for consumers on the safety performance of new vehicles sold in Australia and New Zealand. The main purposes of the program are to provide new vehicle buyers with independent advice on vehicle occupant protection and to develop strategies for vehicle manufacturers to increase the level of passive safety in their vehicles. 
The program is funded by a consortium of the state government transport departments of New South Wales, Queensland, Victoria, South Australia, Tasmania, and Western Australia; automobile clubs through the Australian Automobile Association and New Zealand Automobile Association; the Land Transport Safety Authority of New Zealand; and the FIA Foundation for the Automobile and Society. The Australia Commonwealth Department of Transport and Regional Services has established minimum safety standards for vehicles sold in Australia and has conducted joint research projects with NCAP but has not contributed to the support of the crash test program. The Australia NCAP buys the vehicles that it crash tests directly from dealers, as would any consumer. The program selects vehicles on the basis of (1) actual or projected sales, to target vehicles that are most popular; (2) vehicle model, to account for standard or deluxe models, which may contain more expensive passive safety features such as air bags and advanced restraint systems; (3) new and popular body designs, to select the body design that is most popular or to allow for direct comparisons across different makes and models; (4) market segment, to target individual segments of the market to allow comparisons of results; and (5) vehicle price. Using these selection criteria, the Australia NCAP covers more than 70 percent of the new vehicle fleet by volume. The program also uses European NCAP (Euro NCAP) crash test results. However, the Euro NCAP results are intended to be used as a guide only, because the structure and equipment of the European specification model may differ materially from the model of the same name sold in Australia or New Zealand. The Australia NCAP tests and reports on vehicles in seven categories—small, medium, and large passenger cars; luxury cars; four-wheel drive vehicles (SUVs); multipurpose utility vehicles (small trucks); and sports cars. The Australia NCAP’s testing has evolved over time. 
Established in 1992, the Australia NCAP was originally modeled on the U.S. program and began rating vehicles in 1993. Initially, it conducted only a full frontal crash test, but it added an offset frontal test in 1994. In 1999, the Australia NCAP harmonized its tests and assessment procedures with the Euro NCAP through a memorandum of understanding. By harmonizing, it discontinued the full frontal crash test and began conducting the perpendicular side-impact test and pedestrian test. Australia NCAP officials have been considering eliminating the perpendicular side-impact test in favor of a pole test that they believe will more accurately test vehicles of all sizes for occupant protection. In 2004, the Australia NCAP performed three crash tests and a pedestrian protection test. The three crash tests include the 40 percent offset frontal, the perpendicular side-impact, and the side-impact pole tests. The offset frontal test involves towing a test vehicle at 40 mph (64 km/h) and crashing it into an offset deformable aluminum barrier. The deformable barrier has a crushable aluminum honeycomb face attached to a solid barrier. The deformable structure resembles the front-end characteristics of another vehicle. Two instrumented 50th percentile Hybrid III dummies (weighing about 194 pounds each) are used to collect data during the crash and are placed in the front driver’s and front passenger seats. Two child dummies, representing a 3-year-old and a 1-1/2-year-old child, are placed in the rear seats in appropriate restraints. While Australia NCAP does not use the measurements from the child dummies in its crash test rating, the dummies are included in the tests to maintain alignment with Euro NCAP testing. The perpendicular side-impact test involves towing a barrier with a deformable face at about 31 mph (50 km/h) and crashing it into a stationary test vehicle at a 90 degree angle centered on the driver’s seating position. 
The moving deformable barrier has a mass of 2,095 pounds (950 kg) compared with 3,015 pounds (1,367 kg) for the U.S. barrier. One instrumented 50th percentile EuroSID-II dummy (weighing about 176 pounds) is used to collect data during the crash and is placed in the front driver seat. As in the frontal test, to maintain alignment with Euro NCAP’s testing, the two child dummies are placed in the rear seats in appropriate restraints. The pole side-impact test involves propelling a vehicle placed on a platform at 18 mph (29 km/h) into a cylindrical pole. The pole has a diameter of about 10 inches, or about 254 millimeters (mm), and its vertical axis is aligned with the front seat dummy’s head. One instrumented 50th percentile EuroSID-II dummy is used to collect data during the crash and is placed in the front driver’s seat. The pedestrian protection test evaluates the interaction of dummy parts and the bumper, hood, and windshield area of a vehicle. Adult and child-size dummy parts are propelled at specified areas of the hood and front bumper of a vehicle to simulate a 25 mph (40 km/h) car-to-pedestrian collision. The test simulates the impact of a lower leg against a bumper, a thigh against the lower edge of the hood, and an adult and a child head against the upper portion of the hood. Frontal tests in the Australia NCAP are scored on the basis of three types of observations: dummy measurements, a vehicle’s structural performance, and a post-crash inspection of the vehicle. The injury measurements are recorded from two Hybrid III dummies positioned in the front driver’s seat and front passenger seat. The injury assessment evaluates four body regions: (1) head and neck; (2) chest; (3) knee, femur, and pelvis; and (4) legs and feet. Structural performance is based on measurements indicating the amount and pattern of intrusion into the occupant compartment during the test. Dummy injury measurements and vehicle deformation can be compared with predicted values. 
Evidence of structural collapse can be determined by a post-crash inspection and by viewing a high-speed video recorded from various angles during the crash test. The post-crash inspection and video allow trained inspectors to assess dummy kinematics, evaluate the evidence of interior contacts, and inspect safety belts, seats, and air bags to ensure they operated as intended. For example, according to Australia NCAP officials, air bag performance could be compromised by the dynamics of a crash in ways that might not be evident from a post-crash inspection but could be revealed through careful analysis of the video. Each body region receives a score based on the dummy measurements, the vehicle deformation data, and the findings of the post-crash inspection, which are applied as modifiers. For example, excessive rearward movement of the steering wheel could lower the head score by a point to reflect identified risks. Other modifiers include lack of air bag stability, steering column movement, A-pillar movement, structural integrity, hazardous structures in the knee impact area, and brake pedal movement. For the side-impact and pole tests, the scores are based on injury measurements recorded on one EuroSID-II dummy positioned in the front driver’s seat. The injury assessment evaluates four body regions: the head, ribs, abdomen, and pelvis. A post-crash inspection and high-speed video are also used to evaluate structural collapse. A summary star rating shows the protection level indicated by the front and side-impact tests together. The summary score for the two tests is based on the point scores achieved in each test. Sixteen points can be achieved in the frontal test and 18 points in the side tests, for a maximum of 34 points. Two of the 18 points available in the side test come from the optional pole test, which assesses only one body region—the head. Each of the four body regions in the frontal test could receive a maximum score of 4 points, for a cumulative score of 16 points. 
Similarly, the four body regions in the side-impact test could receive a maximum score of 4 points, for a cumulative score of 16 points. If a vehicle has head-protecting side air bags, the manufacturer of the vehicle has the option of accepting a side-impact pole test, through which 2 bonus points can be earned. The offset and side-impact scores are added together to produce an overall score with a maximum of 32 points. In addition, if a pole side test is conducted and shows good head protection, then 2 extra points can be earned, and up to 3 more points can be earned for having a safety belt reminder system. The points are translated into stars, as shown in table 3. If the injury score for the head, chest, abdomen, or pelvis is 0, then there is a high risk of a life-threatening injury. A warning note is added to the overall rating to highlight concern that there is a serious risk of injury in at least one vulnerable body region. The regions are the head or chest for the frontal impact test and the head, chest, abdomen, or pelvis for the side-impact test. For the pedestrian test, the scores are based on adult and child-size dummy parts (head and lower limbs) used to assess the severity of impact. The two different size dummy heads are tested at six areas of the hood, and the lower limbs for an adult and child are tested at three areas, for a total of 18 impacts tested for each vehicle. Based on the injury measurements recorded from the dummy parts, each impact can receive up to 2 points, and the maximum number of points that can be received is 36, as shown in table 4. A separate rating of one to four stars shows the level of pedestrian protection. The score reflects the results of the 18 impacts of the dummy parts against the specified areas of the bumper and hood. These results are summed to provide an overall score. The pedestrian protection star rating for a vehicle is based on the number of points received, or a maximum of 36 points. 
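The pedestrian scoring arithmetic described above (18 impacts, each worth up to 2 points, for a maximum of 36) can be sketched as follows. The individual impact results are hypothetical, and the star cut points from table 4 are not reproduced here, so the sketch stops at the point total.

```python
def pedestrian_score(impact_points):
    """Sum the 18 impact results; each impact is worth 0-2 points."""
    assert len(impact_points) == 18, "one result per impact location"
    assert all(0 <= p <= 2 for p in impact_points)
    return sum(impact_points)

# Hypothetical mix of outcomes across the 18 head and lower-limb impacts.
results = [2] * 6 + [1] * 8 + [0] * 4
print(pedestrian_score(results))  # 20 of a possible 36
```

A perfect score would require every one of the 18 head and limb impacts to earn the full 2 points.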
The points are translated into stars, as shown in table 5. The Australia NCAP’s reporting of results to the public has evolved over time. Initially, the program reported the raw test results for the head, chest, and legs. The program also portrayed the risk of injury in each area as high (red), medium (yellow), or low (green) and graphically represented the risk on an outline of a human figure in each area. When the offset frontal crash test was added in 1994, its results were reported in the same way. Also in 1994, the program began publishing tables comparing the results of the vehicles tested. In 1995, the Australia NCAP began summarizing full frontal and offset frontal head, chest, and leg test results by using bar charts to represent the percentage of risk of a life-threatening injury to drivers and to passengers. In 1996, the program began differentiating between upper and lower leg injuries, reported the results separately, and adopted the Insurance Institute for Highway Safety rating scale of Good, Acceptable, Marginal, and Poor. However, the program combined the scores for the full frontal driver and passenger tests with the score for the offset frontal driver test to arrive at an overall vehicle rating. According to Australia NCAP officials, subsequent research with focus groups supported the decision because the results indicated that consumers wanted the safety information in a simplified, summary form. In November 1999, to align with the Euro NCAP, the Australia NCAP first used a five-star system to report crash test performance. This system provided an overall rating along with a bar chart that enabled consumers to differentiate between vehicles with different scores that received the same number of stars. Today, the Australia NCAP makes vehicle rating information available on its Web site, through press releases, and through a safety brochure. 
Figure 45 shows how the program communicates its overall and pedestrian ratings to consumers on the Internet. According to Australia NCAP officials, the Australia NCAP also publishes the Crash Test Update, a brochure that provides new crash test results about twice a year. In addition to an overall star rating for each type of tested vehicle, the brochure presents star ratings with comparative bar graphs showing how well vehicles scored within the star levels. Figure 46 shows the brochure Australia NCAP officials provide for consumers. The European New Car Assessment Programme (Euro NCAP) provides consumers with an assessment of the safety performance of some new vehicles sold in Europe. The program was established and began rating vehicles in 1997. Its main purposes are to make comparative safety rating information available to consumers for vehicles in the same class and to provide incentives for manufacturers to improve the safety of their vehicles. The program is operated and funded by a consortium of six European governments—Catalonia, France, Germany, the Netherlands, Sweden, and the United Kingdom—and of various motoring and consumer organizations throughout Europe, including the General German Automobile Association (Allgemeiner Deutscher Automobil-Club e.V.); German Federal Ministry for Traffic, Building and Housing (Bundesministerium für Verkehr, Bau- und Wohnungswesen); United Kingdom Department for Transport; Dutch Ministry of Transport—Public Works and Water Management; FIA (Fédération Internationale de l'Automobile) Foundation for the Automobile and Society; Catalonia Department of Employment and Industry (Departament de Treball i Indústria); International Consumer Research and Testing; French Ministry of Equipment (Ministère de l'Equipement); Swedish Road Administration; and Thatcham. The Euro NCAP crash testing program was modeled on the U.S. NCAP (1979) and the Australia NCAP (1992). 
The decision process for Europe involves the use of technical working groups and subgroups to examine vehicle safety issues and make recommendations for change. Such groups are investigating the feasibility of incorporating such safety features as braking and handling, visibility and lighting, ergonomics, driver information, and whiplash into Euro NCAP. The automobile industry and public safety organizations may be involved in providing research or opinions, but the committees are free to make decisions they believe appropriate. Generally, decisions are made through two working groups, one for primary safety systems and one for secondary safety systems, that perform research and analysis. The Euro NCAP allows industry representatives to participate in the discussions of the subgroups of its two technical working groups. Also, the technical working groups and automobile manufacturers engage in direct dialogue in industry liaison meetings to address issues such as whiplash. Each member of the Euro NCAP is required to sponsor at least one vehicle for crash testing each year. The vehicles are normally acquired by the Euro NCAP Secretariat by various methods, including purchasing directly from dealers and selecting from manufacturers’ production lines. The Euro NCAP tests vehicles in categories—superminis, family cars, executive cars, roadsters, off-roaders, and multipurpose vehicles. The following further describes (1) the testing conducted, (2) the methods used for developing the vehicle crash ratings, and (3) the approaches taken to share the safety results with the public. The Euro NCAP performs three vehicle crash tests, a pedestrian protection test, and a child restraint test. The three crash tests are the 40 percent offset frontal test, the perpendicular side-impact test, and the side-impact pole test. 
The frontal test involves a moving test vehicle traveling at 40 mph (64 km/h) crashing into an offset deformable aluminum barrier where 40 percent of the vehicle’s width engages the barrier on the driver’s side. The deformable barrier used is a crushable aluminum honeycomb face attached to a solid barrier. The deformable structure is designed to replicate the essential characteristics of the front end of another car. Two instrumented 50th percentile Hybrid III dummies (each weighing about 194 pounds) are used to collect data during the crash and are placed in the front driver’s and front passenger seats. In the side-impact test, a moving trolley with a deformable barrier is towed at about 31 mph (50 km/h) into a stationary test vehicle at a 90 degree angle centered on the driver seating position. This test simulates a side-impact collision. The moving deformable barrier has a mass of 2,095 pounds (950 kg) compared with 3,015 pounds (1,367 kg) for the U.S. barrier. The European barrier’s face is smaller and much softer than the face of the barrier used in the U.S. NCAP. However, Euro NCAP officials said that because the barrier strikes a vehicle at a 90 degree angle, their side-impact test is more aggressive than NHTSA’s side-impact test. One instrumented 50th percentile EuroSID-II dummy (weighing about 176 pounds) is used to collect data during the crash and is placed in the front driver seat. The pole side-impact test consists of a vehicle placed on a platform and propelled at 18 mph (29 km/h) into a cylindrical pole. The pole has a diameter of 10 inches (254 mm), and its vertical axis is aligned with the front seat dummy’s head. One instrumented 50th percentile EuroSID-II dummy is used to collect data during the crash and is placed in the front driver’s seat. The pedestrian protection test evaluates the impact of dummy parts against the bumper, hood, and windshield areas of a vehicle. 
Adult and child-size dummy parts are propelled at specified areas of the hood and front bumper of a vehicle to simulate a 25 mph (40 km/h) car-to-pedestrian collision. The test simulates the impact of a lower leg against a bumper, a thigh against the lower edge of the hood, and adult and child heads against the upper portion of the hood. The child protection test evaluates a vehicle’s ability to protect children by assessing the performance of the vehicle’s child restraint system in front and side-impact tests. During these tests, two child-size dummies are placed in the manufacturer’s recommended child restraints in the rear seat of a vehicle. In the frontal test, a dummy with the weight and size of an 18-month-old child (about 24 pounds) is placed behind the passenger, and a dummy with the weight and size of a 3-year-old child (about 33 pounds) is placed behind the driver. In the side-impact test, the positions of the two dummies are reversed. The Euro NCAP bases its assessment of crashworthiness on three types of observations made during or after a crash test: (1) dummy measurements of forces to the body, used to assess injuries; (2) five measurements of vehicle deformation, used to assess the vehicle’s structural performance; and (3) post-crash inspection data for six areas, which are termed “modifiers” because problems in any one of them may result in a penalty that modifies the vehicle’s assessment score. In the offset frontal crash test, two instrumented Hybrid III dummies are positioned in the front driver’s seat and front passenger seat to measure injuries to four regions of the body: (1) head and neck; (2) chest; (3) knee, femur, and pelvis; and (4) legs and feet. The five structural measurements provide vehicle deformation data, indicating the amount and pattern of intrusion into the occupant compartment. 
The post-crash inspection provides information about air bag stability, steering column movement, A-pillar movement, structural integrity, hazardous structures in the knee impact area, and brake pedal movement. The dummy measurements and the vehicle deformation data are combined to generate a score—up to four points—for each body region. This score may be modified by findings from the post-crash inspection. In the side-impact and pole tests, injury measurements are recorded on one EuroSID-II dummy positioned in the front driver’s seat. These measurements provide data for assessing injuries to four body regions: the head, ribs, abdomen (chest or thorax), and pelvis. No structural or post-crash inspection data are gathered during these tests. Thus, the score for each body region is based on the dummy measurements alone. In the pedestrian test, readings taken from the adult and child-size dummy parts (head and lower limbs) are used to assess the risk of injury. The two different size dummy heads are tested at six different areas of the hood, and the lower limbs are tested at three areas, for a total of 18 impacts tested for each vehicle. Depending on the injury measurements recorded from the dummy parts, each impact can receive up to 2 points, and the maximum number of points that can be received is 36 points. See table 6. The child protection test consists of three assessments that are based on (1) dummy measurements and dynamic evaluations, (2) marking requirements for child restraint systems, and (3) a vehicle-based assessment. Points reflect the results of the three assessments. The first assessment uses dummy measurements taken from the two child dummies in the frontal and side tests, together with dynamic evaluations of ejection from the child restraint system and head contact within the vehicle. Another assessment evaluates whether the markings on the child restraint fully comply with the test requirements. 
The final assessment evaluates how easily the child restraint system can be used inside the vehicle. A combined star rating is used to show the protection level achieved in the offset frontal and side impact tests together. The score for this rating is the sum of the scores achieved in these two tests—up to 16 points for the frontal test and up to 18 points for the side test, for a maximum of 34 points. For both tests, each of four body regions can receive up to 4 points, for a cumulative score of 16 points per test, and for the side test, 2 additional points can come from an optional pole test, which assesses protection for only one body region—the head. The pole side-impact test is an option for the manufacturer of a vehicle that has head-protecting side air bags. Finally, up to 3 more points can be earned for having a safety belt reminder system. The points are translated into stars, as shown in table 7. If the crash tests demonstrate a high risk of a life-threatening injury, indicated by an injury score of 0 for the head, chest, abdomen, or pelvis, then a warning note is added to the overall rating. Euro NCAP uses a “struck star” to convey this warning. When the star is struck through, it highlights concern that there is a serious risk of injury in at least one vulnerable body region. These concerns are based on data from the offset frontal test for the head or chest and from the side-impact test for the head, chest, abdomen, or pelvis. A star cannot be struck because of findings from post-crash inspections showing the effects of modifiers. Euro NCAP provides a separate rating of one to four stars to show the level of pedestrian protection. The score for this rating sums the results of the 18 impact tests of dummy parts propelled into the specified areas of the bumper and hood. A vehicle can earn up to 2 points for each test, for a maximum of 36 points. The points are translated into stars, as shown in table 8. 
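The combined occupant scoring described above can be sketched in code. This is an illustrative sketch only: the point-to-star thresholds are given in table 7 and are not reproduced in the text, so the `STAR_THRESHOLDS` values below are assumed placeholders, and the function name is hypothetical.

```python
# Illustrative sketch of the Euro NCAP combined occupant rating described
# above. STAR_THRESHOLDS is an assumed placeholder -- the real point-to-star
# mapping is in table 7 and is not reproduced in the text.
STAR_THRESHOLDS = [(33, 5), (25, 4), (17, 3), (9, 2), (1, 1)]  # assumed

def combined_occupant_rating(frontal, side, pole_points=0, belt_reminder=0):
    """frontal: four region scores (head/neck, chest, knee/femur/pelvis,
    legs/feet), 0-4 each; side: four region scores (head, ribs, abdomen,
    pelvis), 0-4 each. Returns (total points, stars, struck-star warning)."""
    assert len(frontal) == 4 and len(side) == 4
    total = (sum(frontal) + sum(side)
             + min(pole_points, 2)        # optional pole test, up to 2 points
             + min(belt_reminder, 3))     # seat belt reminder, up to 3 points
    stars = 0
    for cutoff, star in STAR_THRESHOLDS:
        if total >= cutoff:
            stars = star
            break
    # Struck star: a 0 score for the frontal head or chest region, or for
    # any side-impact region, signals serious risk of life-threatening injury.
    struck = frontal[0] == 0 or frontal[1] == 0 or 0 in side
    return total, stars, struck
```

A vehicle scoring 4 points in every body region and earning the pole and belt-reminder points reaches the 37-point maximum implied by the text (16 frontal + 18 side + 3 belt reminder).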
Euro NCAP also provides a separate rating of one to five stars to show the level of child protection. Currently, the tests on which this rating is based can produce a maximum of 49 points, but the rating scale allows further points to be awarded for future developments in child protection. Table 9 shows how the points are translated into stars. Vehicle rating information is available on the Euro NCAP Web site, through press releases, and through popular consumer magazines. Figure 47 shows the ratings that the program makes available to consumers on the Internet—a front and side-impact rating, a pedestrian protection rating, and a child restraint protection rating. The pedestrian protection rating is intended to encourage manufacturers to start designing for pedestrian protection. The child restraint protection rating is based on a vehicle’s performance using the child seats recommended by that vehicle’s manufacturer. Specifically, the rating depends on the fitting instructions for the child seats, the car’s ability to accommodate the seats safely, and the seats’ performance in front and side impact tests. In addition to star ratings, the Euro NCAP uses color-coded dummy injury diagrams to show how specific body regions performed in the frontal, side, and pole crash tests. The color-codes are: Good (green), Adequate (yellow), Marginal (orange), Weak (red), and Poor (brown). The colored injury diagrams display the risk of injury to the various body regions, as shown in figure 48. The Euro NCAP divides its testing into two phases and releases the results twice a year, in November and June. The results are posted on the program’s Web site, issued in press releases, and published by What Car? (a British car magazine), Which? Car (a magazine owned and produced by British consumer associations), and the General German Automobile Association (ADAC) magazine. Other consumer magazines in Europe provide additional crash test information. 
The National Agency for Automotive Safety and Victims’ Aid (NASVA) conducts the Japan NCAP and is funded by the government through the Ministry of Land, Infrastructure, and Transportation. According to NASVA officials, the Automobile Assessment Committee, made up of 12 members appointed by the ministry, oversees the program. The committee includes four working groups, each focusing on specific areas: crash tests, tests of active safety systems such as brakes, pedestrian tests, and tests of child restraint systems. NASVA officials conduct research in these areas and propose changes to the program that must be approved by the committee. NASVA officials said that the Japan NCAP is funded through appropriations from the Compulsory Automobile Liability Insurance that every car owner must pay. The Japan NCAP began testing vehicles in 1995, starting with a full frontal collision test. The program added the side-impact test in 1999 and the offset frontal test in 2000. Vehicles are selected for testing on the basis of sales. By 2004, the program had evaluated 79 vehicles representing over 80 percent of those that were on the market at that time. Ratings for 60 of these vehicles were carried over from previous years’ testing, and ratings for 19 vehicles were based on tests performed in 2003. Testing is conducted at the Japan Auto Research Institute under the control and supervision of NCAP officials. The institute crash tests cars, minivans, and SUVs and performs other NCAP tests, such as the brake and pedestrian tests. The research laboratory has one track for conducting frontal and side-impact tests. In these tests, either the vehicle is towed to strike the barrier, or, in side-impact tests, the barrier is moved to strike the vehicle. In 2005, the institute plans to open a new test facility with multiple tracks that will enable researchers to conduct vehicle-to-vehicle crash tests at various angles. 
The Japan NCAP performs a variety of safety tests and rates vehicles according to the results. It conducts three types of crash tests—a full frontal test, an offset frontal test, and a perpendicular side-impact test. In addition, it performs a braking test, which measures the performance of an active safety system that enables a driver to avoid a crash. The program further assesses how easily doors are opened and occupants are removed after a crash and how well vehicles perform if they strike pedestrians. The program also evaluates how well child safety seats perform. The Japan NCAP is the only program that conducts both the full frontal and the offset frontal crash tests. Together, the two tests assess both the potential for injuries caused by intrusion and the effectiveness of the vehicle’s restraint system. The full frontal test is performed by towing a vehicle to collide with a rigid barrier at 55 km/h (about 34 mph). This test simulates a head-on collision between two vehicles of the same size traveling at the same speed. The offset frontal test involves towing a vehicle into a deformable barrier that represents the front end of another vehicle and simulates a head-on collision of two vehicles traveling at 40 mph. In this test, only a portion of the front end (40 percent) engages the barrier, and the impact on the vehicle body is greater than in the full frontal test because much of the crash energy is distributed to one side of the vehicle. Thus, there is the possibility of substantial vehicle deformation, which makes this test suitable for evaluating injuries caused to occupants by intrusion into the occupant compartment. The program uses a Hybrid III dummy that represents a man about 5 feet 10 inches tall and weighing about 185 pounds. The side-impact test propels a moveable deformable barrier weighing about 2,090 pounds into the driver’s and passenger’s side of the vehicle, simulating a perpendicular collision at 55 km/h (about 34 mph).
The barrier is shaped like the front end of a car, and because it is not rigid, its performance is intended to simulate a vehicle’s response in an actual collision. A EuroSID-I dummy is placed in the driver’s seat. This dummy is the same height as the Hybrid III dummy but weighs about 20 pounds less. The EuroSID-I dummy was designed to measure the risk of injury to the head, chest, abdomen, and pelvis. The Japan NCAP conducts a braking performance test that measures how far a vehicle travels before it stops and how stable it is at the time of braking when it is stopped abruptly while traveling at about 62 mph. The braking test is a test of an active safety system because it enables the driver to avoid a crash. The test is performed under wet and dry road conditions for a vehicle with a driver and a weight on the front passenger seat. To ensure consistent testing, Japan NCAP officials said, the dry road surface temperature must be 95.0 degrees plus or minus 18.0 degrees Fahrenheit and the wet road surface temperature must be 80.6 degrees plus or minus 9.0 degrees Fahrenheit because the temperature of the road surface affects the distance it takes to stop the vehicle. Japan NCAP officials also said that all braking tests must be performed at the same location because road surfaces vary and surface differences could affect test results. Professional drivers conduct the tests, and the speed of the vehicle and force with which the drivers depress the brake pedal are monitored electronically to ensure consistency. Three braking tests are conducted to be sure that the result is not due to a flaw in the testing process. Figure 49 illustrates the braking test. In addition, the Japan NCAP assesses and scores the ease with which doors can be opened and the dummies removed after a crash test. The purpose of the accessibility assessment is to rate how easily emergency responders can assist injured persons. 
The rating is based on whether the doors can be opened with one hand or two hands, or whether tools are needed to open them. The pedestrian test measures the effect of a pedestrian being hit by a vehicle traveling at about 22 mph if the pedestrian’s head strikes part of the hood or windshield. This test was initiated because pedestrian fatalities represent a high percentage of total vehicle fatalities in Japan. Dummy parts modeling the head of an adult or a child (head impactors) are projected toward the car hood from a testing machine. The force received by the head impactor is measured and then evaluated using a head injury criterion. The test is conducted on multiple points on each car, and the impact angles differ according to the shape of the front part of three types of vehicles—sedan, SUV, and van. Figure 50 illustrates how the test is performed. The pedestrian test is conducted on vehicles with three different body types, as shown in table 10. The Japan NCAP also assesses the safety performance of child seats in a car crash and the ease of using the seats. Child seats are installed in the rear passenger seats of a test vehicle stripped down to its body frame. The test uses dummies to represent a 9-month-old child and a 3-year-old child. The test vehicle is placed on a sled and subjected to a shock equivalent to that of the full frontal crash test. The Japan NCAP measures injuries to the head, neck, chest, and upper (femur) and lower (tibia) legs for both the full frontal and offset frontal crash tests. Points vary by body region, from 2 points for upper and lower leg injuries to 4 points for head, neck, or chest injuries, according to the extent of injuries as measured by crash test dummies. Vehicle deformation is measured after the crash test, and if certain limits are exceeded, a point is deducted from the score for one body area, according to where the deformation occurred.
In addition, weighting factors are assigned according to the frequency of injuries to these body areas in vehicle crashes. The weighted points for each body area are then combined to arrive at separate total point scores for the driver and the passenger in full frontal and offset frontal crash tests. The maximum score that a vehicle can achieve is 12 points because of the way the injuries are weighted. For the side-impact crash test, the Japan NCAP measures injuries to the driver’s head, chest, abdomen, and pelvis. Four points are assigned for each body area and then weighted according to the incidence of injuries in this type of accident, with lesser weights assigned to the abdomen and pelvis than to the head and chest. Again, the maximum score that a vehicle can achieve is 12 points, because of the way injuries to the driver are weighted. The Japan NCAP is the only program that adjusts its test results by weighting the injury scores according to historical crash data. NCAP officials said they can do this because the police are well trained to investigate every accident and provide thorough reports to the government. For the pedestrian test, a series of head injury scores is used to assign injury probability levels from 5 (the best) to 1 (the worst). The results are then combined to arrive at an overall score. According to NCAP officials, vehicles with hoods that are more flexible and compress upon impact can receive better scores than those that are rigid and leave no room between the hood and the engine for the impact to be absorbed. Child seats are evaluated according to their performance in a collision and their ease of use. For the collision test, overall ratings of Excellent, Good, Normal, and Not Recommended are assigned. The ratings are primarily based on the head and chest injury scores taken from the dummies used in the test. 
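The weighted body-region scoring described above, which caps the Japan NCAP frontal score at 12 points, can be sketched as follows. The actual weights are derived from Japanese crash data and are not given in the text, so the `WEIGHTS` values below are hypothetical placeholders chosen only so that the weighted maximum equals the stated 12 points; the function name is likewise illustrative.

```python
# Hedged sketch of the Japan NCAP weighted frontal scoring described above.
# Raw per-region maximums come from the text (4 points for head, neck, and
# chest; 2 points for upper and lower legs). WEIGHTS are hypothetical
# placeholders normalized so the weighted maximum works out to 12 points.
REGION_MAX = {"head": 4, "neck": 4, "chest": 4, "upper_leg": 2, "lower_leg": 2}
WEIGHTS = {"head": 1.0, "neck": 0.5, "chest": 1.0,
           "upper_leg": 0.5, "lower_leg": 0.5}  # assumed, not published values

def weighted_frontal_score(region_points, deformation_penalties=()):
    """region_points: raw dummy-based scores per body region.
    deformation_penalties: regions that each lose 1 point because post-crash
    deformation exceeded limits. Returns the weighted total (max 12.0)."""
    pts = dict(region_points)
    for region in deformation_penalties:
        pts[region] = max(0, pts[region] - 1)
    assert all(0 <= p <= REGION_MAX[r] for r, p in pts.items())
    return sum(WEIGHTS[r] * p for r, p in pts.items())
```

The same pattern applies to the side-impact score, with head, chest, abdomen, and pelvis regions and lesser weights on the abdomen and pelvis.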
Five child seat specialists assess the ease of installation, the ease of understanding the instructions, the product warning labels and markings to aid in installation, the structural design, and the ease of securing the child in the seat. For each area, the specialists assign points, from 5 (the best) to 1 (the worst). The scores given by the specialists are averaged and reported separately for each area. Initially, the Japan NCAP used a four-letter system to rate vehicles’ crashworthiness, in which “A” reflected the highest scores for performance and “D” reflected the lowest scores. As vehicles’ performance improved, more and more vehicles achieved an “A” rating. To help consumers better differentiate vehicles’ performance, NCAP officials expanded the range of ratings to include AA and AAA. This same scale was later converted to six stars. Many vehicles have achieved a five-star rating, and some have received a six-star rating for occupant protection. In addition to the star ratings, the Japan NCAP reports the percentage of possible points that each vehicle received and provides a bar chart indicating how well the vehicles performed in these tests. Figure 51 shows how the Japan NCAP communicates its ratings to consumers as two overall ratings—one for the driver’s seat and one for the passenger’s seat. The overall safety rating for the driver’s seat combines the results of the two frontal crash tests (full and offset) and the side-impact test. The overall safety rating for the passenger’s seat includes the results of the full frontal and offset frontal tests. The Japan NCAP also provides consumers with star ratings by type of test for the driver’s and passenger’s seats and makes the detailed test information available to consumers for each crash test, as shown in figure 52.
Consumers are also provided with ratings on how difficult it was to open the door after the test (openability) and how difficult it was to retrieve the dummy from the vehicle after the crash test (rescueability), as shown in figures 53 and 54, respectively. Although not shown as part of the crashworthiness rating, the ratings for the pedestrian tests are provided, as well as the ratings for the child restraint seats (Excellent, Good, Normal, and Not Recommended). Furthermore, the Japan NCAP has provided consumers with comparative information on vehicles’ braking capability on wet and dry pavements. In addition to those named above, Vashun Cole, Michelle Dresben, Colin Fallon, Kathleen Gilhooly, Doug Manor, Terry Richardson, Beverly Ross, Brian Sells, Jena Sinkfield, Stacey Thompson, and Frank Taliaferro made key contributions to this report.
In 2003, 42,643 people were killed and more than 2.8 million people were injured in motor vehicle crashes. Efforts to reduce fatalities on the nation's roadways include the National Highway Traffic Safety Administration's (NHTSA) New Car Assessment Program. Under this program, NHTSA conducts vehicle crash and rollover tests to encourage manufacturers to make safety improvements to new vehicles and provide the public with information on the relative safety of vehicles. GAO examined (1) how NHTSA's New Car Assessment Program crash tests vehicles, rates their safety, and reports the results to the public; (2) how NHTSA's program compares to other programs that crash test vehicles and report results to the public; and (3) the impact of the program and opportunities to enhance its effectiveness. NHTSA conducts three types of tests in the New Car Assessment Program--full frontal and angled side crash tests and a rollover test. Each year, NHTSA tests new vehicles that are expected to have high sales volume, have been redesigned with structural changes, or have improved safety equipment. Based on test results, vehicles receive ratings from one to five stars, with five stars being the best, to indicate the vehicles' relative crashworthiness and which are less likely to roll over. NHTSA makes ratings available to the public on the Internet and through a brochure. Other publications, such as Consumer Reports, use NHTSA's test results in their safety assessments. GAO identified four other programs--the Insurance Institute for Highway Safety's program and the New Car Assessment Programs in Australia, Europe, and Japan--that crash test vehicles and report the results to the public. They share the goals of encouraging manufacturers to improve vehicle safety and providing safety information to consumers. These programs conduct different types of frontal and side crash tests, and some perform other tests, such as pedestrian tests, that are not conducted under the U.S.
program. Only the U.S. program conducts a rollover test. The other programs measure test results differently and include more potential injuries to occupants in their ratings. They also report their test results differently, with all summarizing at least some of the scores or combining them into an overall crashworthiness rating to make comparisons easier. NHTSA's New Car Assessment Program has been successful in encouraging manufacturers to make safer vehicles and providing information to consumers. However, the program is at a crossroads where it will need to change to maintain its relevance. The usefulness of the current tests has been eroded by the growing number of larger pickups, minivans, and sport utility vehicles in the vehicle fleet since the program began. In addition, NCAP scores have increased to the point where there is little difference in vehicle ratings. As a result, the program provides little incentive for manufacturers to further improve safety, and consumers can see few differences among new vehicles. Opportunities to enhance the program include developing approaches to better measure the interaction of large and small vehicles and occupant protection in rollovers, rating technologies that help prevent crashes, and using different injury measures to rate the crash results. NHTSA also has opportunities to enhance the presentation and timeliness of the information provided to consumers.
Over the years, Congress has established many employment-related programs to help people with disabilities obtain jobs. Four of these programs, PWI, Supported Employment State Grants, Randolph-Sheppard, and JWOD, illustrate several different approaches taken by Congress to create more employment opportunities for people with severe disabilities—from providing job training and support to enabling individuals to run businesses. Congress created two of the four programs (PWI and Supported Employment State Grants) in the 1970s and the other two (Randolph-Sheppard and JWOD) in the 1930s. PWI, established in 1978, is a discretionary grant program that provides financial assistance for up to 5 years to organizations to assist individuals with disabilities in obtaining competitive employment. However, according to Education officials, recent grants have been awarded for 3-year periods. Grantees of the PWI program include community rehabilitation program providers, employers, labor unions, nonprofit agencies or organizations, trade associations, and others. The purposes of the PWI program are to (1) create and expand job and career opportunities for individuals with disabilities in the competitive labor market by engaging private industry as partners in the rehabilitation process, (2) identify competitive job and career opportunities and the skills needed to perform these jobs, (3) create practical settings for job readiness and job training programs, and (4) provide job placements and career advancements. PWI grantees must establish business advisory councils (BAC) composed of representatives of private industry, organized labor, individuals with disabilities and their representatives, and others. BACs are required, among other things, to identify jobs and careers available in the community and the skills necessary to perform them, and to prescribe appropriate training and job placement programs.
Seventy-nine grantees received about $22 million in fiscal year 2005 and served more than 10,000 individuals with significant disabilities. The Department of Education is responsible for administering and overseeing PWI and is required to: Conduct annual on-site compliance reviews—Education’s primary means of verifying the accuracy of the information grantees submit—of at least 15 percent of grant recipients, chosen at random. Submit an annual report to Congress that analyzes the extent to which the individual grant recipients have complied with the evaluation standards. For example, the project must serve individuals with disabilities that impair their capacity to obtain competitive employment. In selecting persons to receive services, priority must be given to individuals with significant disabilities. Have a performance reporting system that grantees can use to routinely submit program data that evaluates the grantees’ progress in achieving the stated objectives, the effectiveness of the project in meeting the purposes of the program, and the effect of the project on its participants. Established in 1978, the Supported Employment State Grants program provides funds to assist states in developing collaborative programs with appropriate organizations to provide supported employment services to individuals with the most severe disabilities who require these services to enter or retain competitive employment. Supported Employment State Grants funded services include a wide array of employment-related activities ranging from intensive on-the-job skills training to discrete post-employment services, such as job station redesign or repair and maintenance of technology to help individuals perform job functions, generally for up to 18 months after job placement. In fiscal year 2005, the grant program was funded at approximately $37 million.
The Supported Employment State Grants program is an integrated component of state VR programs, which are also overseen by Education. Title I of the Rehabilitation Act of 1973 authorizes a federal-state VR program to provide services to persons with disabilities so that they may prepare for and engage in meaningful employment. Education provided $2.6 billion in fiscal year 2005 in VR grants to the states and territories based on a formula that considers the state’s population and per capita income. Each state and territory designates a single VR agency to administer the VR program, except where state law authorizes a separate agency to administer VR services for individuals who are blind. State VR agencies provide services to individuals in 22 service categories, such as vocational counseling and guidance, job placement assistance, on-the-job supports, college or university training, rehabilitation technology, and interpreter services. State VR agencies that determine they will not be able to serve all eligible individuals who apply for services must develop criteria for prioritizing services to individuals with the most significant disabilities. Education reported that, as of fiscal year 2006, 40 of the 80 state VR agencies had such an order of selection. Oversight for Supported Employment State Grants is conducted as part of oversight of state VR programs, and Education is required to: conduct annual reviews of state VR programs that include collecting and reporting information on budget and financial management data, and an analysis of program performance, including relative state performance, based on the standards and indicators; and conduct periodic on-site monitoring of state VR programs. The Randolph-Sheppard program was created in 1936 to provide blind persons with gainful employment, enlarge their economic opportunities, and encourage their self-support.
While Randolph-Sheppard is under the authority of Education, the states are primarily responsible for operating their programs, and every state except Wyoming has established a vendor program. Each state that has a Randolph-Sheppard program is required to have a state licensing agency (SLA), under the auspices of the state VR program and approved by Education, to operate the program, including the authority to promulgate rules and regulations that govern the program. The SLAs are responsible for training, licensing, and placing people who are blind as operators of vending facilities (machines, snack bars, and cafeterias) located on federal and other properties. In addition, SLAs must annually submit information about their Randolph-Sheppard programs to Education, including information on the number of applicants and the number accepted, the number of vending facilities and vendors, and the total amount of vendor earnings. In fiscal year 2005, SLAs spent about $37 million in federal and state VR grant funds to help operate and support the program. In addition to VR funds, some states fund the program through optional set-asides from licensed vendors, which are a percentage of their revenues, and through the profits from vending machines located on federal properties that are not operated by licensed vendors. State funds are also used to operate the program. In total, more than $76 million was used to operate and support the Randolph-Sheppard program nationwide in fiscal year 2005. In fiscal year 2005, the Randolph-Sheppard program generated $661.3 million in total gross income, and the average annual earnings of vendors were $43,584. Over a 5-year period (fiscal year 2001 through fiscal year 2005), the number of vending facilities declined nationwide, from 3,193 to 3,080. Over the same period, the number of vendors decreased annually except in fiscal year 2005, as shown in figure 1.
While states are responsible for operating their programs, among other things, Education is required to: approve applications from a state’s VR agency to serve as the SLA, and approve the rules and regulations the SLA promulgates to implement the Randolph-Sheppard Act; conduct periodic evaluations of the program to determine whether the program is being used to its maximum potential; and convene arbitration panels and pay for arbitration to resolve vendor and SLA disputes. Established in 1938, JWOD is a federal procurement set-aside program designed to increase employment and training opportunities for persons who are blind or have other severe disabilities. Through this program, the government purchases commodities and services from nonprofit agencies employing workers who are blind or have severe disabilities. According to Committee for Purchase officials, in fiscal year 2006, federal procurement expenditures for goods and services provided by JWOD program suppliers totaled about $2.3 billion, and provided employment for about 48,000 people who are blind or have severe disabilities at more than 600 participating JWOD nonprofit agencies. The types of employment opportunities range from working in food service or providing janitorial services in federal office buildings to producing and/or assembling boxes and office supplies such as pens, notepads, file folders, and other goods. For fiscal year 2005, Committee for Purchase officials reported that JWOD workers earned an average of $9.49 per hour. The Committee for Purchase, which administers the program, received about $5 million in federal funds in fiscal year 2005 to support the activities of a 15-member, presidentially appointed board and 29 full-time program staff, including managing the JWOD procurement list. 
The Committee for Purchase is required by law to designate one or more central nonprofit agencies to facilitate the distribution of federal procurement contracts among qualified nonprofit agencies, and has designated two agencies for this purpose: NIB, which represents member nonprofit agencies employing individuals who are blind, and NISH, which represents its member nonprofit agencies that employ individuals with other severe disabilities. In addition to its duties related to establishing and maintaining a procurement list of goods and services that must be purchased through qualified JWOD suppliers, the Committee for Purchase is required to: establish rules, regulations, and policies to carry out the purposes of the JWOD program, and to provide that nonprofit agencies employing individuals who are blind have priority in obtaining JWOD contracts; monitor nonprofit agency compliance with Committee for Purchase regulations; inform federal agencies about the JWOD program and encourage their participation, and, to the extent possible, monitor federal agencies’ compliance with JWOD requirements; and study and evaluate its activities on a continual basis to ensure the effective and efficient administration of the JWOD Act. The Committee for Purchase has also established regulations that require the two central nonprofit agencies (NIB and NISH) to evaluate the qualifications and capabilities of nonprofit agencies that apply for contracts and provide pertinent data concerning the JWOD nonprofit agencies, such as their status as qualified nonprofit agencies and their manufacturing or service capabilities. Additionally, NIB and NISH are to monitor and inspect the activities of participating nonprofit agencies to ensure compliance with the JWOD Act and appropriate regulations.
For example, to maintain its status as a qualified nonprofit agency organized for the purposes of the JWOD program, an agency must employ persons who are blind or have severe disabilities to perform at least 75 percent of the work-hours of direct labor during the fiscal year to furnish such commodities or services (whether or not the commodities or services are procured under the JWOD Act). The Committee for Purchase’s regulations require that each nonprofit agency maintain employment files for persons with severe disabilities participating in the JWOD Program. Each file must contain either a certification by a state or local government entity or a written report signed by a licensed physician, psychiatrist, or qualified psychologist, reflecting the nature and extent of a participant’s disability or disabilities that qualify as severe. These reports must also state whether an individual with severe disabilities is capable of engaging in normal competitive employment and be signed by persons qualified to evaluate their work potential, interests, aptitudes, and abilities. Federal performance goals and measures have been established for three of the four programs we reviewed. Education has not established performance goals and measures for the Randolph-Sheppard program, although two of the four states that we visited had their own goals and measures. Education has one goal for the Supported Employment State Grants program, but the goal only provides an indirect measure of the program’s performance because the data also include individuals with significant disabilities who receive supported employment services funded under state VR programs. Education has established a performance goal for PWI, which is consistent with the purpose of the program. Finally, for the JWOD program, the Committee for Purchase recently revised its performance goals and established some targets. 
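The 75 percent direct-labor requirement described above is a simple work-hour ratio check, sketched below for illustration; the function name and inputs are assumptions, not part of any Committee for Purchase system.

```python
# Minimal sketch of the JWOD direct-labor requirement described above:
# persons who are blind or have severe disabilities must perform at least
# 75 percent of a qualified nonprofit agency's direct-labor work-hours.
# The function name and inputs are illustrative assumptions.
def meets_direct_labor_requirement(disabled_hours, total_direct_hours,
                                   threshold=0.75):
    """Return True if the agency meets the work-hour requirement."""
    if total_direct_hours <= 0:
        return False
    return disabled_hours / total_direct_hours >= threshold
```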
Education does not have GPRA performance goals for the Randolph-Sheppard program, and neither the Randolph-Sheppard Act nor its implementing regulations require them. According to Education officials, no formal federal performance goals or measures currently exist for the Randolph-Sheppard program, but they are under development and expected to be completed by April 2007. Although not specifically required by law, Education does collect some information related to program performance from the states. For example, Education collects information on total vendor income and the number of facilities and vendors. Education also collects information on the number of individuals who are blind or have disabilities who are employed by vendors, although there is no requirement for vendors to employ workers who have disabilities.

States may develop performance goals for their Randolph-Sheppard programs, and two (Arizona and Kansas) of the four states we visited had established performance goals. Arizona’s goals were to increase the number of licensed vendors and vending facilities. In fiscal year 2005, the state set a target of 32 vendors and five new facilities. However, Arizona did not meet these targets: it had 30 licensed vendors and added one new facility. Kansas’ fiscal year 2005 goal was that at least 90 percent of licensed vendors maintain or increase their level of income from the prior year, and the state reported that this goal had been exceeded in each of the past 3 fiscal years. According to program officials in New York and North Carolina, no state goals were established for their Randolph-Sheppard programs.

In the four states we visited, nearly $10 million, including more than $7 million in federal and state funds, was used to support operations for about 215 licensed vendors, as shown in table 1. 
In these four states, we also interviewed seven licensed vendors who operated businesses that ranged from full-service cafeterias to small convenience stores or canteens. We found that all of the licensed vendors we met with worked full-time, and most earned incomes that made them relatively self-sufficient. However, not all licensed vendors nationwide receive incomes that allow them to support themselves, and their incomes may be subsidized through program revenues generated by other vendors. Two states we visited were taking steps to increase vendors’ income by consolidating facilities. Regardless of their financial status, three of the seven licensed vendors we interviewed continued to receive financial benefits from other federal disability assistance programs, such as Social Security Disability Insurance. In addition, at least six of the seven vendors employed fewer than 10 workers, most of whom were not blind or did not have severe disabilities. However, some of the vendors we interviewed told us that they are interested in ways to reach out to and employ more workers who are blind or have severe disabilities. Further, states we visited told us about other program challenges, such as a decline in customers as a result of increased security in federal buildings and a consolidation of unprofitable facilities that reduced the number of opportunities available to vendors.

Education has a GPRA goal for the Supported Employment State Grants program that the department uses to indirectly measure the program’s performance. According to program officials, Education has not sought information that isolates the performance of federally funded Supported Employment State Grants because they are used together with state and other federal funds to provide supported employment services, as is often the case when funds from different sources are used to achieve an outcome. 
Officials told us a separate measure for the Supported Employment State Grants program would be an artificial distinction. The performance goal is for individuals who have significant disabilities to achieve high-quality employment. For this goal, Education only includes individuals with significant disabilities who have a goal of supported employment, that is, achieving competitive employment with support services such as rehabilitation technology or on-the-job supports. The measure is the percentage of individuals who achieve competitive employment, which Education defines as earning at least the minimum wage, and not less than the wages paid to workers without disabilities performing similar work, while working alongside workers without disabilities in an integrated setting. During fiscal years 2003 and 2004, Education exceeded its performance target for placing Supported Employment State Grants program participants in competitive employment. For fiscal year 2005, Education increased the performance target to 93 percent and achieved 92.6 percent, as shown in table 2.

Education has established a GPRA performance goal that includes four measures for the PWI program. The goal of the PWI program is to create and expand job opportunities for people with disabilities in the competitive labor market by engaging business and industry in the rehabilitation process. The four performance measures are consistent with the program’s goal. For example, one measure is the percentage of individuals served by the program who were placed into competitive employment. Another measure, cost per placement, was only recently established for fiscal year 2006, and performance data are not yet available. In recent years, Education has had mixed success in meeting the GPRA targets. For example, in fiscal years 2003 to 2005, the PWI program did not meet its target of increasing the percentage of individuals who were placed into competitive employment. 
However, it consistently exceeded its target for increased earnings over the same period, as shown in table 3. Education has revised its GPRA performance measures for the PWI program for fiscal year 2006. Specifically, Education will begin to measure the percentage of PWI projects whose cost per placement is within a specified range, which has yet to be determined. The agency will also measure the percentage of all individuals who exit the program and are placed in competitive employment. According to Education officials, this measure was added in response to recommendations by the Office of Management and Budget and will allow more accurate comparisons with other job training programs throughout the government.

We visited four PWI grantees in the four states and found that these projects set goals that are consistent with the goals of the PWI program, such as placing people in competitive employment. For example, one PWI grantee in Kansas serves individuals ages 16 and older with all types of disabilities. One goal of the project is to place 75 percent of the people it serves each year in competitive employment, which is higher than the GPRA target set by Education. According to agency officials, clients are being placed in jobs such as call centers and other customer service positions, earning average salaries of $9.25 to $10.00 per hour. In another example, one PWI grantee in New York has a goal to transition youth from school to work and targets its services to individuals ages 16 to 25 with a mental, physical, or emotional disability. One of the goals of the project is to place about 67 percent of the people served in jobs over 3 years. According to the grantee, a successful outcome in the program is competitive employment in at least a part-time position paying at least the federal minimum wage, and continued employment for at least 6 months. The PWI projects in Kansas and New York had just completed their first year of operations. 
For the first time, the Committee for Purchase developed performance goals and measures for the JWOD program in its fiscal year 2005-2007 strategic plan and updated this plan in October 2006, but it has not yet reported progress toward meeting these goals. While JWOD’s enabling legislation and regulations do not require goals, the strategic plan includes five performance goals and a number of measures for each of these goals. The current strategic plan includes some performance measures and targets, but some of the measures that are more qualitative in nature do not include targets, and it is unclear how JWOD will measure progress in these areas. In keeping with the overall mission of the JWOD program, the goals are aimed at increasing the number of job opportunities for people who are blind or have severe disabilities. One of the plan’s five goals is to expand employment opportunities. The other four goals include increasing customer satisfaction (JWOD customers are federal agencies), improving the efficiency of operations, expanding program support, and developing new markets for its products and services. However, these goals do not specifically address one part of the program’s mission, which is to increase training opportunities.

Furthermore, some of the performance measures are not clearly defined or may be difficult to measure, making it difficult to assess performance. For example, several measures involve “milestone tracking,” although the milestones are not provided. One of these measures will track progress toward annually updating and implementing a plan to address communication and information sharing with and among stakeholders. Further, JWOD has over 30 performance measures, which may make it difficult to identify performance problems. As we discussed in our June 1996 guide on implementing GPRA, performance measures should be limited to the vital few. 
Limiting measures to core program activities enables managers and other stakeholders to assess accomplishments and make decisions without an excess of data that could obscure rather than clarify performance issues. The JWOD performance goals and examples of measures are shown in table 4. All of the JWOD performance measures are listed in appendix II.

Education and the Committee for Purchase engage in a number of oversight activities for the programs they are responsible for, but their efforts to ensure compliance with applicable laws and regulations have been uneven and overall have provided little assurance of program accountability for two of the four programs reviewed. Specifically, Education has established oversight procedures for the PWI and the Supported Employment State Grants programs that, if consistently followed, should provide reasonable assurance of compliance with relevant laws and regulations. The agency is just beginning to conduct on-site monitoring of PWI grantees, which may be sufficient for testing the accuracy of the information used to monitor compliance. Education’s oversight of these two programs has generally been more active than its oversight of the Randolph-Sheppard program. Education relies primarily on self-reported data for its monitoring of the Randolph-Sheppard program and does not routinely analyze or report the data it collects. Finally, the Committee for Purchase has established procedures for monitoring and overseeing the JWOD program, but has prescribed regulations that delegate most of the responsibility for carrying out these procedures to two central nonprofit agencies that are also responsible for representing the interests of the JWOD nonprofit agencies they monitor, raising questions about independence. Furthermore, there are no procedures in place for the Committee for Purchase to address instances where the central nonprofit agencies fail to carry out their oversight responsibilities. 
Education regularly performs a number of oversight activities to ensure that PWI grantees are making progress toward project goals and complying with applicable laws and regulations. Specifically, program specialists told us they conduct quarterly monitoring calls with all PWI grantees in which they ask a series of 30 questions that help them identify and proactively resolve problems with individual projects. The questions address several areas, including progress toward meeting goals, activities of the BACs, interaction with the state VR agency, and fiscal management. Further, Education requires that PWI grantees submit annual reports that include detailed information about project activities and performance, and it informs grantees of this requirement as part of the application process. Education uses the project information it receives from grantees to identify those grantees that may be at risk of being out of compliance with program requirements and to target these grantees for additional assistance or for on-site reviews. Education also relies on the data it receives from grantees to provide information about grantees’ performance in its annual reports to Congress.

Although grantees are responsible for monitoring their own projects, Education is required to conduct random on-site reviews of 15 percent of PWI grantees annually. On-site reviews are the primary means by which Education can assess the accuracy of the performance data submitted by grantees. Education conducted 11 of the 12 required on-site reviews in fiscal year 2006 and had scheduled the remaining review for November 2006. However, it conducted only 3 of the 12 reviews required in fiscal year 2005 and none in fiscal year 2004, and therefore did not have enough information to provide reasonable assurance of the accuracy of the data submitted by grantees in those years. 
Although each of the PWI grantees that we visited had procedures in place to review the data they submitted to Education, a research organization’s evaluation of the program raised doubts about the accuracy of PWI data submitted by grantees in general. Specifically, reviewers found that about one-fifth of the PWI grantees surveyed (19 of 92) provided information on the number of persons placed in fiscal year 2001 that was inconsistent with the information they had submitted to Education. Education is also required to submit an annual report to Congress analyzing the extent to which individual grant recipients have complied with program evaluation standards. In fiscal years 2003, 2004, and 2005, Education met this requirement by providing summaries of the extent to which grantees had met program performance targets in its Performance and Accountability Report to Congress.

Education’s oversight of the Supported Employment State Grants program is integrated into its ongoing efforts to review and monitor state VR programs. During fiscal year 2006, Education revised its annual state plan review protocols and prepared a draft on-site monitoring plan for the VR program. Annual reviews include examining each state’s VR program plans and other documentation, such as required annual data reports on VR customers, services, and outcomes; caseloads; and financial accountability and data reporting procedures. Education’s October 2006 draft on-site monitoring protocols call for on-site reviews once every 3 years and are designed to verify and supplement the information it receives from the states regarding program performance and compliance. The reviews include examining case files and holding public hearings or other discussions with VR program consumers and advocates, as needed. Education has not yet conducted any on-site reviews using the revised protocols, but plans to conduct its first reviews beginning in fiscal year 2007. 
Once fully implemented, the annual reviews and on-site monitoring, along with state-level activities, should offer reasonable assurance that the Supported Employment State Grants program is in compliance with applicable laws and regulations and that the data states submit to Education annually are accurate. In addition, we found that all four states we visited had their own accountability procedures for ensuring that VR grant funds, including Supported Employment State Grants funds, were being used in accordance with federal laws and regulations. For example, the New York state VR agency has configured its automated information management system so that it authorizes payment for supported employment services only to providers that have a contract to provide these services, at the contracted rates. In addition, three of the four states had adopted performance-based contracting systems, whereby vendors providing supported employment services, such as job coaching or training, are required to demonstrate progress toward required milestones in order to receive payment from the state agencies, and VR counselors monitor their progress on a weekly, biweekly, or monthly basis.

Education provides little oversight of the Randolph-Sheppard program. Despite being required to conduct periodic evaluations of the program and being responsible for approving states’ rules and regulations for implementing the Randolph-Sheppard Act, Education has no formal procedures for evaluating state programs. In addition to lacking procedures, Education has performed few on-site monitoring reviews of SLAs in recent years. According to agency officials, Education has performed five on-site monitoring reviews since the beginning of fiscal year 2005 and had performed no recent site visits in the four states that we visited. 
Education’s oversight activities primarily consist of collecting data from states through annual reports from the SLAs that administer the program and providing requested technical assistance. Although the states report considerable information, including earnings data; the number of vendors, facilities, and individuals employed by vendors; types of facilities; costs; and sources of funding, Education does not test the accuracy of the data that it requires states to report, nor does the agency routinely analyze the data to assess program performance and management. As a result, Education cannot assess trends in performance, identify possible best practices, or help states that may need assistance. Upon request, Education also provides technical assistance to SLAs. According to Education officials, technical assistance and guidance are regularly provided to SLAs through telephone calls and written correspondence, including e-mails, with staff on specific questions.

In its oversight role, Education has not provided clear guidance to states on emerging issues that could have nationwide implications. Instead, Education responds to individual state concerns and convenes panels to arbitrate disputes that SLAs are unable to resolve. As a result, states have different policies regarding the permissibility of teaming agreements, which partner licensed vendors with commercial food operators in order to help manage larger food service operations at dining facilities on military bases. SLAs may have such agreements for various reasons, such as state program officials’ lack of expertise or licensed vendors’ inexperience running such facilities. In these cases, the licensed vendor generally does not operate the food service facilities, but rather manages some aspects of food service operations. For example, one of the states we visited (Kansas) had a teaming agreement. 
One of the states we visited (New York) does not currently allow teaming agreements, while another (Arizona) has no policy regarding them. The fourth state, North Carolina, permits teaming agreements but had none as of June 2006. Although Education has noted the increasing use of teaming agreements, it has not issued guidance to the SLAs directly addressing whether these agreements are in keeping with the spirit of the Randolph-Sheppard Act or whether they should be subject to limitations, despite concerns expressed by states and others. For example, the California State Auditor found that by allowing teaming agreements, the SLA had inadequately protected the interests of the state and licensed vendors. The SLA had not (1) ensured that written contracts existed before beginning operations, (2) analyzed the investment and return on investment of the teaming agreement for the program and licensed vendors, (3) adequately reviewed the teaming agreements, or (4) ensured that the commercial food service operators were paying their fair share of program costs. In addition, the Georgia State Auditor identified some concerns about teaming agreements, including the failure to define the duties of participating licensed vendors, resulting in these vendors having little, if any, responsibility for the overall operation and success of subcontracted food services. Further, the auditor noted that the program is not ensuring that the commercial food service operators are making progress toward the program’s goal that licensed vendors eventually assume responsibility for operating the facility.

Additionally, Education has not provided clear guidance or policies regarding when federal agencies may charge fees or commissions to licensed vendors as a condition of operating a vending facility on federal property. The Randolph-Sheppard Act has been interpreted to prohibit commissions unless federal agencies obtain written approval from the Secretary of Education. 
We found that licensed vendors have paid commissions or fees in some locations but not in others, and the federal agencies involved had not obtained approval from Education. For example, in one state we visited (Kansas), at least one licensed vendor was required to pay 1.5 percent of total revenues to the U.S. Postal Service in exchange for permission to operate vending facilities on the agency’s properties. However, Education has not prohibited such practices or required the Postal Service or other federal agencies charging commissions to obtain written approval. Furthermore, officials in Kansas have chosen not to dispute the fee. According to agency officials, Education has never approved such a limitation and cannot routinely monitor state-level or vendor-specific business negotiations, but it would intervene to bring the parties together in an attempt to resolve disputes or make clear the requirements of the Randolph-Sheppard Act.

Although Education has exercised little oversight of this program, the four SLAs that we visited had certain procedures in place that, if consistently operated along with other complementary processes and procedures such as management’s monitoring of performance over time, should help safeguard program assets. SLA officials obtained cash register receipts, daily reports on business activities, or monthly reports submitted by the vendors to review the financial operations of these programs. However, audits of programs in other states have reported certain issues relating to the accountability of state-operated programs under the Randolph-Sheppard Act. For example, the Michigan Auditor General reported that SLA staff did not comply with established equipment inventory control procedures for program equipment and could not account for equipment inventory, placing inventory at risk of misappropriation. 
Further, the California State Auditor reported that, among other things, the SLA has not followed up on missing financial reports from licensed vendors and has not been able to monitor licensed vendors’ financial problems properly. In addition, the auditor found that the SLA was not adequately pursuing past-due commissions owed to the program by private businesses operating vending machines on federal properties.

The Committee for Purchase has established procedures for monitoring and overseeing the JWOD program, but has delegated most of the responsibility for monitoring participating JWOD nonprofit agencies to two central nonprofit agencies. As of April 2006, NIB officials reported that they worked with 81 participating JWOD nonprofit agencies that employed individuals who are blind, and NISH officials reported that they worked with 552 JWOD nonprofit agencies that employed individuals with other severe disabilities. In particular, although the Committee for Purchase must approve the nonprofit agencies’ participation in the program, it relies on the central nonprofit agencies to certify that:

- 75 percent or more of the direct labor hours under JWOD contracts are performed by individuals who are blind or have severe disabilities and, if not, that there is a suitable plan in place to bring this percentage up to the required level;
- agencies maintain required documentation for each employee who is blind or has a severe disability;
- each of these agencies functions as a nonprofit entity serving individuals who are blind or have severe disabilities;
- agencies have a required job placement program; and
- agencies comply with applicable occupational safety and health standards.

The Committee for Purchase requires that the JWOD nonprofit agencies certify annually that they are in compliance with program requirements but does not routinely verify this information, relying instead on the central nonprofit agencies to do so. 
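The first of these certifications turns on a single ratio: direct labor hours performed by workers who are blind or have severe disabilities, divided by all direct labor hours for the fiscal year, compared against the 75 percent floor. A minimal sketch of that arithmetic follows; the class and function names are our own, chosen for illustration only.

```python
from dataclasses import dataclass

@dataclass
class LaborReport:
    """Illustrative annual direct-labor totals for one JWOD nonprofit agency."""
    hours_disabled: float  # direct labor hours by workers who are blind or have severe disabilities
    hours_total: float     # all direct labor hours for the fiscal year

def meets_direct_labor_ratio(report: LaborReport, threshold: float = 0.75) -> bool:
    """Check the 75 percent direct-labor-hours requirement."""
    if report.hours_total <= 0:
        return False  # no direct labor reported, so nothing to certify
    return report.hours_disabled / report.hours_total >= threshold
```

An agency reporting 7,600 qualifying hours out of 10,000 would pass (a ratio of 0.76), while one reporting 7,400 out of 10,000 would not (0.74); in the latter case, the certification described above calls for a suitable plan to bring the percentage up to the required level.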
According to agency officials, the Committee for Purchase performs about 20 field visits annually, visiting up to 3 agencies per visit, or about 60 of the more than 600 participating nonprofit agencies. At this rate, the Committee for Purchase is unable to satisfy its own requirement to perform an on-site compliance review at each fully compliant participating nonprofit agency every 5 years.

The Committee for Purchase’s regulations create at least two problems for NIB and NISH: the potential for a conflict of interest resulting from a lack of organizational independence, and disincentives to perform their monitoring duties effectively. Specifically, these regulations require that NIB and NISH, on behalf of the Committee for Purchase, monitor the compliance of JWOD nonprofit agencies but, at the same time, represent those agencies in their dealings with the Committee for Purchase. Moreover, the regulations also permit NIB and NISH to charge a fee, based on JWOD nonprofit agencies’ sales to the government, that does not exceed the limit set by the Committee for Purchase, and they require the nonprofit agencies to pay that fee in order to remain in good standing in the program. This system of compensation may create a disincentive for NIB and NISH to identify instances of noncompliance that could result in a JWOD nonprofit agency losing its contract, especially for those JWOD nonprofit agencies that are generating large volumes of JWOD sales. Finally, although the regulations and procedures provide for a number of duties that the central nonprofit agencies must perform, they do not specify actions the Committee for Purchase can take if the central nonprofit agencies fail to execute these duties.

NIB and NISH officials reported that they monitor JWOD nonprofit agencies’ compliance with relevant laws and regulations by conducting on-site reviews of nonprofit agencies every 3 years, and that they require quarterly statistical reports from the agencies they oversee. 
The Committee for Purchase has established procedures for these reviews that require each central nonprofit agency to use a standardized review sheet to assess whether the JWOD nonprofit agency is compliant in 11 different program areas, including the percentage of direct labor hours performed by individuals who are blind or have severe disabilities, documentation of an employee’s disability, and an evaluation of whether the individual is capable of competitive employment. The on-site reviews are the primary means for NIB and NISH to test the accuracy of the data that the JWOD nonprofit agencies submit, but the scope of the reviews may not be sufficient to provide reasonable assurance of the accuracy of all of the data. For example, NIB and NISH officials reported that they test the accuracy of the data on the percentage of direct labor hours by reviewing a sample of case files, but they do not verify other data, such as job placement and upward mobility statistics. Further, they do not always report to the Committee for Purchase the instances of noncompliance they find.

In the states we visited, reports from NIB’s and NISH’s on-site reviews generally showed that the JWOD nonprofit agencies were in compliance with program requirements, and most files contained the required documentation. Eleven of the 13 agencies that we visited provided documentation of the results of their most recent on-site reviews showing they were in compliance. However, in our limited reviews of 137 case files at these 13 agencies, we found that 5 of the 8 NISH agencies had at least 1 file that lacked the required medical documentation of a worker’s disability, and that 3 of these 8 NISH agencies had at least 1 file that lacked the required documentation on competitive employment. We also found that one of the five NIB agencies we visited had one case file that lacked the required medical documentation. 
In sum, 11 percent of the files we reviewed at the NIB and NISH agencies we visited lacked the required medical or competitive employment documentation.

A serious instance of noncompliance escaped detection by the responsible central nonprofit agency (NISH) and the Committee for Purchase. In this case, the National Center for the Employment of the Disabled (NCED) in El Paso, Texas, failed to use workers with severe disabilities to perform the required percentage of direct labor hours on its JWOD contracts, which were valued at over $200 million. Instead, NCED inflated its reported percentage by improperly including economically disadvantaged workers. The problems at NCED were detected not through routine monitoring, but rather through an anonymous tip to the Committee for Purchase, and they resulted in as many as 1,144 JWOD jobs going to individuals who did not have severe disabilities during fiscal years 2004 and 2005. The JWOD nonprofit agency took actions prescribed by the Committee for Purchase to come into compliance, including dividing the agency’s operations into two different units—one for JWOD work and one for commercial activities—and the Committee for Purchase is satisfied with the actions taken.

The definition of a severe disability in the law allows for differing interpretations, which may complicate efforts to ensure compliance for agencies that serve individuals who have severe disabilities. The statutory definition of blindness is fairly straightforward: visual acuity of not more than 20/200 in the better eye with correcting lenses, or a limited field of vision of not more than 20 degrees. 
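Because the blindness test reduces to two numeric thresholds, it can be expressed mechanically. The sketch below is illustrative only; the function and parameter names are our own, and actual eligibility determinations rest on the clinical reports described earlier.

```python
def is_statutorily_blind(acuity_denominator: int, field_of_vision_degrees: float) -> bool:
    """Apply the two thresholds in the statutory definition of blindness.

    acuity_denominator: the X in a best-corrected acuity of 20/X for the
        better eye (20/200 or worse qualifies, i.e., X >= 200)
    field_of_vision_degrees: extent of the visual field
        (20 degrees or less qualifies)
    """
    return acuity_denominator >= 200 or field_of_vision_degrees <= 20
```

The severe-disability definition that follows admits no comparable bright-line test, which is one reason it depends on clinical judgment.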
In contrast, the definition of a severe disability requires a diagnosis of a residual physical or mental impairment that limits functioning in one of five areas (mobility, communication, self-care, self-direction, and work tolerance or work skills), and a determination that the impairment has rendered the individual unable to engage in normal competitive employment over an extended period of time. Although the definition is subject to interpretation, the Committee for Purchase has offered little additional guidance to clarify when disabilities that might not normally be considered severe could be, such as the conditions under which a recovering alcoholic or a person with diabetes could be considered to have a severe disability. During our review of case files at 13 JWOD nonprofit agencies, we noted instances where it was unclear from the medical documentation that the disability was severe, such as a case in which the individual was diabetic, with no indicated symptoms, and another in which the individual was diagnosed as having an aggressive personality.

All four of these programs generally provide training and employment opportunities that might not otherwise be available for individuals who are blind or have severe disabilities. However, two are hampered by weaknesses in performance management and program oversight that signal a need for stronger federal leadership. Absent federal goals for the Randolph-Sheppard program and routine analyses and reports from Education on states’ program operations and performance, little is known about how this program is improving the lives of participants. Having such information about outcomes is an important component of any program, and essential during times of fiscal austerity. 
Further, by not exercising more oversight and issuing clear guidance to all states on emerging issues that could affect program participants, Education may be missing an important opportunity to help states improve program operations or proactively respond to these issues. While recognizing that there may be increased costs for improved oversight, these costs could be minimized by, for example, monitoring Randolph-Sheppard activities as part of Education’s oversight for the VR program. Although the Committee for Purchase has made significant progress in developing goals for the JWOD program, some of the goals lack key elements—clear measures and performance targets. Also, the current approach for overseeing nonprofit agencies operating under the JWOD program poses difficult challenges for the two central nonprofit agencies in managing the conflicts of interest that may exist because of their lack of organizational independence, and therefore demands strong and effective oversight from the Committee for Purchase. Ensuring program integrity is particularly important for JWOD since nonprofit agencies are given a competitive advantage over private business and industry in the federal procurement system to ensure that opportunities are provided to individuals with severe disabilities. 1. To improve program performance management and oversight, we recommend that the Secretary of Education provide more effective leadership of the Randolph-Sheppard program by: establishing performance goals that identify desired programwide outcomes and assess states’ licensed vendor programs’ performance as a whole against those goals; being more proactive in disseminating clear, consistent, and routine guidance about program requirements and prohibited practices to federal agencies and states; and strengthening its monitoring of SLA and Randolph-Sheppard program performance in a cost-effective manner. 2. 
To improve program performance management, we recommend that the Chairperson of the Committee for Purchase assess goals and measures for JWOD to ensure that they are clear and measurable and that they continue to capture key aspects of program performance as the Committee for Purchase continues to develop its performance management system. 3. To help ensure that JWOD nonprofit agencies comply with program laws and regulations, we recommend that the Chairperson of the Committee for Purchase improve procedures for overseeing these agencies. This could include requiring the central nonprofit agencies to enter into written contracts with the Committee for Purchase that clearly lay out their oversight responsibilities and the consequences for failing to fulfill them, providing a means of compensating the central nonprofit agencies for their services that gives them an incentive for effective enforcement, or having the Committee for Purchase assume greater responsibility for oversight of JWOD nonprofit agencies by performing more on-site compliance reviews. We received written comments on a draft of this report from the Department of Education and the Committee for Purchase. Education and the Committee for Purchase generally agreed with our recommendations and provided information on activities they had underway or planned. Education agreed that it should provide more effective leadership of the Randolph-Sheppard program and commented that the actions we recommended were consistent with the steps the program is taking to improve program administration. Some of the steps Education cited included developing appropriate performance goals, enhancing its efforts to provide clear and consistent guidance, and improving program monitoring. We believe these efforts will help to improve program performance management and oversight. The Committee for Purchase agreed that its performance goals and measures for the JWOD program should be assessed to ensure that they are clear and quantifiable. 
However, the Committee for Purchase commented that its regulations did not intend for “training” to be taken literally as a mission output and, therefore, the agency did not establish a separate goal for training activities. Rather, the Committee for Purchase stated that it viewed training as an important, but incremental activity that equips persons who are blind or have severe disabilities with the knowledge and skills necessary for employment, which it considers the paramount goal of the program. Also, the Committee for Purchase believes the reporting requirements for establishing a separate goal for training would burden the nonprofit agencies. To avoid confusion over the Committee for Purchase’s goals in the future, the agency plans to clarify its regulations. While we believe that training is key to preparing persons who are blind or have severe disabilities for employment, we can understand the Committee for Purchase’s view that the paramount program goal is employment. Clarifying the regulations regarding the Committee for Purchase’s intent with respect to the role of training may make it clear that a separate goal for training is not essential. With respect to ensuring effective monitoring and oversight, the Committee for Purchase agreed that more guidance was needed to help ensure that JWOD nonprofit agencies comply with program laws and regulations. Additionally, the Committee for Purchase commented that it has recently begun taking steps to address possible conflicts of interest between the two roles played by the central nonprofit agencies. The Committee for Purchase also commented that it is considering establishing other oversight and compliance mechanisms and in its proposed fiscal year 2007 budget included three new positions and additional funding for oversight, compliance monitoring, and program review. 
We believe the Committee for Purchase’s proposed actions are positive steps toward helping to ensure that JWOD nonprofit agencies comply with program laws and regulations. Education’s and the Committee for Purchase’s comments appear in appendixes III and IV, respectively. Both agencies also provided technical comments, which we have incorporated into the report as appropriate. We are sending copies of this report to the Secretary of Education, the Chairperson of the Committee for Purchase, relevant congressional committees, and others who are interested. Copies will also be made available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix V. The objectives of our study were to assess to what extent (1) performance goals and measures have been established for these programs and (2) the agencies responsible for these programs have established adequate procedures for overseeing program implementation and assuring laws and regulations are followed. To determine what performance goals and measures have been established for these programs, we reviewed program laws, guidance, and performance-related documents. To obtain additional information about the performance goals and measures, and to determine the extent to which the agencies responsible for these programs have established adequate procedures for overseeing program implementation and assuring that laws and regulations are followed, we (1) reviewed federal laws, regulations, and guidance to determine the programs’ requirements; and (2) interviewed agency officials at the Rehabilitation Services Administration (RSA) within the Department of Education, and the Committee for Purchase from People Who Are Blind or Severely Disabled (Committee for Purchase). 
In addition, we interviewed officials of the two central nonprofit agencies—the National Industries for the Blind (NIB) and NISH—that have been delegated certain oversight responsibilities for Javits-Wagner-O’Day (JWOD) member nonprofit agencies by the Committee for Purchase. To obtain additional information about program goals and measures and oversight for the four programs, we conducted site visits to four states (Arizona, Kansas, New York, and North Carolina). During these visits, we met with state VR agency officials to discuss the Supported Employment, Projects with Industry (PWI), JWOD, and Randolph-Sheppard programs. In addition, we visited 4 PWI grantees, 13 JWOD nonprofit agencies, and 7 Randolph-Sheppard vendors. Several criteria were used in selecting the states to visit. To be considered, a state had to have all four of the employment-related disability programs operating in it. States varied in the administration of their VR programs, with about half of all states having two separate programs—one for general VR and a separate one for the blind—and the remaining states having only one VR program. We selected two states (New York and North Carolina) that had both general and blind VR programs, and two states (Arizona and Kansas) that had one VR program that served all people with disabilities seeking employment-related assistance. In addition, we selected states based on a review of information about the characteristics of NIB and NISH member nonprofit agencies to include both large and small, urban and rural, and different kinds of work performed (products and services). States were also selected to include geographic diversity. During the state visits, we met with officials representing each of the four programs. 
For Education’s programs, we interviewed local officials of nonprofit agencies with PWI grants, state program administrators for the state-operated Randolph-Sheppard programs as well as licensed vendors, and state VR officials responsible for administering the Supported Employment State Grants program. For JWOD, we interviewed the chief executive officers, or their representatives, of the JWOD nonprofit agencies with current federal contracts to provide goods and/or services. In addition, during these meetings, we collected documentation to ascertain how the federal and two central nonprofit agencies were monitoring their respective programs. We also reviewed the records of 137 workers who were blind or had severe disabilities, selecting some records at each of the 13 JWOD nonprofit agencies we visited. The records were randomly selected from lists of current JWOD workers. A random number generator was used to assign a number to each name on the active roster, and records were selected on the basis of the number—starting with the lowest number. During the record review, we assessed whether the agencies’ files of workers who were blind or had severe disabilities contained required medical documentation and an assessment of ability to work in competitive employment. We determined that the fiscal and program data we used in this report were reliable for our purposes. To make this determination, we assessed the reliability of fiscal and programmatic data by interviewing officials knowledgeable about the data and the steps they take to ensure accuracy. For Supported Employment, prior GAO work had systematically tested relevant variables, including all 22 variables of the services provided. In addition, for this engagement we obtained documentation from two states (Kansas and New York) that described the states’ procedures for checking the reliability of their data. 
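The record-selection step described above (assign each name a random number, then take the records with the lowest numbers) can be sketched in a few lines. This is an illustrative reconstruction, not the actual tool used, and the roster names are hypothetical:

```python
import random

def select_records(active_roster, sample_size, seed=None):
    """Assign a random number to each name on the active roster and
    select the records with the lowest assigned numbers."""
    rng = random.Random(seed)  # seeding makes the draw reproducible
    numbered = sorted((rng.random(), name) for name in active_roster)
    return [name for _, name in numbered[:sample_size]]

roster = [f"worker_{i:03d}" for i in range(1, 51)]  # hypothetical roster
sample = select_records(roster, sample_size=10, seed=1)
print(len(sample))  # 10 records drawn at random
```

Sorting on the random numbers is equivalent to drawing a simple random sample without replacement, so each worker on the roster has the same chance of selection.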
Programmatic data collected by the Committee for Purchase for the JWOD program and by Education for the PWI and Randolph-Sheppard programs were self-reported by local program officials. For example, reviews of the information reported by officials for the PWI program were generally performed by project managers. For fiscal reporting, however, we found that JWOD nonprofit agencies, PWI grantees, and licensed vendors generally had stronger systems and procedures for accounting for financial data. For example, state licensing agency (SLA) officials used cash register receipts and routine reports on business activities submitted by the licensed vendors to the SLAs to review the financial information for these programs. In addition, licensed vendors were required to complete merchandise inventories at least once each year. We also reviewed other available reports on the Randolph-Sheppard program of the states visited, and reviewed the findings and recommendations of state audit reports from four other states—California, Georgia, Michigan, and South Carolina—that had evaluated all or certain aspects of their state-operated Randolph-Sheppard programs in recent years. In addition, we interviewed officials of agencies engaged in disability research and advocacy at the national level to learn more about each of the objectives. These organizations were the Council of State Administrators of Vocational Rehabilitation, Disability Policy Collaboration, Easter Seals, Goodwill Industries International, National Council on Disability, and the National Council of State Agencies for the Blind. We also met with officials of the General Services Administration, a federal procurement agency and partner of the JWOD program. We conducted our work between March 2006 and December 2006 in accordance with generally accepted government auditing standards. 
Percentage increase in direct labor hours performed by people who are blind or have other severe disabilities on JWOD products and services. Percentage increase in the number of people who are blind or have other severe disabilities employed in direct labor positions on JWOD products/services. Percentage increase in the number of people receiving benefits versus not receiving benefits. Percentage decrease in the number of people receiving less than the federally mandated minimum wage or Service Contract Act wage rate, segmented by disability. The number of employees who are blind or have other severe disabilities who are promoted into a direct labor job, other than supervisory or management positions, which includes increased wages and/or fringe benefits, not attributed to cost of living or productivity increases of less than 20 percent. Promotions can be movement between JWOD and non-JWOD jobs. The number of employees who are blind or have other severe disabilities who are promoted into an indirect labor job requiring supervisory, management, or technical skills, that included increased wages and/or fringe benefits, not attributed to cost of living raises. The number of employees who are blind or have other severe disabilities who leave the nonprofit agency through competitive or supported employment placements. Partner with federal customers to increase customer satisfaction and loyalty, so that the JWOD program becomes their preferred source for products and services. Federal agency scorecard that evaluates the level of satisfaction with JWOD products, services and/or customer experience among key federal agencies, using a stoplight or similar summary format. Time to resolve customer questions or complaints received via the central customer feedback mechanism(s) or other means of communication. Increased customer satisfaction with quality, timeliness, and price, based on customer surveys and/or alternative qualitative research (e.g., focus groups). 
Improve efficiency and effectiveness of the JWOD program by streamlining and automating processes and procedures, and improving communication, while continuing to ensure program integrity. Overhead cost as a percentage of JWOD program direct labor hours, calculated as total Committee for Purchase budget plus central nonprofit agencies’ operating and supporting costs (excluding capital expenditures), divided by total number of direct labor hours, segmented by overall program (Committee for Purchase plus central nonprofit agencies’ overhead), National Industries for the Blind (NIB) and NISH. Reduction in the cycle time for the addition of a new JWOD product or service to the procurement list. Percentage increase in sales of products through commercial distribution channels, segmented by product category. Ranking of commercial distributors, evaluated against consistent program performance expectations, including compliance with the Committee for Purchase’s Essentially The Same (ETS) requirements, segmented by product category. Milestone tracking of evaluation of commercial distribution processes, including staff resources and financial resources. Percent of information technology projects on which the Committee for Purchase, central nonprofit agencies, and nonprofit agencies collaborated to increase efficiency and exchange of information. Decrease in the percentage of JWOD nonprofit agencies found out of compliance, segmented by reason. Consider a future measure linked to the results of governance and executive compensation actions. Expand awareness, understanding, and preference for the JWOD program within the public, Congress, federal agencies, the disability community, and other JWOD stakeholders through effective communication and information sharing. Effectiveness of communication and information sharing measured by increased percentages in awareness, familiarity (understanding) and preference, segmented by key stakeholders. 
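The overhead measure above reduces to simple arithmetic: total Committee for Purchase budget plus the central nonprofit agencies’ operating and supporting costs (excluding capital expenditures), divided by total direct labor hours. A minimal sketch, using entirely hypothetical figures (none of these dollar amounts come from the report):

```python
def overhead_per_direct_labor_hour(committee_budget,
                                   cna_operating_and_support_costs,
                                   total_direct_labor_hours):
    """Overhead cost per JWOD direct labor hour: Committee for Purchase
    budget plus central nonprofit agencies' operating and supporting
    costs (capital expenditures already excluded), divided by total
    direct labor hours."""
    total_overhead = committee_budget + cna_operating_and_support_costs
    return total_overhead / total_direct_labor_hours

# Hypothetical figures for illustration only
rate = overhead_per_direct_labor_hour(5_000_000, 56_000_000, 100_000_000)
print(round(rate, 2))  # 0.61 dollars of overhead per direct labor hour
```

As the measure’s definition notes, the same calculation can be segmented by computing it separately for the overall program, NIB, and NISH.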
Milestone tracking of annual update and implementation of a plan that addresses communication and information sharing with and among both internal and external stakeholders. Analysis of program-level communications execution to ensure that program resources are used in support of the strategic communications plan. Analysis of program-level publications, events, and other communications tools to evaluate message alignment. Facilitate nonprofit agency adoption of program messaging and branding. Awareness, understanding, preference among federal customers, segmented by Department of Defense and civilian agencies. Awareness, understanding, preference for “the disability community,” comprised of government policy makers, academia, and private membership or advocacy organizations for people who are blind or have other severe disabilities. Among members of congressional committees or subcommittees with oversight or other significance for the JWOD program, number who have been educated about the JWOD program and/or actively engaged with their local JWOD-participating nonprofit agency(ies). Strategically develop new markets and expand existing markets in which the JWOD program can provide best value products and services to federal customers in order to expand employment opportunities that meet the needs of people who are blind or have other severe disabilities. Milestone tracking of establishment and implementation of a program market development plan that addresses existing customers, existing products/services, new customers, and new products/services. Percent increase in the employment of people who are blind or have other severe disabilities under the JWOD program, measured in (1) actual direct labor hours, (2) actual jobs, (3) projected direct labor hours on procurement list additions, and (4) projected jobs on procurement list additions by key market segment. JWOD goal achievement, by agency and overall federal government. 
Milestone tracking of establishment and implementation of a strategy for greater cooperation between JWOD and the small business community, which may explore counting appropriate JWOD awards toward the annual Small Business procurement goals and/or the federal government’s inclusion of disability-owned businesses within the small business measure categories (relates to leveraging the JWOD program to create additional jobs in the commercial sector). Milestone tracking of implementation of a strategy for greater cooperation between JWOD and Randolph-Sheppard programs. Milestone tracking of establishment and implementation of a strategy for greater cooperation with service-disabled veterans’ businesses. The following individuals made important contributions to this report: Shelia D. Drake, Timothy Hall, Regina Santucci, Don Allison, Rachael Valliere, Daniel Schwimer, Walter Vance, and Robert Owens.
Congress has created 20 federal employment-related programs that are aimed at helping people with disabilities obtain jobs. Little is known about the effectiveness and the management of some of these programs. GAO was asked to review four of these programs; the Department of Education (Education) oversees three--Projects with Industry (PWI), Supported Employment State Grants, and Randolph-Sheppard. An independent federal agency, the Committee for Purchase, oversees the fourth, Javits-Wagner-O'Day (JWOD). Specifically, GAO assessed the extent to which (1) performance goals and measures have been established for these programs and (2) the agencies responsible have established adequate oversight procedures. We reviewed program planning and performance information, interviewed agency officials, and visited each of the four programs in four states. Three of the four programs have federal performance goals. No federal performance goals or measures currently exist for the Randolph-Sheppard program, which provides opportunities for individuals who are blind to operate vending facilities on federal properties. Without goals, it is difficult to assess the program's performance, but Education officials told GAO they are developing them. Education has a goal and a measure for the Supported Employment State Grants program--a federal grant program that provides job coaching and other support to help individuals with severe disabilities secure jobs. The goal indirectly measures the program's performance because grant funds are mixed with other funding sources to provide supported employment services. Education has also developed one goal for the PWI program--a federal grant program that helps individuals with disabilities obtain competitive employment--that is consistent with the mission of the program. 
The goal is to create and expand job opportunities for individuals with disabilities in the competitive labor market by engaging business and industry, and one of the measures tracks the percentage of individuals placed in employment in work settings making at least minimum wage. The Committee for Purchase, which oversees the JWOD program--a program that helps to create jobs through the federal property management and procurement systems--first developed federal goals and measures for its fiscal year 2005-2007 strategic plan and has since revised them. The revised measures still have limitations, such as not being clearly defined or being difficult to measure. Education's and the Committee for Purchase's oversight of the four programs has been uneven. Education has established procedures, such as on-site reviews, for the PWI and Supported Employment State Grants programs that, if consistently followed, would provide reasonable assurance that the programs are in compliance with applicable laws and regulations. However, Education conducts limited oversight of the Randolph-Sheppard program. For example, Education does not routinely analyze or report the data it collects from states and has provided little guidance to ensure states comply with laws or consistently interpret program requirements. One area in which Education has not provided sufficient guidance is the circumstances under which federal agencies may charge fees to licensed vendors operating vending facilities on their properties. As a result, vendors in some locations were paying commissions or fees but those in other locations were not. Finally, the Committee for Purchase delegates most of its oversight responsibilities to two central nonprofit agencies that also represent the interests of the JWOD nonprofit agencies they oversee. 
This arrangement, as well as the fact that they receive a percentage of the total value of the contracts from the JWOD nonprofit agencies, raises questions about their independence and gives them little incentive to identify instances of noncompliance that could result in the JWOD nonprofit agency losing its federal contract.
For the purposes of this report, an air-rail connection refers to a connection between an airport terminal and an intercity passenger rail station (in other contexts, an air-rail connection may refer to a connection between an airport terminal and an intracity rail station that serves other forms of local rail, such as commuter rail or a subway system). An air-rail connection facilitates mobility between a rail station and an airport terminal through a variety of modes and methods, such as an airport shuttle, local transit connection, automated people mover or guideway car, or by walking. Depending on the extent of the connectivity, intercity passenger rail can perform three main roles for air passengers. First, intercity passenger rail may serve as a short-distance connection to the nearest local airport from a metropolitan area along a more extensive intercity rail corridor. Second, intercity passenger rail may serve as a competitive alternative to air travel. For example, for distances less than 500 miles, our prior work has shown that intercity passenger rail, particularly high-speed rail, offers some potential advantages over air travel, including reduced times for security screening and baggage checks. Third, intercity passenger rail can serve as part of an integrated intercity transportation solution with air travel, where the passenger travels significant distances using both modes. For these types of air-rail connections, travel may be further integrated by code-sharing, which refers to the practice of airlines applying their names and selling tickets to rail service operated by other organizations, such as Amtrak. Amtrak provides intercity passenger service to 46 states and the District of Columbia, operating over a 22,000-mile network, mainly using track owned by freight railroads. Amtrak owns about 655 miles of rail lines, primarily on the Northeast Corridor between Boston, Massachusetts, and Washington, D.C. 
Most of Amtrak’s passengers travel within the Northeast Corridor or over relatively short distances, though Amtrak also operates a number of long-distance routes across the country. The speed of service varies across the country. For example, according to Amtrak, its Heartland Flyer service connecting Oklahoma City, Oklahoma, and Fort Worth, Texas, averages about 50 miles per hour (mph) over the 206-mile corridor while its Acela Express higher-speed service averages less than 80 mph throughout the Northeast Corridor (reaching top speeds up to 150 mph). While Amtrak’s Acela Express service is currently the fastest intercity passenger rail service in the United States, California has begun developing a 520-mile high-speed rail line designed to operate at speeds up to 220 mph. Transportation projects at airports are typically initiated and developed by local transportation agencies, including some combination of state departments of transportation, local planning bodies, and other local agencies. While roles may vary, one or more state and local transportation agency will generally take the lead in project development and implementation. Airports typically are also heavily involved with developing intermodal capabilities on airport property. This is especially true if the project involves construction of a major intermodal facility. For example, the Miami International Airport, working in cooperation with the Florida Department of Transportation, has been one of the leaders in the development of the Miami Intermodal Center, which will provide on-site access to Amtrak, multiple other rail systems, local transit services, and a rental car center through the use of an automated people mover. Airlines also play a role in developing intermodal projects at airports. 
Use and lease agreements between airlines and airports are a major revenue source for most large airports, and because of this financial arrangement, airlines may have influence in or participate in airport decision making. The ability of airlines to participate in decision making depends on the specific airport and the structure of the lease agreements between the airport and airlines serving that airport. Amtrak generally becomes involved in the planning process at airports when a state or local government proposes a project that could potentially affect its intercity passenger rail service. (An automated people mover is a guided transit mode with fully automated operation, featuring vehicles that operate on “guideways” with exclusive right-of-way, such as an automated monorail system.) Additionally, FAA’s 2012 reauthorization legislation directs the Secretary of Transportation to encourage airport planners to consider passenger convenience, airport ground access, and access to airport facilities during the development of intermodal connections on airport property. Similarly, the Passenger Rail Investment and Improvement Act of 2008 (PRIIA) authorized development of high-speed intercity passenger rail corridors, and the American Recovery and Reinvestment Act of 2009 (Recovery Act) appropriated $8 billion to fund development of these corridors and intercity passenger rail projects. In June 2009, the Federal Railroad Administration (FRA) established the High-Speed Intercity Passenger Rail (HSIPR) program, which provides discretionary grants for high-speed or intercity passenger rail projects. In allocating funds, PRIIA directed FRA to give greater consideration to projects that, among other things, encourage intermodal connectivity among train stations, airports, subways, transit, and other forms of transportation. However, federal policy for surface transportation, aviation, and passenger rail is established through separate legislation. 
For example, the planning and funding for highway and transit projects are addressed under the Moving Ahead for Progress in the 21st Century Act, the planning and funding of U.S. airports are addressed under the FAA Modernization and Reform Act of 2012, and the planning and funding for intercity passenger rail are addressed under PRIIA. While the federal government does not provide funding specifically for air-rail connections, it has established a number of other funding mechanisms that can be used to enhance elements of air-rail connectivity. (See app. III.) Most federal funding for transportation projects is provided through grant programs administered by the individual modal administrations and is reserved for improvements specific to that mode. For example, most direct federal financial support for airport capital projects has been provided through grants from FAA’s Airport Improvement Program (AIP). While AIP grants may be used to fund intermodal projects, an airport’s use of its funds is generally restricted to an airport project that is owned or operated by the airport sponsor and that is directly and substantially related to the air transportation of passengers or property. Airports meeting these restrictions have used AIP funds to fund portions of light rail and transit (such as subway or bus) projects. Funding for intercity passenger rail has been provided in the form of operating and capital subsidies to Amtrak, as well as through the HSIPR grant program. Federal oversight of air-rail projects is primarily divided across DOT’s respective modal administrations, though DOT has established some practices to coordinate oversight of intermodal projects. For example, for an air-rail connection project, the aviation component is overseen by FAA, while the rail component is overseen by FRA. 
As another example, according to DOT, its Research and Innovative Technology Administration (RITA) works closely with DOT’s modal administrations to improve intermodal cooperation, solve transportation challenges that cut across modal boundaries, and remove barriers to intermodal projects through a variety of research efforts. In addition to these efforts, in 2012 DOT established a working group consisting of representatives from each modal administration to track intermodal initiatives and projects. The goal of the working group is to provide non-monetary resources, such as recommendations of policies to promote intermodal transportation projects, including air-rail connectivity projects. RITA is responsible for coordinating, facilitating, and reviewing DOT’s programs and activities to identify research duplication and opportunities for joint efforts and to ensure that research, development, and technology activities are meeting intended goals. The European Commission has periodically published a common transportation policy in response to increased ground and air congestion, as well as concerns about the dependence on oil and the level of carbon emissions resulting from the current transportation system. A key component of the European Commission’s transportation policy is improving the connections between air and rail, thereby transforming competition between those modes into complementary service using high-speed train connections located at European airports. The current European Commission transportation policy, adopted in 2011, aims to connect all 37 core airports to the rail network, preferably through high-speed rail, and shift a majority of medium-distance passenger transportation (which the European Commission defines as under 300 kilometers or 186 miles) to the passenger rail network by 2050. 
Beyond these policy differences, our prior work has also noted that differences related to population density, geography, and private automobile use have contributed to differences in the development and use of air-rail connections in Europe compared to the United States. This prior work has highlighted the greater population density of European cities, and the fact that downtowns are major destination points for passengers, as key differences that affect the use of intermodal systems. While some U.S. cities have population densities comparable to European cities, in general, U.S. cities are more decentralized. Furthermore, distances between many major cities in the United States are generally greater than in Europe, which can affect the ability of intercity passenger rail to be competitive with air travel, depending on price and the speed of service. In addition, private automobile use has affected air-rail connections. Specifically, the rate of car ownership is generally higher in the United States compared to Europe, while at the same time, retail gasoline prices in the United States are much lower than in Europe because of substantially lower taxes. Furthermore, in the United States, surface transportation policy has primarily focused on developing and improving highways, while the transportation policies of European countries have placed a greater comparative emphasis on the development of intercity passenger rail and public transportation. Accordingly, people traveling to airports in the United States are more likely than in Europe to drive and park their cars at the airports, which could reduce the demand for (as well as the benefits of) intercity passenger rail connections at U.S. airports. Beyond Europe and the United States, the integration of air travel and intercity passenger rail varies. For example, in Japan, air service and high-speed intercity passenger rail compete and do not complement each other as in Europe. 
The uniqueness of Japan’s transportation system stems from the fact that two-thirds of its population, or almost 100 million people, live in a narrow, densely populated corridor. Furthermore, Japan has nearly 5,600 miles of private tollways, which makes intercity travel by car expensive. In China, the Shanghai Railway Bureau and China Eastern Airlines commenced operations of air-rail combined services in May 2012 to and from Shanghai Hongqiao International Airport, marking China’s first air-rail combined service. The service allows passengers to transfer between domestic or international air services and train operations with a single ticket. Most major U.S. airports have some degree of physical proximity to intercity passenger rail stations; however, few are collocated with rail stations. Specifically, our analysis found that 42 of the 60 large and medium hub airports in the contiguous United States are located within 10 miles of an Amtrak station; 21 of the 42 airports are within 5 miles of a station. (See fig. 1.) Newark Liberty International Airport and Bob Hope (Burbank) Airport are the only airports where passengers can access the Amtrak stations via an automated people mover (Newark) or by walking (Burbank). Airline passengers at Miami International Airport will be able to connect to Amtrak via an automated people mover upon completion of the Miami Central Station in 2014. Amtrak officials noted that, in some locations, it provides service that may operate in close proximity to an airport, but may not have an Amtrak station near that airport. Passengers at the nation’s other major airports have to rely on another transportation mode such as shuttle, taxi, or transit (intracity rail, subway, or bus) to connect to an Amtrak station and some passengers must make multiple connections. 
For example, passengers at Baltimore/Washington International Thurgood Marshall (BWI) and Milwaukee’s General Mitchell International can take a free airport shuttle to and from Amtrak stations, while passengers choosing to take public transportation to access Amtrak from Norman Y. Mineta San Jose International Airport would have to take both a free shuttle and light rail. However, some officials we interviewed told us that passengers are less willing to consider intermodal travel as the number of modes needed to complete a single trip increases. Stakeholders at many of the airports we visited have placed a greater emphasis on intracity connectivity (or connections within a local metropolitan region) to the airport through local rail or other transit, as opposed to connectivity through intercity passenger rail. While a local transit system may provide a connection between an airport and intercity passenger rail, such a connection is generally not the primary goal. For example, at Dallas/Fort Worth International Airport, officials are working with the Dallas Area Rapid Transit agency to provide an intracity rail connection to the airport from downtown Dallas by 2014. Officials noted that an intracity rail connection was preferable to connectivity through Amtrak because of the limited frequency of service provided by Amtrak in the region, among other factors. When the extension is completed, airport passengers would be able to connect to the Amtrak station located in downtown Dallas through the intracity rail connection. Similarly, officials at Norman Y. Mineta San Jose International Airport in California noted that policymakers should focus on connecting intracity rail to their airport, rather than intercity passenger rail, in part, because the San Jose airport is not a hub airport and most of its customers reside in the surrounding San Francisco Bay area. Amtrak and state transportation agencies are considering projects to expand connectivity with airports. 
Amtrak’s strategic plan states that it will increase connectivity with airports in key markets and has established a strategic goal to increase the number of air-rail connections in the Northeast Corridor from two to five by 2015. However, Amtrak officials we spoke with stated that they do not believe Amtrak will achieve this goal because of limited available funding for intercity passenger rail. Some states, such as California, Illinois, and Texas, are looking at options to enhance air-rail connectivity by developing high-speed rail connections at nearby large and medium hub airports. For example, in addition to Illinois’ development of high-speed rail between Chicago and St. Louis, several options for improving Amtrak passengers’ connectivity to Chicago O’Hare International Airport have been proposed. Studies and data, while limited, suggest that relatively few passengers and airport employees use the limited air-rail connections available to travel to and from U.S. airports. Ground access studies have shown that intercity passenger rail is rarely used to connect to airports compared to other modes of transportation. For example, a 2012 study stated that Amtrak accounted for 3 percent of ground access mode share at Newark Liberty International, 2 percent at BWI, and less than 1 percent at Bob Hope Airport. By comparison, another study observed that at some European airports with direct air-rail connections, long-distance intercity passenger rail accounts for 20 to 25 percent of the ground access mode share. However, demand for public transportation options to U.S. airports is generally limited, as the vast majority of passengers still use personal automobiles to access the airport. The only current code-sharing agreement for air and rail travel in the United States is at Newark Liberty International Airport, though code-sharing has been implemented or explored at other airports. 
The code-sharing agreement between United Airlines and Amtrak allows passengers to make reservations with United Airlines for both air and rail travel, and Amtrak provides the connecting service on its trains between Philadelphia, Pennsylvania; Wilmington, Delaware; or Stamford or New Haven, Connecticut; and anywhere United Airlines flies from Newark Liberty International Airport. According to Amtrak data, about 24,000 passengers a year take Amtrak to Newark to connect to United Airlines flights, with 90 percent of those passengers originating from Philadelphia. However, United Airlines representatives pointed out that most passengers at the Newark Liberty International Airport rail station—which Amtrak estimated at over 120,000 passengers in fiscal year 2012—are not traveling through the code-share agreement. No additional code-share agreements are currently planned between Amtrak and the other airlines we contacted. Representatives from the airlines and Amtrak told us that code-sharing agreements are generally most effective when the rail station is located at the airport and within a high-traffic rail corridor, which is the case with Newark Liberty International Airport and the Northeast Corridor. As previously noted, few rail stations are collocated with a major airport. Both airline and Amtrak officials indicated that for code-share agreements, airlines require frequent rail service with minimum passenger transfer time between modes. Amtrak officials stated that they provide that frequency of service in very few markets, generally located on Amtrak’s Northeast Corridor serving highly populated metropolitan areas. We found that air-rail connectivity has the potential to provide a range of mobility, economic, and environmental benefits. 
In our discussions with stakeholders, including state departments of transportation, local transportation-planning organizations, and airlines; our review of academic literature; and the expert opinions obtained from our survey, we found that a general consensus exists that air-rail connectivity can provide a range of mobility benefits for travelers; however, we found less agreement exists on the importance and extent of other types of benefits, including economic and environmental benefits. Table 1 shows the benefits most frequently cited as “very important” by the experts, five of which focus on mobility benefits. However, our review suggests that the particular benefits for a given project are generally site-specific, and depend on the particular characteristics of the rail operators, the airports, and underlying regional characteristics. As a result, the benefits we identified through our work are not generalizable to all air-rail connections. Air-rail connections can potentially provide mobility benefits, such as increased options for passengers connecting to the airport, and improved convenience for airport and airline customers. Specifically, over half of the experts responding to our survey agreed that increasing passenger convenience and travel options were “very important” benefits of air-rail connectivity, and airport representatives cited both benefits as driving factors for intermodal projects at a number of our site visits. For example, representatives at Miami International Airport noted that in the 1980s a lack of ground transportation options, including connectivity to rail, had reduced passenger traffic at the airport. Beginning in 2001, the Florida Department of Transportation began to construct an intermodal center, which will provide passenger access to the airport through multiple ground transportation modes, including intercounty and intercity passenger rail. 
According to airport representatives, directly connecting Amtrak service to the airport will provide an additional option to passengers connecting to the airport and encourage passengers to be more willing to try other non-automotive forms of transportation. Construction of the new Amtrak terminal (Miami Central Station) began in 2011, and representatives anticipate the terminal will be completed in 2014. (See fig. 2.) Furthermore, air-rail connections can provide airport access to commuter trains in addition to intercity trains operated by Amtrak, as many of the Amtrak stations located near airports are served by both types of services. In addition, rail connectivity to airports has the potential to improve the passenger experience traveling to the airport. In particular, half of the experts (22 of 41) rated increased reliability of travel to the airport, and nearly half (18 of 40) rated reductions in the travel time to and from the airport as very important benefits of air-rail connections. Representatives from the airlines and airports we interviewed noted that their employees might also similarly benefit from an air-rail connection, specifically by providing increased options to and from the airport and improved convenience for airport and airline employees. However, representatives from one airline cautioned that the extent of any benefits would depend upon the cost of the air-rail connection and how such a connection was funded. Air-rail connections also have the potential to provide economic benefits for some transportation operators, such as an increased customer base. We found that some of the experts (16 of 40) participating in our survey and a majority of the stakeholders at six of our eight site visits highlighted the potential for intercity rail to access populations outside of the major metropolitan area served by a large or medium hub airport. 
Specifically, the experts and stakeholders noted that an air-rail connection may increase an airport’s or airline’s passenger base by attracting additional passengers from outside an airport’s local market, thus potentially generating additional revenue for airports and airlines in that metropolitan area. Some studies suggest that the existence of an air-rail connection affects a passenger’s choice of airport in areas where multiple options exist. In particular, a recent study of passengers using Amtrak to connect to General Mitchell International Airport in Milwaukee found that approximately one-third of passengers reported that they would have used one of the two Chicago area airports if the Amtrak-Mitchell Airport connection was not available. In addition, Amtrak service can also complement existing rail connections made by commuter rail, offering additional frequencies between points served by the commuter trains. However, where transit already offers a connection between a city center and airport, stakeholders at two of our eight site visits noted that an intercity passenger rail connection to the airport may potentially compete with transit service in the same area, thus limiting any increase in airport or airline customers and benefits from enhanced connectivity. In addition, air-rail connectivity could allow for the substitution of rail service for short-haul flights, freeing up capacity for long-haul flights and reducing airport and airspace congestion, though the importance of this benefit varies depending on the airport and the rail service’s operating characteristics. Specifically, nearly half of the experts (19 of 41) in our survey and stakeholders at three of our eight site visits noted that the potential replacement of short-haul flights by rail was a “very important” potential benefit of air-rail connectivity. 
Our prior work has found that intercity passenger rail, particularly high-speed rail, could serve as a substitute for air service for distances of up to 500 miles. Our previous work on intercity passenger rail has found that for rail transportation to capture the market share necessary to reduce air travel congestion, the distance between cities must be short enough to make rail travel times competitive with air travel times (at comparable costs and levels of comfort). In practice this has been observed to a great extent in the Northeast Corridor, where a number of major urban areas are located within close proximity and where there are significant constraints on the capacity within the air transportation system. For example, Amtrak’s share of the air-rail market for trips between Washington, D.C., and New York City has increased from 37 percent to 75 percent since the introduction of the higher speed Acela Express service in 2000. However, studies of air-rail connections in other countries suggest that the complete abandonment of air service in response to the introduction of rail service serving the same markets is rare. Furthermore, this benefit may be limited given that most airports in the United States are not currently capacity-constrained, though we have previously reported that FAA projects that a number of airports will be significantly capacity-constrained and thus congested within the next 15 years. For example, officials from Chicago O’Hare International Airport stated that because their airport is not capacity-constrained, the benefits from a direct connection with Amtrak would be limited. Amtrak officials noted that they are exploring options to connect to Chicago O’Hare International Airport, but noted that it was premature to speculate on the benefits of such a connection, particularly given Amtrak’s ongoing efforts to upgrade track speeds to major cities from Chicago. 
Over one-third of the experts participating in our survey rated environmental benefits, including reduced carbon emissions (17 of 41), and reduced energy use (15 of 40), as “very important” benefits of air-rail connectivity. For the European Commission, enhancing air-rail connectivity has been embraced as part of its strategy to reduce greenhouse gases, including carbon emissions, by 60 percent by 2050 while improving mobility. However, academic studies vary on the extent to which environmental benefits can be achieved from increased air-rail connectivity. For example, energy savings from high-speed rail connectivity may depend, in part, on the extent that passengers use rail to connect to the airport rather than other automotive transportation. Studies have also suggested that the substitution of long-distance flights for short-haul flights that have been replaced by rail service could potentially increase carbon emissions. Expanding the current intercity passenger rail network and connecting it to airports would be expensive. However, the costs of facilitating connections between intercity passenger rail stations and airports could vary significantly, depending in part on the complexity and scope of the project. (See table 2.) Air-rail connectivity efforts may be as simple as providing shuttle bus service between the Amtrak station and the airport terminal or as complex as relocating the intercity passenger rail station closer to the airport and integrating it into a multimodal transportation center. For example, BWI Airport operates a free passenger shuttle between the nearby Amtrak station and the airport terminal, at a cost of $2 million per year. In addition to the shuttle service, the Maryland Transit Administration has used $9 million from the HSIPR grant program to make BWI Airport Amtrak station improvements, including planning for track and rail station upgrades. 
In contrast, the development of the Miami Intermodal Center—which includes construction of a rail station collocating Amtrak, commuter rail, and heavy rail transit access at Miami International Airport, a rental car facility, and an automated people mover—is estimated to cost approximately $2 billion. Depending upon the scope of new infrastructure, project costs may include constructing stations, structures, signal systems, power systems, and maintenance facilities; relocating utilities; and obtaining rights-of-way, among other things. In addition to infrastructure costs, ongoing operation and maintenance costs can be high for states and local transportation agencies. For example, airport officials estimate that the automated people mover system that connects Newark Liberty International Airport and the nearby Amtrak station costs $26 million per year to operate and maintain. Furthermore, PRIIA requires that operating and capital costs be allocated among the states and Amtrak in connection with the operation of certain Amtrak routes. Absorbing such costs could be challenging for states and localities as they continue to face near-term and long-term fiscal challenges resulting from increasing gaps between revenue and expenditures. In addition to the direct financial costs of constructing, operating, and maintaining air-rail connections, economic costs may arise due to impacts on other transportation modes. For example, representatives from the Association of American Railroads noted that there is limited additional capacity on the freight rail lines shared between Amtrak and the freight railroads. Accordingly, these representatives stated that any additional intercity passenger traffic initiated to enhance air-rail connectivity on existing freight rail lines could increase the cost and reduce the timeliness of freight shipped on these lines. 
In such an event, Amtrak and the freight railroads may have to revisit agreements over the usage of the freight rail lines, which can be a lengthy and costly process for all stakeholders. Alternatively, Amtrak or other intercity passenger rail service operators may need to acquire additional right-of-way and construct additional tracks to accommodate increased connectivity between airports and intercity passenger rail, which, as discussed previously, could increase the cost of providing air-rail connectivity. Similarly, representatives from two of the four airlines we interviewed stated that developing intercity passenger rail service that provides an alternative to air travel could affect their profitability. As with many large capital projects, committing financial resources for air-rail projects may also impose opportunity costs as a result of delaying or deferring other projects or initiatives. Specifically, the financial cost of air-rail connectivity projects could affect the ability of governmental entities to pursue other types of transportation projects, particularly in the current fiscal environment. For example, one airline representative we interviewed noted that air travel is in direct competition for resources with other modes of transportation and suggested that any federal funds provided to enhance air-rail connectivity could come at the expense of funding for other programs, including the Next Generation Air Transportation System (NextGen) air traffic control modernization initiative. Given the high potential costs of air-rail connections, it is likely that only a limited number of places could demonstrate potential benefits high enough to justify improved air-rail connectivity investments. 
For example, if air passengers could access a nationwide rail network directly at an airport, some passengers might travel to that airport from other cities by train rather than on highways or short-haul flights, which might reduce highway or airport and aviation congestion. However, the demand for such service is likely to be low except in a few highly congested travel corridors, such as the Northeast Corridor, where the distances are short enough to make rail travel times competitive with air travel times. At airports that do not have substantial highway or airport congestion, such benefits would not be realized. There might still be some emission and energy benefits, but since the number of travelers likely to use these facilities at such airports is limited, these benefits will be limited as well. Amtrak officials noted that costs and benefits are relative to the scope and complexity of each air-rail connectivity option. For example, they noted that providing an air-rail connection that serves both intercity and local commuter rail, such as those provided by many of Amtrak’s airport-adjacent stations, can provide benefits that might not be justified if the station was served only by intercity rail. Furthermore, Amtrak officials noted that exploring air-rail integration early during the planning and development of an airport can help reduce the overall cost of developing air-rail connectivity, while still achieving substantial mobility benefits. 
Based on input from our expert survey; discussions with stakeholders, including state departments of transportation, local transportation planning organizations, airports, and airlines; and our review of academic literature, we identified five categories of factors that can greatly affect air-rail connectivity: the degree of leadership and collaboration among stakeholders, resource availability, the extent of passenger demand for air-rail connectivity, the ease of the air-rail connection, and the passenger rail service operating characteristics. (See table 3.) The degree of leadership and the extent of stakeholder collaboration across air-rail projects can affect project development. Specifically, almost half of the experts (18 of 40) rated the lack of leadership as greatly hindering air-rail connections. Stakeholders we interviewed during our site visits told us that when there is an absence of leadership, stakeholders are unlikely to assume roles outside of their typical responsibilities and interests, a limitation that makes project development more difficult. Conversely, leadership that helps build bridges across stakeholder groups can help develop a shared vision and foster collaboration, thereby facilitating project development. However, we found there is limited federal leadership for air-rail projects, and no modal administration has a primary responsibility to oversee air-rail projects, as responsibilities for transportation projects are segmented by mode. Furthermore, according to an academic study and stakeholders we interviewed, the United States lacks a national policy framework and vision to guide investment in the infrastructure needed to develop air-rail connections. For example, FRA’s High-Speed Rail Strategic Plan does not address connectivity between airports and intercity passenger rail. 
In addition, while DOT’s 2012-2016 strategic plan broadly discusses connectivity between airports and intercity passenger rail, DOT has not established any specific goals for air-rail connectivity. This is consistent with our previous work that concluded that the absence of specific national goals to develop intermodal capabilities at airports is a significant barrier to developing air-rail connections. For example, half of the experts (20 of 40) rated integration of air-rail connections into an overall, multi-modal transportation plan or strategy as an approach that would greatly facilitate air-rail connectivity in the United States. In addition, officials we interviewed and over half of the experts (23 of 39) said that communication, collaboration, and consensus among stakeholders such as airlines; rail operators; airport management; and local, state, and federal government officials could greatly facilitate air-rail connectivity. Resource availability, including funding, right-of-way, and access to existing infrastructure, can greatly affect the development of air-rail connectivity. As previously noted, the costs of linking existing intercity passenger rail infrastructure and airports can be significant, depending in part on the complexity and scope of the project. Slightly over half of the experts (21 of 40) rated the financial cost of a project as greatly hindering project development, while nearly three-fourths (29 of 40) rated availability of funding as greatly facilitating project development. In addition, about two-fifths of the experts (16 of 39) rated the level of funding for intercity passenger rail as a very important factor contributing to differences in air-rail connectivity development and use between the United States and Europe. We found a number of barriers exist to securing funding for air-rail connectivity projects. 
For example, transportation officials and stakeholders we interviewed told us that the limitations on use of funds from federal grants and airport revenue collected from passenger facility charges are significant barriers. Furthermore, as noted previously in this report, the federal government does not provide funding dedicated to the development or operation of air-rail connections. If the trend of decreasing federal transportation funding over the past three decades continues, air-rail project sponsors may need to increasingly rely on state funds for air-rail connection projects. In addition, our prior work also identified challenges of funding intercity passenger rail projects. The federal government has recently begun to pursue investment in high-speed passenger rail through the FRA’s HSIPR grant program, and to date has obligated about $9.9 billion for 150 high-speed and intercity passenger rail projects from funds appropriated in fiscal years 2009 and 2010—with more than one-third of the amount obligated designated for the high-speed rail project in California. While this funding will allow many projects to begin construction, it is not sufficient to complete them. Furthermore, Congress has not appropriated any funding for the HSIPR program since fiscal year 2010. The availability of other resources can also greatly affect the development of air-rail connectivity projects. Three-fifths of the experts (24 of 40) rated the lack of availability of land or physical space for direct air-rail projects, including the lack of existing intercity passenger rail infrastructure (e.g., tracks and stations) and rights of way, as factors that greatly influence the development of air-rail connections. Passenger demand for air-rail connectivity has a significant role in developing and using such connections. 
Approximately half of the experts rated passenger volume and demand as a factor that can either greatly facilitate (if sufficient) (21 of 39) or hinder (if lacking) (20 of 40) air-rail connectivity projects. However, as mentioned previously in this report, there is limited data on the demand for intercity passenger rail. Furthermore, it is often difficult to estimate ridership demand. As we have previously reported, limited data and information, especially early in a project before specific service characteristics are known, make developing reliable ridership demand forecasts difficult. Research on ridership forecasts for rail infrastructure projects around the world has shown that ridership forecasts are often overestimated. Furthermore, there are no industry standards or established criteria for developing or evaluating intercity passenger and high-speed rail ridership forecasts. Connections that are easy to use and provide a direct link between the airport terminal and the rail station can greatly affect the development of air-rail connectivity. Over three-quarters of the experts (31 of 40) rated close proximity between the airport terminals and rail stations as greatly facilitating air-rail connectivity. Officials we interviewed noted that air-rail connections should be designed to meet the needs of airport and intercity passenger users. Accordingly, they underscored that connections should be designed to make the experience as easy and seamless as possible for the traveler. Similarly, over half of the experts (21 of 39) rated the availability of information, including signage, about a connection as greatly facilitating air-rail connectivity. We found 20 of the 60 major airports in the contiguous United States included information about Amtrak on their respective websites, and 14 of the 20 airports provided specific instructions on how passengers could connect to or from Amtrak. 
Nearly two-thirds of the experts (26 of 40) and many of the stakeholders at our site visits cited frequency and reliability of rail service as factors that greatly influence air-rail connectivity. Stakeholders we interviewed noted that for the air-rail connection to be viable, the passenger rail operator needs to provide frequent service to multiple locations beyond the airport. The frequency of Amtrak service is highly variable across the nation. Similarly, a number of stakeholders we spoke with noted that the reliability of Amtrak service, specifically its on-time performance, affects the use of intercity passenger rail for travel, both between cities and to and from the airport. In addition, over half of the experts (25 of 40) rated the availability of high-speed intercity passenger rail service to connect to an airport as greatly facilitating an air-rail connectivity project. However, representatives from three of the four airlines we interviewed viewed high-speed rail as a potential competitor in diverting passengers away from, as opposed to feeding into, the airport. Experts participating in our survey suggested five key areas where implementing strategies could help improve air-rail connectivity: vision, coordinated planning, funding, infrastructure, and awareness and marketing of connections. We asked these experts to identify potential strategies, and then rate these strategies in terms of both their importance and their feasibility. Some of the strategies that experts rated as more important were also seen as less feasible. (See table 4.) In discussing these strategies with other stakeholders and reviewing academic studies, we found that a number of strategies were interrelated. For example, some of the strategies that experts suggested to improve connectivity, such as increasing connections with other transportation modes, could be related to the implementation of other strategies, such as providing additional funding for air-rail connections. 
Experts stated that additional study of the demand for air-rail connectivity, as well as lessons learned in other countries, could help Amtrak and DOT clarify needs and develop priorities within their existing goals related to enhancing connectivity. Connectivity across modes has been emphasized broadly by DOT and Amtrak, though both have placed limited emphasis on connectivity between airports and intercity passenger rail. For example, in its 2012-2016 strategic plan, DOT’s goal of encouraging livable communities emphasizes connectivity across modes, and identifies connectivity between intercity passenger rail and transit and continued investment in the intercity passenger rail network as means to achieve that goal. DOT’s strategic plan also notes that DOT will continue to work with Amtrak, states, freight railroads, airports, and other key stakeholders to ensure intercity passenger rail is effectively integrated into the national transportation system, though the department has not established any specific goals for air-rail connectivity. Similarly, DOT’s most recent update to its national rail plan, published in September 2010, encourages the integration of policies and investments across modes, including air transportation, to provide convenient options for accessing the passenger rail network, but does not establish specific goals or timelines for increasing air-rail connectivity. Amtrak’s strategic plan has set a goal of connecting to three additional airports in the Northeast Corridor by 2015 as part of its efforts to increase intercity passenger rail connectivity with other travel modes in key markets, but Amtrak officials we spoke with stated that they do not believe Amtrak will achieve this goal because of limited available funding for intercity passenger rail. 
Should DOT, Amtrak, or Congress choose to develop a more comprehensive approach to air-rail connectivity, experts we surveyed identified further study of passenger preferences and demand as one of the most important and most feasible steps policymakers could take to improve air-rail connections. For example, half of the experts (20 of 40) rated additional study of ridership preferences across all modes as very important to informing the federal government’s air-rail strategy. As previously noted, limited data on passenger preferences and demand for air-rail connectivity exist. For example, one expert emphasized that because passenger demand for air-rail connectivity varies across the country, additional study of passenger preferences at the local level could help identify approaches tailored to the specific needs of the area, noting that there is no “one size fits all” approach to air-rail connectivity. Furthermore, 24 of 40 experts rated studying lessons learned and policy responses from other countries as “very important” to improving understanding of air-rail connectivity issues, though as previously discussed, air-rail connectivity approaches vary widely outside the United States. Experts in our survey and stakeholders at seven of our eight site visits highlighted the importance of coordinated transportation planning between airports and intercity passenger rail, which could help stakeholders develop multimodal solutions and facilitate problem solving. Amtrak officials noted that if airports, Amtrak, and other transportation stakeholders begin to plan for integration early, the costs of connecting air and rail transportation become part of a larger intermodal strategy and can provide benefits. Accordingly, both Amtrak officials and experts highlighted the importance of planning an intercity passenger rail connection as part of an overall ground access strategy. 
For example, 17 of 40 experts rated planning air-rail connections to the airport during the initial establishment of intercity passenger rail service as very important. Amtrak officials noted that planning for intercity rail connections at airports during the initial development of the airport can help minimize the incremental cost of making a connection while providing substantial benefits from air-rail connectivity. However, in many locations, particularly in the Northeast Corridor, the rail network was developed decades before the airport. In addition, such an approach may not be feasible, as federal funding and oversight are segmented by mode, a segmentation that can lead to competition, rather than collaboration, for funding. Furthermore, collaboration across stakeholder groups can be a time-intensive process and may not necessarily change the willingness of stakeholders to collaborate. Experts we surveyed and stakeholders we interviewed at six of our eight site visits highlighted the importance of securing funding for air-rail connectivity projects. Because of the often substantial cost of the physical infrastructure to support air-rail connections, stakeholders at four of our eight site visits noted that the federal government may have to provide most of the funding to make development possible. Over half of the experts in our survey (22 of 41) as well as other stakeholders at five of our eight site visits suggested that dedicated funding for air-rail connections could help increase the number of connections between airports and intercity passenger rail. Alternatively, nearly half (17 of 41) of the experts in our survey suggested that increased funding for intercity passenger rail is a very important strategy related to increasing Amtrak’s ability to connect to airports. 
However, the current fiscal environment presents challenges to increasing federal funding for discretionary programs, though some existing grant and loan programs—such as the HSIPR, Transportation Investment Generating Economic Recovery (TIGER), and Transportation Infrastructure Finance and Innovation Act of 1998 (TIFIA) programs—have some flexibility to fund air-rail connections if such a connection is a state or local priority. As previously noted, additional funding for air-rail connections could require tradeoffs with other transportation projects. With limited existing funds available for air-rail projects, two stakeholders we interviewed suggested that the federal government should focus on a few air-rail projects of national significance, rather than a number of smaller projects throughout the entire nation. Similarly, one stakeholder suggested that the federal government provide money for a few projects to demonstrate the potential benefits of air-rail connectivity, before moving forward on a nationwide program. Stakeholders at four of our eight site visits also suggested that providing additional flexibility in permitted expenditures among existing federal programs could help improve airport connectivity via rail. In particular, they suggested changes to the airport passenger facility charge authority as well as to the AIP grant program. Among the funding strategies evaluated in our expert survey, experts generally rated the strategy of relaxing the restrictions on passenger facility charges among the most feasible strategies. Airport operators may currently use funds collected from air passengers through passenger facility charges to fund rail access at airports, if the project is owned by the airport, located on airport property, and used exclusively by airport passengers and employees. However, easing these restrictions on use of passenger facility charges faces obstacles. 
Specifically, use of passenger facility charge revenues is limited by law to airport-related projects. Such a change would require legislative action by the Congress, and changes to the passenger facility charges program have been opposed by the airline industry. For example, representatives from one airline we spoke with stated that the airline was fundamentally opposed to using funds collected through passenger facility charges to pay for airport and intercity passenger rail connections because, in their view, the federal government should not tax airline passengers to fund other transportation modes. Stakeholders at three of the eight airports we spoke with suggested that Congress could allow additional flexibility in the use of funds from transportation grant programs, including the AIP program, which is funded through a variety of aviation excise taxes. While AIP grants may currently be used to fund projects promoting air-rail connectivity on airport property, AIP funds, like passenger facility charge revenues, may be used only for airport-related projects. Again, however, airlines we spoke with opposed easing existing limitations on the use of AIP grants for airport projects that may benefit non-aviation passengers, and any change to the AIP program to broaden the use of these grants would require congressional action. Furthermore, as previously noted, the commitment of financial resources for air-rail projects may also impose opportunity costs as a result of canceling or delaying other projects or initiatives that could be funded by these federal programs. Experts in our survey suggested that increasing the size and operation of the existing intercity passenger rail network could help encourage the development and use of intercity passenger rail to access airports. 
Specifically, 23 of 39 experts cited the size and the extent of the intercity passenger rail network as a very important factor resulting in differences between air-rail connections in the United States and Europe. Accordingly, over two-thirds of the experts in our survey (27 of 40) suggested that developing rail connections to transit and other forms of public transportation could help encourage the use of rail to the airport, and over half of the experts (22 of 40) stated that additional connections to city centers and urban attractions are very important strategies to consider. DOT has taken some steps to increase the intercity passenger rail network, most notably through the HSIPR grant program, which, FRA officials noted, placed emphasis on using funds available for intercity passenger rail infrastructure to establish and enhance connections between major metropolitan areas. Additionally, stakeholders we interviewed at six of our eight sites noted that increasing the frequency of intercity passenger service in existing corridors could encourage greater use of rail to connect to the airport. For example, one stakeholder noted that passengers are much less likely to use rail if departure times are hours apart, as opposed to minutes. However, even in corridors that have existing intercity passenger rail service, increasing the frequency of service can be challenging due to both the cost and, as previously discussed, the shared usage of the infrastructure with the freight railroads. Furthermore, as discussed previously, stakeholders we spoke with stated that there is limited demand for public transportation options to connect to the airport, and thus it is unclear whether increasing the frequency of service will increase passenger use of intercity rail service to connect to airports. 
While building the infrastructure to support new air-rail connections can be expensive and time-intensive, our work identified a few low-cost options that could help increase passenger awareness, and thus usage, of existing air-rail connections. For example, Amtrak station operators and airport officials could take steps to increase awareness of existing connections between the two modes, using additional or more prominently placed signage and information kiosks. At the BWI Airport Amtrak Station, for instance, signs and information direct customers exiting the station platform to the bus shuttle service connecting the two modes. (See fig. 3.) Similarly, in Burbank, officials stated that the use of signage highlighting the walking path between the Burbank rail station and the airport has helped, in part, to make the connection between the two modes easier for passengers to use. These officials also noted that even with signage, an air-rail connection often requires frequent and reliable service from an intercity passenger rail operator. As another option, Amtrak could highlight the connections to the airport from each station on its website, thus providing an additional source of information to travelers beyond what is available at the airport or rail station. We provided a draft of this product to DOT and Amtrak for comment. DOT and Amtrak provided technical comments on the draft, which we incorporated as appropriate. DOT and Amtrak did not have any comments on the e-supplement. We are sending copies of this report to the Secretary of Transportation, the President of Amtrak, and the appropriate congressional committees. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix V. This report addressed the following objectives: (1) the nature and scope of existing air-rail connectivity in the United States; (2) the benefits and costs of developing air-rail connectivity; (3) the factors that facilitate and hinder the development and use of air-rail connectivity; and (4) potential strategies, including lessons learned from other countries, that may help inform deliberations regarding air-rail connectivity policy. This report focused on air-rail connections between an airport terminal and an intercity passenger rail station. In other contexts, an air-rail connection may refer to a connection between an airport terminal and an intracity rail station that serves other forms of local rail, such as commuter rail or a subway system. To address our objectives, we obtained and analyzed information from a variety of sources. We reviewed and synthesized information from our body of work and relevant academic literature on intermodal transportation, air-rail connectivity, and air-rail code share agreements in the United States and internationally. We reviewed citations identified through a search of databases containing peer-reviewed articles, government reports, and “gray literature,” including Transport Research International Documentation, Social SciSearch, and WorldCat. Publications were limited to the years 2004 through 2012. After an initial review of citations, 48 articles were selected for further review. To collect information on the articles, we developed a data collection instrument to gather information on the articles’ scope and purpose, methods, findings, and their limitations, and additional areas for follow-up, including a review of the bibliography to determine the completeness of our literature search. To apply this data collection instrument, one analyst reviewed each article and recorded information in the data collection instrument. 
A second analyst then reviewed each completed data collection instrument to verify the accuracy of the information recorded. We summarized the findings and limitations of the articles based on the completed data collection instruments, as well as areas for additional research identified in the articles. We also reviewed federal laws related to air and intercity passenger transportation and strategic plans from Amtrak and the Department of Transportation (DOT). We interviewed officials from DOT and Amtrak, transportation experts, and representatives from U.S. airlines and industry associations to obtain their perspectives on air-rail connectivity issues. We reviewed completed, ongoing, and future air-rail connectivity efforts at eight airports in the United States, and interviewed a variety of stakeholders at each site, including airport authorities, state and local transportation agencies, local transportation planning organizations, and air and rail industry associations. (See table 5.) These airports were selected to include airports that have recently planned, constructed, or completed an air-rail project and are dispersed in various regions of the country. These sites constituted a judgmental, non-probability sample of air-rail connectivity efforts at airports, and our findings cannot be generalized to all airports. We also analyzed Amtrak’s distance and connectivity to the 28 large and 32 medium hub airports located in the contiguous United States based on the 2011 Federal Aviation Administration’s Air Carrier Activity Information System database. We limited our analysis to these 60 airports because they accounted for approximately 86 percent of U.S. passenger enplanements for calendar year 2011. We determined the linear distance for each of the 60 airports and the nearest Amtrak station based on information from the Bureau of Transportation Statistics and the National Transportation Atlas Database for 2012. 
Because both are widely accepted federal statistical data sources, we determined these data to be generally reliable for our purpose, which was to provide context on existing air-rail connectivity. Linear distance is the distance measured between two points using their latitude and longitude. This may understate the distance a passenger may have to travel because it does not account for actual travel routes (e.g., a route that crosses a bridge or avoids buildings or other obstacles along the passenger’s route). The actual distance that a passenger may travel also depends on the selected transportation mode, local roads, or route selected. We used the linear distance calculations to determine the number of airports with an Amtrak station within 5, 10, 20, and over 20 miles. (See app. IV.) To determine the modal connectivity between airport and Amtrak stations, we systematically reviewed the airport websites’ ground transportation page and Amtrak System Timetable for Winter/Spring 2013 for information on how passengers can access Amtrak to and from the airports. To obtain additional insight on issues related to air-rail connectivity, we collaborated with the National Academy of Sciences to identify 25 experts from the aviation and rail industries, Amtrak, state and local governments, academia, and the private sector. These experts were selected based on their knowledge of one or more of the following topic areas: intermodalism, airlines and the air travel industry, airport operations, the rail industry, and passenger travel. We identified 17 additional experts in these fields through a review of academic literature, our previous work, and interviews with stakeholders. (See app. II for a list of these experts.) 
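The linear-distance approach described above can be illustrated with a short sketch. The code below is an assumption: it computes great-circle distance from latitude/longitude pairs using the haversine formula and buckets the result into the report's distance bands, but it is not the actual computation GAO performed.

```python
import math

def linear_distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle ("linear") distance in statute miles between two
    latitude/longitude points, computed with the haversine formula.
    Like the report's measure, this ignores actual travel routes."""
    r = 3958.8  # mean Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def distance_band(miles):
    """Bucket an airport-to-station distance into the report's bands
    (within 5, 10, 20, and over 20 miles)."""
    if miles <= 5:
        return "within 5 miles"
    if miles <= 10:
        return "more than 5 to 10 miles"
    if miles <= 20:
        return "more than 10 to 20 miles"
    return "over 20 miles"
```

Applying `linear_distance_miles` to each airport-station pair and tallying `distance_band` over the 60 airports would reproduce the kind of counts reported in appendix IV.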
We conducted a web-based survey in which we asked these 42 experts for their views on the benefits of air-rail connectivity, factors that facilitate and hinder the development and use of air-rail connectivity, differences between air-rail connectivity in the United States and Europe, and strategies that could improve air-rail connectivity. We employed a modified version of the Delphi method to organize and gather these experts’ opinions. Experts were sent an email invitation to complete the survey on a GAO web server using a unique username and password. The survey was conducted in two stages. The first stage of the survey— which ran from January 16, 2013, to February 19, 2013—asked the experts to respond to five open-ended questions about various aspects of air-rail connectivity based on our study objectives. To encourage participation by our experts, we stated that responses would not be individually identifiable and that results would generally be provided in summary form. We received a 95 percent (40 of 42) response rate for the first stage of the survey. After the experts completed the open-ended questions, we performed a content analysis of the responses to identify the most important issues raised by our experts. Two members of our team independently categorized experts’ responses to each of the questions. Any disagreements were discussed until consensus was reached. We analyzed the responses provided by the experts and developed close-ended questions for the second stage of the survey where we asked each expert to evaluate the ideas and other information that came from the first part of the survey. Because this was not a sample survey, it had no sampling errors. However, the practical difficulties of conducting any survey can introduce non-sampling errors, such as difficulties interpreting a particular question, which can introduce unwanted variability into the survey results. 
We took steps to minimize non-sampling errors by pre-testing the questionnaire with 5 experts. We conducted pretests to help ensure that the questions were clear and unbiased, and that the questionnaire did not place an undue burden on respondents. An independent reviewer within GAO also reviewed a draft of the questionnaire prior to its administration. We made appropriate revisions to the content and format of the second survey questionnaire based on the pretests and independent review. The second stage of the survey was administered on the Internet from March 25, 2013, to May 15, 2013. To increase the response rate, we followed up with emails and personal phone calls to the experts to encourage participation in our survey. We received responses from 41 of 42 experts, resulting in a 98 percent response rate. The information and perspectives that we obtained from the expert survey may not be generalized to all experts that have an interest or knowledge of air-rail connectivity issues. The full survey and responses are available at GAO-13-692SP. We provided a draft of this report to Matthew A. Coogan, director of the New England Transportation Institute for review and comment, based on his expertise on air-rail connectivity issues similar to those in our report. Mr. Coogan was selected based on his extensive past and on-going research on similar topics related to air-rail connectivity issues in the United States. He provided technical comments, which we incorporated as appropriate. We conducted this performance audit from August 2012 to August 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Affiliations (Appendix II, partial): Resource Systems Group, Inc.; LeighFisher, Inc.

Appendix III: Examples of Potential Federal Financing and Funding Sources for Air-Rail Projects

Airport Improvement Program (AIP): Provides grants to airports for planning and developing projects through the Federal Aviation Administration (FAA). The program is funded, in part, by aviation user excise taxes, which are deposited into the Airport and Airway Trust Fund. In terms of promoting air-rail connections, these funds may be used for projects that are on airport property or right-of-way owned or controlled by the airport, airport-owned, and exclusively serve airport traffic. In fiscal year 2013, this program was funded at $3.1 billion; for fiscal year 2011, $400 million in unobligated funds were rescinded. Example of use for air-rail projects: GAO found no example of its use for air-rail projects.

Passenger facility charges: Authorizes commercial service airports, after obtaining FAA approval, to charge airline passengers a boarding charge of up to $4.50, collected by the airlines. The fees are used by the airports to fund FAA-approved projects that are on airport property, airport-owned, and exclusively serve airport traffic. These projects must enhance the safety, security, or capacity of air travel; reduce the impact of aviation noise; or increase air carrier competition. In calendar year 2012, $2.8 billion in fees were collected under this program. Example of use for air-rail projects: GAO found no example of its use for air-rail projects.

Railroad Rehabilitation and Improvement Financing (RRIF): Provides direct loans and loan guarantees to railroads, state and local governments, and Amtrak, among other entities, to finance the development of railroad infrastructure, including the development of new intermodal or railroad facilities. The program, administered by FRA, is authorized to provide up to $35 billion in loans or loan guarantees for eligible projects. Example of use for air-rail projects: GAO found no example of its use for air-rail projects.

Transportation Investment Generating Economic Recovery (TIGER): Provides discretionary grants through DOT, awarded on a competitive basis, to fund merit-based transportation projects expected to have a significant impact on the nation, a metropolitan area, or a region. Each project is multi-modal, multi-jurisdictional, or otherwise challenging to fund through existing programs. Eligible projects include capital investments in roads, highways, bridges, or transit; passenger and freight rail; and port infrastructure; as well as bicycle and pedestrian-related improvements. In fiscal year 2013, this program was funded at $474 million. Example of use for air-rail projects: GAO found no example of its use for air-rail projects.

Transportation Infrastructure Finance and Innovation Act (TIFIA): Provides federal credit assistance for surface transportation projects jointly through the Federal Highway Administration, Federal Transit Administration, and FRA. Project sponsors may include public, private, state, or local entities. Projects eligible for credit assistance include intercity passenger rail facilities and vehicles, such as those owned by Amtrak, as well as projects otherwise eligible for federal assistance through existing surface transportation programs. In fiscal year 2013, this program was funded at $750 million. Example of use for air-rail projects: Miami Intermodal Center at Miami International Airport.

Notes: In fiscal year 2013, approximately $3.4 billion was made available for obligation for the AIP program. On May 1, 2013, the Reducing Flight Delays Act of 2013 was enacted; it authorized the Secretary of Transportation to transfer an amount, not to exceed $253 million, from the AIP program to the FAA operations account that the Secretary of Transportation determines to be necessary to prevent reduced operations and staffing of the FAA during fiscal year 2013. Pub. L. No. 113-9, 127 Stat. 443. 23 U.S.C. §§ 601-609. 
Appendix IV (airports by distance to nearest Amtrak station, partial): more than 5 miles to 10 miles — 21 airports.

In addition to the contact listed above, Teresa Spisak (Assistant Director), Matt Voit, Rosa Leung, Paul Aussendorf, Leia Dickerson, Patrick Dudley, Lorraine Ettaro, Jessica Evans, Kathleen Gilhooly, Delwen Jones, Richard Jorgenson, Jill Lacey, John Mingus, and Josh Ormond made major contributions to this product.
Increasing passenger travel has led to growing congestion in the nation's air transportation system, and projections suggest that this trend is likely to continue. The integration of air and intercity passenger rail service, which is provided in the United States by Amtrak, has been suggested by some transportation experts as a strategy to increase mobility and reduce congestion in the United States. The FAA Modernization and Reform Act of 2012 mandated that GAO review issues related to air-rail connectivity. This report discusses (1) the nature and scope of air-rail connectivity, (2) the benefits and costs of air-rail connectivity, (3) factors affecting the development and use of air-rail connectivity, and (4) potential strategies to improve air-rail connectivity. GAO reviewed laws, strategic plans, and academic studies. GAO analyzed data to determine distances between Amtrak stations and large and medium hub airports and interviewed officials from DOT, and representatives from Amtrak, the airlines, and aviation and rail industry associations. GAO interviewed stakeholders at eight large and medium hub airports, which were selected based on geographic location and extent of connectivity with Amtrak. In addition, GAO surveyed experts from the aviation industry, rail industry, state and local governments, academia and the private sector about air-rail connectivity issues. The survey and results can be found at GAO-13-692SP . GAO is not making recommendations in this report. DOT and Amtrak provided technical comments, which were incorporated as appropriate. Most major U.S. airports have some degree of physical proximity to intercity passenger rail stations, though only 2 airports are currently collocated with intercity rail stations. Specifically, 42 of the nation's 60 large and medium hub airports are located within 10 miles of Amtrak stations; 21 of the 42 airports are within 5 miles of Amtrak stations. 
At the 2 collocated airports, passengers can access Amtrak either via an automated people mover (Newark Liberty International Airport) or by walking (Bob Hope Burbank Airport). At some airports, such as Baltimore/Washington International Thurgood Marshall Airport, passengers can take a direct shuttle between the airport and the nearby Amtrak station, while at other airports, connections to Amtrak can be made through other modes of transportation. Studies and data, while limited, suggest that relatively few passengers in the United States use intercity rail to travel to and from the airport or through more integrated travel such as code-sharing agreements, whereby airlines sell tickets for Amtrak's service. The only existing air-rail code-sharing agreement in the United States is at Newark Airport. Amtrak and states are considering projects to expand intercity rail connectivity with airports, including as part of the construction of high-speed rail in California. Air-rail connectivity may provide a range of mobility, economic, and environmental benefits, though the financial costs of building these connections could be substantial. Specifically, based on discussions with industry stakeholders, input from surveyed experts, and a review of academic literature, GAO found a general consensus that air-rail connectivity can provide a range of mobility benefits for travelers, though less agreement existed on the importance and extent of economic and environmental benefits. However, achieving these benefits could require significant trade-offs, because the costs of expanding the existing intercity passenger rail network and constructing viable connections can be significant. Given these costs, based on GAO's work, there are currently limited locations where benefits are high enough to justify funding to improve air-rail connectivity. 
Air-rail connectivity remains limited in the United States, according to experts, as a result of institutional and financial factors, among other things. In particular, the limited nature of the existing intercity passenger rail network, including the frequency of service and connectivity to other transportation modes, remains an obstacle to developing and using air-rail connections. Securing funding for air-rail projects also remains a barrier. While funds from some federal grant programs can be used to help facilitate air-rail connections, there is no single funding source for air-rail projects. There are strategies to improve air-rail connectivity, but adopting them involves trade-offs. Experts generally focused on, among other things, leadership, funding, and infrastructure improvements, though the effectiveness of these strategies may depend on a project's local characteristics. There has been little emphasis on air-rail connectivity by either the Department of Transportation (DOT) or Amtrak. Furthermore, experts noted that some of the strategies could be particularly challenging or costly to implement, such as in locations where the rail network was developed decades before airports. For example, increasing intercity passenger rail's frequency could improve air-rail connectivity but could also be expensive.
As a comprehensive health benefit program for vulnerable populations, each state Medicaid program, by law, must cover certain categories of individuals and provide a broad array of benefits. Within these requirements, however, the Medicaid program allows for significant flexibility for states to design and implement their programs, resulting in more than 50 distinct state-based programs. These variations in design have implications for program eligibility and services offered, as well as how expenditures are reported and services are delivered. Specifically, in administering their own programs, states make decisions regarding populations or health services to cover beyond what are mandated by law. States must cover certain groups of individuals, such as pregnant women with incomes at or below 133 percent of the federal poverty level (FPL), but may elect to cover them above this required minimum income level. For example, as of March 2011, some states covered pregnant women with incomes at or above 250 percent of the FPL. Similarly, while states’ Medicaid programs generally must cover certain mandatory services—including inpatient and outpatient hospital services, physician services, laboratory and X-ray services, and nursing facility services for those age 21 and older—states may also elect to cover additional optional benefits and services. These optional benefits and services include prescription drugs, dental care, hospice care, home- and community-based services, and rehabilitative services. In addition, even among states that offer a particular benefit, the breadth of coverage (i.e., amount, duration, and scope) of that benefit can vary greatly. For example, most states cover some dental services, but some limit this benefit to trauma care and/or emergency treatment for pain relief and infection, while others also cover annual dental exams. 
States also have flexibility, within general federal requirements, to determine how the services they cover will be delivered to Medicaid enrollees—whether on a fee-for-service basis or through managed care arrangements. For example, under some managed care arrangements, the state pays managed care organizations a fixed amount, known as a capitation payment, to provide a package of services. States vary in terms of the types of managed care arrangements used and the eligibility groups enrolled. For example, while 12 states enrolled 50 percent or more of their disabled enrollees in comprehensive risk-based managed care in fiscal year 2011, 20 states enrolled fewer than 5 percent of disabled enrollees in such arrangements. States may also operate premium assistance programs to subsidize the purchase of private health insurance—such as employer-sponsored insurance—for Medicaid enrollees. In 2009, 35 states reported using Medicaid funds to provide premium assistance. These differences in covered services and delivery systems can affect the distribution of states’ spending across categories of services. For example, states that rely heavily on managed care arrangements to provide hospital care and acute care services to their enrollees are likely to have a greater proportion of their expenditures devoted to managed care, and a lower proportion devoted to those covered services, than states that do not have such managed care arrangements.

A small percentage of Medicaid-only enrollees consistently accounted for a large percentage of total Medicaid expenditures for Medicaid-only enrollees. As shown in figure 1, there was little variation across the years we examined.
In each fiscal year from 2009 through 2011, the most expensive 1 percent of Medicaid-only enrollees in the nation accounted for about one-quarter of the expenditures for Medicaid-only enrollees; the most expensive 5 percent accounted for almost half of the expenditures; the most expensive 25 percent accounted for more than three-quarters of the expenditures; in contrast, the least expensive 50 percent accounted for less than 8 percent of the expenditures; and about 12 percent of enrollees had no expenditures. These findings regarding Medicaid-only enrollees are similar to those that others have reported for all Medicaid enrollees, as well as for Medicare and personal health care spending in the United States. A Kaiser Family Foundation report found that in fiscal year 2001, the most expensive 1.1 percent of all Medicaid enrollees—including those dually eligible for Medicare—accounted for more than one-quarter of Medicaid expenditures, and the most expensive 3.6 percent accounted for nearly half. The Congressional Budget Office reported that in 2001, the most expensive 5 percent of Medicare enrollees in fee-for-service plans accounted for 43 percent of Medicare expenditures, and the most expensive 25 percent accounted for 85 percent. The National Institute for Health Care Management reported that in 2009, the most expensive 1 percent of the overall civilian U.S. population living in the community accounted for more than 20 percent of personal health care spending, with the most expensive 5 percent accounting for nearly half. We also found that in each state, a similarly small percentage of high-expenditure Medicaid-only enrollees was responsible for a disproportionately large share of expenditures for Medicaid-only enrollees, although the magnitude of this effect varied widely across states. For example, the percentage of expenditures for the most expensive 5 percent of Medicaid-only enrollees ranged from 28.8 percent in Tennessee to 63.2 percent in California.
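The cumulative shares described above (for example, the most expensive 5 percent of enrollees accounting for almost half of spending) reduce to a simple calculation on ranked per-enrollee expenditures. The sketch below is illustrative only: it uses synthetic data, not actual MSIS records or GAO's analysis code.

```python
# Illustrative only: share of total spending attributable to the most
# expensive `pct` percent of enrollees. The data below are synthetic.

def top_share(expenditures, pct):
    """Fraction of total spending attributable to the top `pct` percent of enrollees."""
    ordered = sorted(expenditures, reverse=True)
    k = max(1, round(len(ordered) * pct / 100))
    return sum(ordered[:k]) / sum(ordered)

# A skewed synthetic distribution: 5 costly enrollees and 95 inexpensive ones.
spend = [100_000] * 5 + [1_000] * 95
share = top_share(spend, 5)  # the top 5 percent's share of total spending
```

With a distribution this skewed, the top 5 percent account for roughly 84 percent of spending; if all enrollees had equal expenditures, the top 5 percent would account for exactly 5 percent.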
For additional state-by-state information about the distribution of expenditures among Medicaid-only enrollees in fiscal year 2011, see appendix II. The proportions of high-expenditure Medicaid-only enrollees in different eligibility groups were consistent from fiscal year 2009 through 2011, as shown in figure 2. Although a small proportion of Medicaid-only enrollees were disabled (less than 10 percent), disabled enrollees were disproportionately represented in the high-expenditure group, consistently constituting about 64 percent of those with the highest expenditures. Conversely, although children were the largest group of Medicaid-only enrollees (about 50 percent), they consistently constituted about 16 percent of the high-expenditure group. The distribution of high-expenditure Medicaid-only enrollees’ expenditures among selected categories of service in fiscal year 2011 varied widely across states. As noted above, managed care arrangements can affect the distribution of expenditures for covered services. For some states, such as Tennessee and Hawaii, a high percentage of expenditures were for managed care or premium assistance, and correspondingly low percentages were for services such as hospital care or acute care services. For other states, such as Idaho and Oklahoma, a low percentage of expenditures were for managed care or premium assistance, and correspondingly higher percentages were for hospital care or acute care services. States’ reliance on managed care plans to provide certain services limits what can be learned from the MSIS summary data regarding the services received by enrollees, because the data show the per-enrollee payments made by state Medicaid programs to plans, not the payments the plans made to providers for the services for which the plans are responsible.
In a state such as Tennessee, for example, in which all Medicaid enrollees are in managed care plans that are responsible for providing hospital care and a broad array of acute care services, the state’s low percentages of expenditures in those service categories reflect the delivery system structure of the state Medicaid program, not enrollees’ utilization of services. The greatest variation among states in their expenditures for specific service categories was for managed care and premium assistance. As shown in figure 3, four states reported that 0 percent of their expenditures were for managed care or premium assistance. For states that did report expenditures in this category, the percentage ranged from less than 1 percent to 75 percent. Nationwide, about 15 percent of expenditures for high-expenditure Medicaid-only enrollees were in this category. The variation among states in the percentages of expenditures in this service category reflects the wide variation among states in their reliance on managed care arrangements to provide services to enrollees, and particularly disabled enrollees, who constituted almost two-thirds of high-expenditure Medicaid-only enrollees. In the five states with the highest percentage of expenditures for managed care and premium assistance, the percentage of disabled enrollees in comprehensive risk-based managed care plans ranged from 44 percent in New Mexico to more than 90 percent in Hawaii and Tennessee, compared with 0 percent in the five states with the lowest percentages of expenditures in this service category. States also varied widely—from 0 to about 45 percent—in the percentages of high-expenditure Medicaid-only enrollees’ expenditures for hospital care (inpatient and outpatient). About 27 percent of nationwide expenditures for high-expenditure Medicaid-only enrollees were in this category. (See fig. 4.)
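The state-level category percentages discussed here are a straightforward aggregation of payments by service category. The following sketch uses invented records and field names (not the MSIS schema) to show the calculation for a hypothetical heavily managed-care state:

```python
from collections import defaultdict

def category_shares(records):
    """Percentage of total expenditures falling in each service category."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["category"]] += rec["amount"]
    grand_total = sum(totals.values())
    return {cat: 100.0 * amt / grand_total for cat, amt in totals.items()}

# Invented payment records for demonstration only; a state that pays managed
# care plans for most services reports most spending in that category.
records = [
    {"category": "managed_care", "amount": 750.0},
    {"category": "hospital", "amount": 150.0},
    {"category": "prescription_drugs", "amount": 100.0},
]
shares = category_shares(records)  # e.g., managed_care -> 75.0
```

Note that, as in the MSIS data, payments that plans themselves make to hospitals would appear under managed care, not under hospital care.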
Similarly, states varied widely—from nearly 0 to about 45 percent—in the percentages of high-expenditure Medicaid-only enrollees’ expenditures that were for non-institutional support services other than acute or long-term support services. These other support services include hospice benefits, private duty nursing, rehabilitative services, and targeted case management. About 17 percent of nationwide expenditures for high-expenditure Medicaid-only enrollees were in this category. (See fig. 5.) States also varied in the percentages of high-expenditure Medicaid-only enrollees’ expenditures in other categories, if not as widely. States varied least—from 0 to 11 percent—in the percentage of expenditures for high-expenditure Medicaid-only enrollees that were for psychiatric facility care, which accounted for about 2 percent of nationwide expenditures for high-expenditure Medicaid-only enrollees. The percentage of a state’s expenditures for high-expenditure Medicaid-only enrollees varied in other categories from 0 to 33 percent for acute care services, which accounted for 11 percent of nationwide expenditures; from 0 to 25 percent for prescription drugs, which accounted for 14 percent of nationwide expenditures; from 0 to about 23 percent for long-term non-institutional support services, which accounted for about 6 percent of nationwide expenditures; and from 0 to 22 percent for long-term institutional care, which accounted for 9 percent of nationwide expenditures. Long-term institutional care includes nursing facilities and intermediate care facilities for individuals with intellectual disabilities. See GAO, Medicaid: Assessment of Variation among States in Per-Enrollee Spending, GAO-14-456 (Washington, D.C.: June 16, 2014), and GAO, Medicaid: Alternative Measures Could Be Used to Allocate Funding More Equitably, GAO-13-434 (Washington, D.C.: May 10, 2013).
The percentage of expenditures reported in the MSIS summary file that was attributable to prescription drugs was lower on average in states that included some or all drugs in the package of services provided by managed care plans than in states that paid for all drugs on a fee-for-service basis, and the three states in which the share of expenditures that went to drugs was lowest—Arizona, Hawaii, and New Mexico—included all drugs in their managed care packages. States vary widely in the distribution of their expenditures among service categories; for state-by-state information about the percentage of high-expenditure Medicaid-only enrollees’ expenditures for selected categories of services in fiscal year 2011, see appendix V.

HHS reviewed a draft of this report and provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of HHS and other interested parties. The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

This appendix describes the methodology for addressing our three objectives regarding high-expenditure Medicaid enrollees who are not also enrolled in Medicare, that is, Medicaid-only enrollees. These objectives were to: (1) examine the distribution of expenditures among Medicaid-only enrollees, (2) determine whether the proportions of high-expenditure Medicaid-only enrollees in selected categories changed or remained consistent from year to year, and (3) determine whether the distribution of high-expenditure Medicaid-only enrollees’ expenditures among selected categories of service varied across states.
We analyzed data from the Medicaid Statistical Information System (MSIS) Annual Person Summary File. This summary file consolidates individual enrollees’ claims for a single fiscal year, including data on their enrollment and expenditures. The file includes enrollee-specific information regarding enrollment categories, expenditures, dual eligibility status, age, gender, payment arrangements—including fee-for-service payments and capitated payments made to managed care organizations—and indicators for five chronic conditions and two service categories. The five chronic condition indicators are for asthma, diabetes, human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS), mental health conditions, and substance abuse. The two service category indicators are for delivery or childbirth (which may include costs attributed to a mother during delivery or the child soon after birth) and long-term care residence. The summary file does not provide information on other conditions that may affect enrollees’ expenditures. We used data from fiscal years 2009, 2010, and 2011—the most recent years for which data from almost all states were available. As of December 2014, the summary file did not include expenditure or enrollment data from Maine for fiscal year 2011. We made several changes to limit our analyses to Medicaid-only enrollees and ensure that the data were sufficiently reliable for our purposes. For example, because our objectives focused on Medicaid-only enrollees, we excluded those who were dually eligible for both Medicaid and Medicare. Specifically, we made the following adjustments to the data: If an individual’s enrollment category was listed as child, adult, or aged, and the recorded age or other information was inconsistent with that category, we re-defined the enrollment category as unknown. We reset all negative expenditures (which can indicate adjustments to expenditures recorded in prior years) to 0. 
Generally, states may report adjustments to their Medicaid expenditures for up to 2 years. To the extent that negative expenditures reflect adjustments to prior year expenditures, retaining them would result in an underestimate of expenditures for any specific year. We excluded dually eligible enrollees’ records because we could not determine which expenditures for these enrollees were Medicaid expenditures. After making these changes, we retained about 85 percent of the original records in the summary file for each fiscal year, counting the records from all states and the District of Columbia (but not counting records from Maine in 2011, which were unavailable). These records represent just under 65 percent of total Medicaid expenditures in these years. (We previously reported that dual-eligible enrollees—whom we excluded from our analyses—accounted for about 35 percent of total Medicaid expenditures in fiscal year 2009.) As of December 2014, the summary file did not include fiscal year 2011 expenditure data from Florida, and so we excluded Florida from all further analyses of 2011 data. We assessed the reliability of these data by performing appropriate electronic data checks and reviewing relevant documentation, and determined that the data from Idaho for 2010 were not sufficiently reliable for our purposes. We determined that the remaining data were sufficiently reliable for our purposes. Our analyses were thus based on data from all states and the District of Columbia, but excluded Idaho in fiscal year 2010, and excluded Florida and Maine in fiscal year 2011. To determine the distribution of expenditures among Medicaid-only enrollees, we calculated the cumulative frequency distribution of expenditures for enrollees.
That is, we placed all Medicaid-only enrollees nationwide in rank order by their total Medicaid expenditures, from highest to lowest, and determined the cumulative percentage of nationwide expenditures for Medicaid-only enrollees attributable to enrollees as the percentage of ordered enrollees increased. We analyzed data from 3 years—fiscal years 2009, 2010, and 2011—separately to determine whether the relationship was similar or different across years. To facilitate interpretation of these frequency distributions, we also computed a mathematical coefficient that provides information about the relationship between the percentage of Medicaid-only enrollees and the percentage of total Medicaid expenditures for these enrollees—the Gini coefficient. This coefficient indicates the degree of inequality, that is, the extent to which the frequency distribution differs from one in which expenditures are equal for all enrollees. Figure 6 illustrates the difference between frequency distributions with differing Gini coefficients. To determine whether the proportions of high-expenditure Medicaid-only enrollees in selected categories changed or remained consistent from year to year, we conducted two separate analyses. For both, we defined high-expenditure Medicaid-only enrollees as the 5 percent with the highest expenditures within each state, as we had in our earlier work on high-expenditure Medicaid enrollees. For one analysis, we examined the percentage of high-expenditure Medicaid-only enrollees in five mutually exclusive eligibility groups (child, adult, aged, disabled, or unknown). For another analysis, we examined the percentage of high-expenditure Medicaid-only enrollees identified as having any one of the five chronic conditions recorded in the summary file (asthma, diabetes, HIV/AIDS, mental health conditions, or substance abuse) or either of the two services (delivery or childbirth, and long-term care residence) recorded in the summary file. 
Enrollees could have any of these seven conditions or services, any combination of them, or none of them. We compared the proportions of high-expenditure enrollees in each of these sets of categories in fiscal years 2009, 2010, and 2011. To determine whether the distribution of high-expenditure Medicaid-only enrollees’ expenditures among selected categories of service varied across states, we again defined high-expenditure Medicaid-only enrollees as the 5 percent with the highest expenditures within each state and examined expenditures for fiscal year 2011 in eight categories of service. These categories were three types of institutional care (hospital, long-term, and psychiatric facility); three types of non-institutional services (acute care, long-term support, and other support services, such as targeted case management or rehabilitative services); prescription drugs; and managed care and premium assistance. We identified the distribution of expenditures for high-expenditure enrollees among these types of service within each state in fiscal year 2011 and compared the distributions across states. Table 4 provides information about the distribution of expenditures among Medicaid-only enrollees nationally and in each state and the District of Columbia in fiscal year 2011, including the percentages of expenditures for Medicaid-only enrollees that were attributable to the most expensive 1, 5, 10, and 25 percent of these enrollees; the percentage of expenditures for Medicaid-only enrollees that were attributable to the least expensive 50 percent of these enrollees (including those with 0 expenditures); and the Gini coefficient, which indicates the degree of inequality, that is, the extent to which the frequency distribution differs from one in which expenditures are equal for all enrollees.
These state-by-state data illustrate that states differ widely in the degree to which their distribution of expenditures varied across enrollees, but in each state, a small percentage of high-expenditure Medicaid-only enrollees was responsible for a disproportionately large share of the expenditures for Medicaid-only enrollees. Table 5 provides information about the percentage of high-expenditure Medicaid-only enrollees in five mutually exclusive eligibility groups (child, adult, aged, disabled, or unknown) nationally and in each state and the District of Columbia in fiscal year 2011. These data indicate that while there was considerable variation across the states, in each state, the greatest percentage of high-expenditure Medicaid-only enrollees were disabled and the lowest percentage in a known eligibility group were aged. Table 6 provides information about the percentage of high-expenditure Medicaid-only enrollees with certain conditions or services nationally and in each state and the District of Columbia in fiscal year 2011. The conditions are five chronic conditions recorded in the Medicaid Statistical Information System Annual Person Summary File—asthma, diabetes, human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS), mental health conditions, or substance abuse. The two services recorded in the summary file are delivery or childbirth, and long-term care residence. Enrollees could have any of these conditions or services, any combination of them, or none of them. These data indicate considerable variation across states, although the majority of these enrollees in each state except Pennsylvania had at least one of these conditions or services, and within each state, mental health conditions were the most common of these conditions and services.
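The Gini coefficient reported in table 4 summarizes this inequality in a single number: 0 when all enrollees have equal expenditures, approaching 1 as spending concentrates in a few enrollees. A minimal sketch of the calculation follows; it is illustrative only, not the computation actually used for the report.

```python
def gini(values):
    """Gini coefficient of non-negative expenditures: 0 means perfectly
    equal; values near 1 mean spending is concentrated in a few enrollees."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Rank-weighted formulation of the Gini coefficient.
    rank_weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * rank_weighted) / (n * total) - (n + 1) / n

equal = gini([100, 100, 100, 100])  # 0.0: spending is perfectly equal
skewed = gini([0, 0, 0, 400])       # 0.75: one enrollee accounts for all spending
```

For a fixed number of enrollees n, the maximum possible value when one enrollee accounts for all spending is (n - 1) / n, which approaches 1 as n grows.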
Table 7 provides information about the percentage of high-expenditure Medicaid-only enrollees’ expenditures in different categories of services nationally and in each state and the District of Columbia in fiscal year 2011, and illustrates that states vary widely in the distribution of their expenditures among service categories. These categories were three types of institutional care (hospital, long-term, and psychiatric facility); three types of non-institutional services (acute care, long-term support, and other support services, such as targeted case management or rehabilitative services); prescription drugs; and managed care and premium assistance. Expenditures for categories of service other than managed care and premium assistance do not include payments for those services that were made by managed care plans. As a result, the percentage of expenditures does not necessarily reflect enrollees’ utilization of services. In addition to the contact named above, key contributors to this report were Robert Copeland, Assistant Director; Dee Abasute; Kristen Joan Anderson; Nancy Fasciano; Giselle Hicks; Drew Long; and Jennifer Whitworth.
Studies on health care spending generally find that a small percentage of individuals account for a large proportion of expenditures, and Medicaid—a federal-state health financing program for low-income and medically needy individuals—is no exception. Medicaid expenditures for fiscal year 2013 totaled about $460 billion, covering about 72 million enrollees, some of whom were also eligible for Medicare. More information about Medicaid enrollees who are not also eligible for Medicare (i.e., Medicaid-only enrollees) and who account for a high proportion of expenditures could enhance efforts to manage expenditures and facilitate improvements to care. GAO was asked to provide information about the characteristics of high-expenditure Medicaid-only enrollees and their expenditures. GAO (1) examined the distribution of expenditures among Medicaid-only enrollees, (2) determined whether the proportions of high-expenditure Medicaid-only enrollees in selected categories changed or remained consistent from year to year, and (3) determined whether the distribution of high-expenditure Medicaid-only enrollees' expenditures among selected categories of service varied across states. GAO analyzed data from the Medicaid Statistical Information System Annual Person Summary File for fiscal years 2009, 2010, and 2011, the most recent years for which data from almost all states were available. A small percentage of Medicaid-only enrollees—that is, those who were not also eligible for Medicare—consistently accounted for a large percentage of total Medicaid expenditures for Medicaid-only enrollees. In each fiscal year from 2009 through 2011, the most expensive 5 percent of Medicaid-only enrollees accounted for almost half of the expenditures for all Medicaid-only enrollees. In contrast, the least expensive 50 percent of Medicaid-only enrollees accounted for less than 8 percent of the expenditures for these enrollees.
Of the Medicaid-only enrollees who were among the 5 percent with the highest expenditures within each state, the nationwide proportions of these enrollees in different eligibility groups (such as the disabled or children) and with certain conditions (such as asthma) or services (such as childbirth or delivery) were also consistent from fiscal years 2009 through 2011. The distribution of high-expenditure Medicaid-only enrollees' expenditures among categories of service in fiscal year 2011 varied widely across states. Expenditures for managed care and premium assistance varied most widely (from 0 to 75 percent). The Department of Health and Human Services provided technical comments on a draft of this report, which were incorporated as appropriate.
RUS, EDA, Reclamation, and the Corps each have distinct missions and fund rural water supply and wastewater projects under separate programs and congressional authorizations. Furthermore, each agency has its own definition of what constitutes a rural area and a unique organizational structure to implement its programs. Specifically, RUS administers the U.S. Department of Agriculture’s rural utilities programs throughout the country, which are aimed at expanding electricity, telecommunications, and water and waste disposal services. RUS provides assistance for water supply and wastewater projects through its Water and Environmental Program and defines rural areas for this program as incorporated cities and towns with a population of 10,000 or fewer and unincorporated areas, regardless of population. RUS manages this program through its headquarters in Washington, D.C., and 47 state offices, each supported by area and local offices. EDA provides development assistance to areas experiencing substantial economic distress, regardless of whether they are rural or urban. EDA primarily provides assistance for water supply and wastewater projects in distressed areas through its Public Works and Development Facilities Program and uses a U.S. Census Bureau definition for rural areas that is based on metropolitan statistical areas. EDA manages this program through its headquarters in Washington, D.C., six regional offices, and multiple field personnel. Reclamation was established to implement the Reclamation Act of 1902, which authorized the construction of water projects to provide water for irrigation in the arid western states. Reclamation generally manages numerous municipal and industrial projects as part of larger, multipurpose projects that provide irrigation, flood control, power, and recreational opportunities in 17 western states, unless otherwise directed by the Congress.
Reclamation provides assistance for water supply projects through individual project authorizations and defines a rural area as a community, or group of communities, each of which has a population of not more than 50,000 inhabitants. Reclamation manages these projects through its headquarters in Washington, D.C., and Denver, Colorado, five regional offices, and multiple field offices in the western United States. The Corps’ Civil Works programs investigate, develop, and maintain water and related environmental resources throughout the country to meet the agency’s navigation, flood control, and ecosystem restoration missions. In addition, the Civil Works programs also provide disaster response, as well as engineering and technical services. The Corps provides assistance for water supply and wastewater projects through authorizations for either a project in a specific location, or for a program in a defined geographic area, and does not have a definition for rural areas. The Corps administers its programs and projects through its Headquarters in Washington, D.C., eight regional divisions, and 38 district offices. These agencies rely on several sources of funding—including annual appropriations from the general fund and from dedicated funding sources, such as trust funds—to provide financial support for these projects and programs. RUS, EDA, Reclamation, and the Corps obligated $4.7 billion to 3,104 rural water supply and wastewater projects from fiscal years 2004 through 2006. Of these obligations, RUS obligated nearly $4.2 billion (or about 90 percent) of the funding—about $1.5 billion in grants and about $2.7 billion in loans—to about 2,800 projects. EDA, Reclamation, and the Corps provided a combined $500 million in grants to rural communities for about 300 water supply and wastewater projects. Table 1 shows the number of projects and the amount of obligations for rural water supply and wastewater projects by agency for fiscal years 2004 through 2006. 
Figures 1 through 4 show the location of these rural water supply and wastewater projects by agency during fiscal years 2004 through 2006. RUS provided the majority of the funding to the largest number of projects, while Reclamation provided the largest amount of funding per project. As table 1 shows, the average RUS grant was approximately $680,000 per project, while the average Reclamation grant was about $22 million per project. EDA and Corps grants averaged about $1 million and $800,000 per project, respectively. The average Reclamation grant amount was significantly larger than the grant amounts provided by the other agencies because Reclamation provided funding to a relatively small number of large regional water supply projects that span multiple communities. For example, during fiscal years 2004 through 2006, Reclamation obligated nearly $87 million of the approximately $459 million estimated total cost for the Mni Wiconi project. This project will provide potable water to about 51,000 people in rural communities spanning seven counties and three Indian Reservations. The Mni Wiconi project covers approximately 12,500 square miles of the state of South Dakota or roughly 16 percent of the state’s total land area. Figure 5 shows the location of the Mni Wiconi project area. In contrast, the other three agencies primarily provided funding to smaller-scale projects located in single communities. For example, Penns Grove, New Jersey, a community with a population of about 5,000, received an $800,000 EDA grant to upgrade a wastewater treatment plant with an estimated total project cost of $1.16 million. Similarly, according to Corps officials, Monticello, Kentucky, a community with a population of about 6,000, received about $312,500 from the Corps for two sewer line extensions with total project costs of about $435,000.
This community also received about $1 million from RUS for water and sewer line upgrades with an estimated total project cost of about $1.4 million. While the types of projects RUS, EDA, Reclamation, and the Corps fund are similar, varying agency eligibility criteria can limit funding to certain communities based on their population size, economic need, or geographic location. Specifically, RUS and EDA have established nationwide programs with standardized eligibility criteria and processes under which communities compete for funding. In contrast, Reclamation and the Corps have historically provided funding to congressionally authorized projects in certain geographic locations, without standardized eligibility criteria. Table 2 shows the types of projects each agency funds, the funding mechanisms they use, and their eligibility criteria. The rural water projects that RUS, EDA, Reclamation, and the Corps fund are similar, and all four agencies use similar funding mechanisms. While Reclamation primarily provides funding for water supply projects, RUS, EDA, and the Corps fund both water supply and wastewater projects. These projects primarily include the construction or upgrading of water or wastewater distribution lines, treatment plants, and pumping stations. For example, all four agencies funded water line expansions or upgrades in either residential or commercial areas. RUS, EDA, and the Corps also funded sewer line extensions into either residential or commercial areas. RUS and EDA have established nationwide programs with standardized eligibility criteria and processes under which communities compete for funding. Specifically, RUS’ eligibility criteria require projects to be located in a city or town with a population of less than 10,000 or an unincorporated rural area, regardless of the area’s population. 
EDA’s eligibility criteria require projects to be located in economically distressed communities, regardless of the size of the community served, and the project must also create or retain jobs. RUS’ eligibility criteria require water supply or wastewater projects to serve rural areas. A project must be located in a city or town with a population of less than 10,000 or in an unincorporated rural area regardless of the population. For example, St. Gabriel, Louisiana, with a population of about 6,600, received RUS funding to expand sewer lines to connect residents to a wastewater treatment plant. Similarly, Laurel County Water District No. 2, which provides potable water to about 17,000 residents who live in unincorporated rural areas of southeastern Kentucky between the cities of London, Kentucky, and Corbin, Kentucky, received RUS funding to upgrade a water treatment plant to accommodate potential growth opportunities in the area. Table 3 provides the number of RUS funded rural water supply and wastewater projects by state for fiscal years 2004 through 2006. To apply for RUS funding for a water supply or wastewater project, a community must submit a formal application. Once the formal application is submitted, communities then compete for funding with other projects throughout the state. In general, RUS officials in the state office rank each proposed project according to the project’s ability to alleviate a public health issue, the community’s median household income, and other factors. As applications are reviewed and ranked on a rolling basis, RUS officials in the state office generally decide which projects will receive funding until all funds are obligated for the fiscal year. RUS provides both grants and loans for eligible projects, and communities must meet certain requirements depending upon the type of assistance they are requesting. 
For example, RUS grants can be used to finance up to 75 percent of a project’s cost based on a number of factors including a community’s financial need and median household income. Alternatively, to receive a loan, the community must certify in writing, and RUS must determine, that the community is unable to finance the proposed project from its own resources or through commercial credit at reasonable rates and terms. For projects also funded through RUS loans, RUS requires the community to charge user fees that, at a minimum, cover the costs of operating and maintaining the water system while also meeting the required principal and interest payments on the loan. For example, RUS provided the Wood Creek Water District, located in Laurel County, Kentucky, a $1 million grant and a $7.98 million loan for a major water treatment plant expansion. A Wood Creek official told us that the water district had attempted to obtain a loan from a commercial lender; however, the loan would have had an interest rate of 7 percent and a term of 20 years, which would have rendered the project financially unfeasible. According to RUS, Wood Creek was able to receive a loan with an interest rate of 4.3 percent and a term of 40 years, thereby significantly reducing the annual loan payments. RUS also required Wood Creek to slightly increase its user fees to support the operation and maintenance of the water system and cover the loan repayment. EDA’s eligibility criteria require water supply or wastewater projects to be located in an economically distressed area, regardless of the area’s population size. EDA defines an area as economically distressed if it meets one of the following three conditions: the area (1) has an unemployment rate that is at least 1 percent greater than the national average, (2) has a per capita income that is 80 percent or less of the national average, or (3) has experienced or is about to experience a special need arising from changes in economic conditions. 
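The annual-payment difference between the commercial terms and the RUS terms in the Wood Creek example above can be worked out with the standard loan amortization formula. This is an illustration only; the report does not describe the lenders’ actual payment schedules.

```python
def annual_payment(principal, rate, years):
    """Annual payment on a fully amortizing loan (ordinary annuity formula)."""
    return principal * rate / (1 - (1 + rate) ** -years)

loan = 7_980_000  # Wood Creek's $7.98 million RUS loan

commercial = annual_payment(loan, 0.07, 20)   # commercial lender's quoted terms
rus = annual_payment(loan, 0.043, 40)         # RUS loan terms

print(f"Commercial: ${commercial:,.0f} per year; RUS: ${rus:,.0f} per year")
```

Under this simple model, the RUS terms reduce the annual payment by more than 40 percent, consistent with the report’s observation that the lower rate and longer term significantly reduced the annual loan payments.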
The project must also create or retain long-term private sector jobs and/or attract private capital investment. For example, Assumption Parish Waterworks District No. 1 in Napoleonville, Louisiana, received EDA funding to upgrade water service to two sugarcane mills. The community qualified for the funding because Assumption Parish met EDA’s criteria for unemployment and per capita income. The water supply project allowed the sugarcane mills to maintain and expand their operations, saving 200 existing jobs, creating 17 new jobs, and attracting $12.5 million in private investment. Table 4 provides the number of EDA-funded rural water supply and wastewater projects by state for fiscal years 2004 through 2006. To apply for EDA funding for a water supply or wastewater project, the community must submit a preapplication to an EDA Regional Office. If the proposed project is found eligible, the community must then submit a formal application to an EDA Regional Office. The Regional Office then prioritizes and makes funding decisions that are forwarded to EDA headquarters for approval. These decisions are based upon, among other things, how the project promotes innovative, entrepreneurial, or long-term economic development efforts. EDA applications are reviewed on a rolling basis, and funding decisions are made until all of the funds for the fiscal year are obligated. EDA provides grants for eligible projects that may finance 50 to 100 percent of a project’s total costs based on a number of factors including an area’s level of economic distress. For example, the London-Laurel County Industrial Development Authority, located in Laurel County, Kentucky, qualified for an EDA grant because the county has a per capita income of $14,165, which is 66 percent of the national average. Because Laurel County’s per capita income was between 60 and 70 percent of the national average, EDA’s grant could fund no more than 60 percent of the project’s total cost. 
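EDA’s distress test and the grant-share limit just described can be sketched as a pair of small functions. This is a simplified illustration: the national per capita income figure below ($21,500) is a hypothetical value inferred from the report’s statement that $14,165 is 66 percent of the national average, and only the 60-to-70-percent bracket the report mentions is implemented.

```python
def is_economically_distressed(unemployment, national_unemployment,
                               per_capita_income, national_per_capita_income,
                               special_need=False):
    """An area qualifies if it meets any one of EDA's three conditions."""
    return (unemployment >= national_unemployment + 1.0                 # condition (1)
            or per_capita_income <= 0.80 * national_per_capita_income   # condition (2)
            or special_need)                                            # condition (3)

def max_grant_share(income_ratio):
    """Maximum EDA grant share as a fraction of project cost.

    Only the bracket cited in the report (60-70 percent of the national
    average caps the grant at 60 percent) is known here; other brackets
    are not described in this report.
    """
    if 0.60 <= income_ratio < 0.70:
        return 0.60
    return None  # bracket not described in this report

NATIONAL_PCI = 21_500  # inferred, hypothetical national per capita income
laurel_ratio = 14_165 / NATIONAL_PCI  # about 0.66

print(is_economically_distressed(5.0, 5.0, 14_165, NATIONAL_PCI),
      max_grant_share(laurel_ratio))
```

Under these assumptions, Laurel County passes the per capita income test and its grant share is capped at 60 percent, matching the report’s account.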
The project received a $950,000 grant, which covered 50 percent of the $1.9 million total project cost to construct water and sewer line extensions for an industrial park. The new occupants of this industrial park were expecting to create 425 new jobs and provide $20.9 million in private investment. Reclamation and the Corps have not historically had rural water supply and wastewater programs; rather, they have provided funding to specific projects or programs in certain geographic locations under explicit congressional authorizations. Although the Corps continues to provide assistance to projects under specific congressional authorizations, many of which are pilot programs, the Rural Water Supply Act of 2006 directed Reclamation to establish a rural water supply program with standardized eligibility criteria. Reclamation provides grants to individual rural water supply projects in eligible communities for which the Congress has specifically authorized and appropriated funds. These grants finance varying amounts of a project’s total costs depending upon the specific authorization. According to a program assessment conducted by the Office of Management and Budget (OMB), the Congress has chosen Reclamation to fill a void for projects that are larger and more complex than other rural water projects and that do not meet the criteria of other rural water programs. For example, the Mni Wiconi Project Act of 1988, as amended, directs Reclamation to provide funding to three Indian tribes and seven counties for a rural water supply project in South Dakota that encompasses 16 percent of the state’s total land area. For the Mni Wiconi project, Reclamation grants provide funding for 100 percent of the project costs on Indian lands and 80 percent of the project costs on non-Indian lands. Table 5 provides the number of Reclamation-funded rural water supply projects by state for fiscal years 2004 through 2006. 
While rural water supply projects are outside of Reclamation’s traditional mission, according to Reclamation officials, the agency became involved in such projects because individual communities or groups of communities proposed projects directly to the Congress. In response, the Congress created specific authorizations for these rural water supply projects, and Reclamation was assigned responsibility for funding and overseeing the construction of the projects. Because Reclamation is responding to congressional direction in implementing these projects, it has not established eligibility criteria for communities or prioritized these projects for funding. In a May 11, 2005, testimony, the Commissioner of the Bureau of Reclamation indicated that the agency would like more authority to plan and oversee the development and construction of rural water supply projects. In 2006, the Congress passed the Rural Water Supply Act, directing Reclamation to develop a rural water supply program. Within 1 year, Reclamation was required to develop standardized criteria to determine eligibility requirements for rural communities and prioritize funding requests under this program. Further, the act directed Reclamation to assess within 2 years how the rural water supply projects funded by Reclamation will complement those being funded by other federal agencies. Reclamation is now beginning to address these requirements, including: (1) developing programmatic criteria to determine eligibility for participation and (2) assessing the status of authorized rural water supply projects and other federal programs that address rural water supply issues. According to a Reclamation official, the agency plans to complete these requirements by August 2008 and December 2008, respectively. 
Reclamation officials also said the development of a rural water supply program will, among other things, allow Reclamation to be directly involved in the planning, design, and prioritization of rural water supply projects and provide recommendations to the Congress regarding which projects should be funded for construction. Projects recommended for funding by Reclamation must still receive a specific congressional authorization for design and construction. The Corps funds rural water supply and wastewater projects under specific congressional authorizations, many of which are pilot programs, and makes funding available to specific communities or programs in certain geographic areas. For example, a section of the Water Resources Development Act of 1999, as amended, authorized a pilot program that directed the Corps to provide funding for water supply and wastewater projects to communities in Idaho, Montana, rural Nevada, New Mexico, and rural Utah. When directed to fund these types of projects, the Corps provides either grants or reimbursements for project costs incurred by the community. To receive reimbursements, a community submits invoices received from its contractors to the Corps, and the Corps generally reimburses the community up to 75 percent of project costs. Table 6 provides the number of Corps-funded rural water supply and wastewater projects by state for fiscal years 2004 through 2006. Even though the Corps provides congressionally directed funding to specific geographic areas through these pilot programs, eligibility criteria and the degree to which projects compete for funding can differ between programs. For example, the Corps’ Southern and Eastern Kentucky Environmental Improvement Program is available only to communities located in 29 counties in southeastern Kentucky. The program requires these communities to submit formal applications, which are prioritized and ranked annually against all received applications. 
The Corps, in conjunction with a nonprofit organization, selects projects for funding based on certain factors such as economic need. For example, the Wood Creek Water District submitted a formal application and received approximately $500,000 in reimbursements––about 72 percent of the total project costs––to extend sewer service to a school and 154 households who live near the school. In contrast, the Corps’ Rural Utah Program is available to communities in 24 counties and part of another county that the Congress designated as rural. This program requires communities in these counties to submit a request letter that includes, among other things, a brief project description and an estimate of total project costs. Request letters are considered for funding on a rolling basis by Corps officials, and no other formal eligibility criteria exist. For example, Park City, Utah, submitted a letter that provided a project description and the estimated total cost for the project. According to a Corps official, the Corps evaluated the letter and provided approximately $300,000 in reimbursements––or about 60 percent of the total project costs––for the replacement of water and sewer lines in Park City’s Old Town area. While the Corps funds projects carried out under these pilot programs as directed by the Congress, it does not request funds for them as part of its annual budget process because, according to Corps officials, these types of projects fall outside the Corps’ primary mission of navigation, flood control, and ecosystem restoration. This position was reiterated in a May 11, 2007, policy document released by OMB, which stated that funding of such local water supply and wastewater projects is outside of the Corps’ mission, costs taxpayers hundreds of millions of dollars, and diverts funds from more meritorious Corps Civil Works projects. 
When the Congress authorized the Corps to fund these various pilot programs, it also required the agency to evaluate the effectiveness of several of them and recommend to the Congress whether these pilot programs should be implemented on a national basis. The Corps has completed 9 of the 12 required evaluations. Of the completed evaluations, only four made recommendations––all in favor of the establishment of a national program. The other five evaluations either did not make the required recommendation or stated that the agency had not yet funded enough projects to effectively evaluate the program. However, we found that between fiscal years 2004 and 2006, the Corps provided funding to over 100 rural water supply and wastewater projects under pilot programs, and it is unclear why the Corps has still not completed all of the evaluations required by the Congress. In the absence of the outstanding evaluations and recommendations, the Congress does not have information on whether, collectively, the projects carried out under the Corps’ pilot programs merit continued funding, duplicate other agency efforts, or should be implemented on a national basis. The Congress has determined that RUS, EDA, and now Reclamation should provide funding for rural water projects as part of their overall missions and target federal assistance to certain communities based on their population size, economic need, or geographic location. However, for the Corps, the Congress has not yet determined whether funding of rural water supply projects should permanently be included within the agency’s water portfolio. To help inform congressional decision making on this issue, the Corps was required to evaluate its various water supply and wastewater pilot programs and recommend to the Congress whether these programs should be continued. 
However, the Corps has not consistently provided the information required by the Congress even though it has funded over 100 rural water projects under various pilot programs. As a result, the Congress does not have the information it needs to determine whether the Corps’ projects meet a previously unmet rural water need or duplicate the efforts of other agencies. Such information is important for making decisions on how to allocate limited federal resources at a time when the nation continues to face long-term fiscal challenges. To ensure that the Congress has the information it needs to determine whether the Corps should continue to fund rural water supply and wastewater projects, we recommend that the Secretary of Defense direct the Commanding General and Chief of Engineers of the U.S. Army Corps of Engineers to provide a comprehensive report on the water supply and wastewater projects that the Corps has funded under its pilot programs and determine whether these pilot programs duplicate other agency efforts and should be discontinued, or whether these pilot programs address an unmet need and should be expanded and made permanent at a national level. We provided the Departments of Agriculture, Commerce, Defense, and the Interior with a draft of this report for review and comment. The Department of Defense concurred with GAO’s findings and recommendation, and its written comments are included in appendix III. The Department of the Interior also agreed with GAO’s findings, and its written comments are included in appendix IV. The Departments of Agriculture and Commerce provided us with technical comments, which we have incorporated throughout the report, as appropriate. We will send copies of this report to interested congressional committees; the Secretaries of Agriculture, Commerce, Defense, and the Interior; and other interested parties. We will also make copies available to others upon request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-3841, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To determine how much federal funding the U.S. Department of Agriculture’s Rural Utilities Service (RUS), the Department of Commerce’s Economic Development Administration (EDA), the Department of the Interior’s Bureau of Reclamation (Reclamation), and the U.S. Army Corps of Engineers (Corps) obligated for rural water supply and wastewater projects for fiscal years 2004 through 2006, we collected and analyzed obligation and project location data submitted by each agency. We determined that the data were sufficiently reliable for the purposes of this report. To identify water supply and wastewater projects that were located in rural areas, we applied the definition of rural used by RUS, EDA, and Reclamation to the geographic location each agency provided for its water supply and wastewater projects. Because the Corps does not have a definition for rural areas, we asked the Corps to use the U.S. Census Bureau’s density-based urban and rural classification system to identify projects that it funds in rural areas. This classification system divides geographical areas into urban areas, urban clusters, and nonurban areas and clusters. Using this information, we determined that Corps funded water supply and wastewater projects were in rural areas if they were located in: (1) any nonurban areas or clusters, (2) urban clusters with a population of less than 20,000, and (3) areas of Nevada and Utah that the Congress specifically defined as rural in the Water Resources Development Act of 1999, as amended. 
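The three-part test above for classifying Corps projects as rural can be expressed as a small decision function. This is a sketch; the category labels below are simplified, hypothetical stand-ins for the Census Bureau’s actual classifications.

```python
def is_rural_corps_project(area_type, population=None, designated_rural=False):
    """Classify a Corps project location as rural under GAO's methodology.

    area_type: 'nonurban', 'urban_cluster', or 'urban_area' (simplified
    labels for the Census Bureau's categories).
    designated_rural: True for areas of Nevada and Utah that the Congress
    defined as rural in the Water Resources Development Act of 1999,
    as amended.
    """
    if designated_rural:              # criterion (3): congressionally designated
        return True
    if area_type == 'nonurban':       # criterion (1): nonurban areas and clusters
        return True
    if area_type == 'urban_cluster':  # criterion (2): urban clusters under 20,000
        return population is not None and population < 20_000
    return False

print(is_rural_corps_project('urban_cluster', population=12_500))
```

A project in an urban cluster of 12,500 people counts as rural under this test, while one in a large urban area does not, unless it lies in a designated Nevada or Utah area.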
Table 7 provides the definition of rural area used by each agency for water supply and wastewater projects. To determine the extent to which the eligibility criteria of RUS, EDA, Reclamation, and the Corps and the projects they fund differed, we reviewed and analyzed applicable statutes, agency regulations, and policy guidance. In addition, we used a nonprobability sample to select 16 rural water supply and wastewater projects, including at least one project funded by each of the four agencies, and conducted site visits to each of the selected projects. These projects were selected based upon project type (water supply or wastewater), geographic location, type of assistance (loan, grant, or a combination of these), and the federal agency funding the project. During the site visits, we interviewed local officials from the communities receiving funding and federal agency officials responsible for managing the funding of those projects. We also collected and analyzed project-specific documentation such as applications and letters of intent. Table 8 lists the 16 projects we selected for site visits and the type of project, location, type of assistance, and funding agency(ies) for each project. To determine the overhead costs and number of personnel needed to support rural water supply and wastewater projects, we collected and analyzed agency policy guidance and interviewed agency officials to determine the extent to which RUS, EDA, Reclamation, and the Corps track these data for rural water supply and wastewater projects. We also requested these data from each agency to the extent they could provide them to us. We conducted our work from September 2006 through August 2007 in accordance with generally accepted auditing standards. The U.S. Department of Agriculture’s Rural Utilities Service (RUS), the Department of Commerce’s Economic Development Administration (EDA), the Department of the Interior’s Bureau of Reclamation (Reclamation), and the U.S. 
Army Corps of Engineers (Corps) each calculate their overhead costs, commonly referred to as general and administrative (G&A) costs, and the number of personnel needed to manage rural water supply and wastewater projects, referred to as full-time equivalents (FTE), differently. This appendix describes how each agency calculates these costs for rural water supply and wastewater projects. RUS and EDA each receive separate appropriations to fund their agencywide G&A costs. These agencies do not track these costs or FTEs on a project-by-project basis. Therefore, we were unable to calculate each agency’s total G&A costs and total FTEs by rural water supply and wastewater project. Reclamation divides water supply project costs into two categories: direct costs and indirect costs. According to Reclamation, if all activities are correctly and consistently charged, then all activities assigned to indirect costs can be considered overhead costs for a project. Although a standard formula is used to determine indirect cost rates, which are applied as a percentage of labor, Reclamation officials stated that the rates may vary by area office and region depending primarily on the amount of costs that can be charged directly to a project. Furthermore, according to documentation provided by Reclamation officials, these indirect cost rates were updated each fiscal year. Table 9 shows the indirect cost and FTE estimates Reclamation provided for the 11 rural water projects for which it obligated funds for fiscal years 2004 through 2006. The Corps’ G&A costs for its headquarters and divisions are funded through a general expenses appropriation. 
G&A costs at the district level are automatically distributed to specific projects and programs based on the direct labor charged to them, using predetermined rates established by the district Commander at the beginning of each fiscal year. There are two types of overhead costs charged by the districts: general and administrative overhead and departmental overhead. General and administrative overhead includes administrative and support costs incurred in the day-to-day operations of a district. Departmental overhead includes costs incurred within technical divisions at the district headquarters that are not attributable to a specific project or program. While a standard formula is used to determine overhead rates, these rates may vary by district depending on a variety of factors including geographic location—an office in a high-cost area will cost more to operate than a similar office in a rural area, and composition of the workforce—an office staffed by senior-level employees will cost more to operate than an office staffed by junior-level employees. The Corps’ G&A costs and FTE data for its water supply and wastewater projects are calculated at the program level and cover projects in both rural and urban areas. The Corps could not readily provide these data for obligations on a rural water supply and wastewater project basis. In addition to the individual named above, Ed Zadjura, Assistant Director; Patrick Bernard; Diana Goody; John Mingus; Lynn Musser; Alison O’Neill; Matthew Reinhart; and Barbara R. Timmerman made significant contributions to this report.
funds for constructing and upgrading water supply and wastewater treatment facilities. As a result, they typically rely on federal grants and loans, primarily from the Rural Utilities Service (RUS), Economic Development Administration (EDA), Bureau of Reclamation (Reclamation), and the U.S. Army Corps of Engineers (Corps), to fund these projects. Concern has been raised about potential overlap between the projects these agencies fund. For fiscal years 2004 through 2006, GAO determined (1) the amount of funding these agencies obligated for rural water projects and (2) the extent to which each agency's eligibility criteria and the projects they fund differed. GAO analyzed each agency's financial data and reviewed applicable statutes, regulations, and policies. From fiscal years 2004 through 2006, RUS, EDA, Reclamation, and the Corps obligated nearly $4.7 billion to about 3,100 rural water supply and wastewater projects. RUS obligated the majority of these funds--about $4.2 billion--to about 2,800 projects. Of this $4.2 billion, RUS loans accounted for about $2.7 billion, and RUS grants accounted for about $1.5 billion. EDA, Reclamation, and the Corps, combined, obligated a total of about $500 million in grants to rural communities for about 300 water projects. RUS, EDA, Reclamation, and the Corps fund similar rural water supply and wastewater projects, but they have varied eligibility criteria that limit funding to certain communities based on population size, economic need, or geographic location. RUS, EDA, and the Corps provide funding for both water supply and wastewater projects, while Reclamation provides funding only for water supply projects. Eligible water projects can include constructing or upgrading distribution lines, treatment plants, and pumping stations. RUS and EDA have formal nationwide programs with standardized eligibility criteria and processes under which communities compete for funding. 
In contrast, Reclamation and the Corps fund water projects in defined geographic locations under explicit congressional authorizations. In 2006, the Congress passed the Rural Water Supply Act, directing Reclamation to develop a rural water supply program with standard eligibility criteria. The Corps continues to fund rural water supply and wastewater projects under specific congressional authorizations, many of which are pilot programs. The Congress required the Corps to evaluate the effectiveness of these various pilot programs and recommend whether they should be implemented on a national basis. The Corps has completed only some of the required evaluations and, in most cases, has not made the recommendations that the Congress requested about whether the projects carried out under these pilot programs should be implemented on a national basis.
Established in 1965, HUD is the principal federal agency responsible for the programs dealing with housing and community development and fair housing opportunities. Among other things, HUD’s programs provide (1) mortgage insurance to help families become homeowners and to help provide affordable multifamily rental housing for low- and moderate-income families, (2) rental subsidies for lower-income families and individuals, and (3) grants and loans to states and communities for community development and neighborhood revitalization activities. HUD’s fiscal year 1997 budget proposal requests about $22 billion in discretionary budget authority and plans about $33 billion in discretionary outlays. Compared with HUD’s fiscal year 1996 appropriation, this request represents about a 7-percent increase in budget authority and a 10-percent increase in outlays. HUD believes that this increase in outlays between fiscal years 1996 and 1997 is somewhat misleading. For example, 1996 outlays were unusually low because HUD expended $1.2 billion—which normally would have been disbursed early in fiscal year 1996—in late fiscal year 1995 because of the government shutdown. In addition, reforms in the mortgage assignment program generated a significant one-time savings of over $1 billion in fiscal year 1996 (under credit reform as scored by the Congressional Budget Office). HUD’s March 1995 blueprint, HUD Reinvention: From Blueprint to Action, proposed to merge 60 of its 240 separate programs into three performance-based funds that would be allocated directly to the states and localities. HUD’s objectives were to provide communities with greater flexibility and instill a level of accountability in its programs through the use of performance measures and a series of rewards and incentives. As of March of this year, few of the proposals in this reinvention document have been adopted. 
HUD’s second reinvention proposal, Renewing America’s Communities from the Ground Up: The Plan to Continue the Transformation of HUD, also known as Blueprint II, would supersede the first proposal but continue the move toward accountability by fiscal year 1998 by (1) consolidating over 20 community development programs into three performance funds where high-performing grant recipients would be awarded bonuses, (2) replacing 15 separate public housing programs with two performance funds, and (3) consolidating the 14 existing voucher and certificate funds. Appendix II summarizes HUD’s plans to fund the proposals in Blueprint II through its fiscal year 1997 budget request. HUD’s fiscal year 1997 budget request discusses how a planned, major restructuring of the multifamily housing program is likely to affect its budget over the next 6 years and beyond. The restructuring is aimed at addressing serious and longstanding problems affecting properties with HUD-insured mortgages that also receive rental subsidies tied to units in the properties (project-based assistance). HUD deserves credit for attempting to address these complex problems. However, HUD’s assumptions about its ability to quickly restructure properties with high subsidy costs appear overly optimistic and could be responsible for HUD underestimating its request for rental assistance for low-income families. According to HUD’s latest data, 8,636 properties with about 859,000 apartments would be subject to the restructuring proposal; the unpaid loan balances for these properties total about $17.8 billion. In many cases, HUD pays higher amounts to subsidize properties than are needed to provide the households living in them with decent, affordable housing. In other cases, rents set by HUD are lower than required to maintain the properties’ physical condition, contributing to poor living conditions for families with low incomes. 
HUD’s proposal, initially termed “mark to market” in last year’s budget request and now referred to as “multifamily portfolio reengineering,” retains the same goal and general framework: eliminate excess subsidy costs and improve the poor physical condition of some of the properties by relying primarily on market forces. Specifically, for properties with mortgages insured by FHA that also receive project-based assistance, HUD has been proposing to let the market set property rents and to reduce mortgage debt if necessary to permit a positive cash flow. In addition, HUD has proposed replacing project-based rental subsidies with portable tenant-based subsidies, thereby requiring the properties to compete in the marketplace for residents. While maintaining this general framework, HUD made several changes to its proposal this year. For example, under the initial proposal, all rents would have been reset to market levels whether the market rents were above or below the subsidized rents. The current proposal gives priority attention initially to properties with subsidized rents above market. In addition, HUD plans to let state and local governments decide whether to continue with project-based rent subsidies after mortgages are restructured or to switch to tenant-based assistance. HUD has also indicated that it will allow owners to apply for FHA insurance on the new, restructured mortgage loans, whereas last year the proposal expressly disallowed FHA insurance on restructured loans. We are currently evaluating a study by Ernst & Young LLP, released on May 2, 1996, that was designed to provide the Department with current information on HUD’s multifamily portfolio. This information could form the basis for the improvement of key assumptions needed to estimate the net savings or costs associated with the reengineering proposal. 
In this regard, HUD’s contract with Ernst & Young LLP requires that the firm update HUD’s information on (1) market rents versus the project-based rents that the agency subsidizes and (2) the physical condition of the properties. These two variables strongly influence whether a property can operate at market rents without debt reduction or what amount of debt reduction is needed to cover the property’s expenses. Having good data on these variables will allow FHA to better develop claims estimates, which will be based on the amount of debt write-down. In addition, the rent data are integral to estimating the change in subsidy costs if the project-based rents are replaced with market rents and the residents receive tenant-based assistance. HUD also tasked Ernst & Young with developing a financial model that would show the likely result of reengineering the portfolio and identify the related subsidy costs and claims costs. The results of the Ernst & Young study were not available when the fiscal year 1997 budget was being developed. Because HUD lacked the project-specific data contained in the Ernst & Young study, HUD used assumptions in some cases that represent the Department’s “best guess” as to the outcome. These assumptions can affect the budgetary savings HUD expects to result from reengineering the portfolio. Ernst & Young’s May 2, 1996, report presents information on projects that are expected to be affected by this reengineering. While the report did not directly discuss subsidy and claims costs, we are currently reviewing the results of this study and its cost implications. We plan to issue our report on the Ernst & Young study this summer. On the basis of our ongoing work, we believe that some of the assumptions HUD used may overstate the projected savings associated with reengineering the portfolio. We cannot, however, determine the extent of that overstatement at this time. 
One of HUD’s assumptions is that a substantial number of mortgages with excess subsidy costs will be restructured well ahead of the dates that their rental assistance contracts expire. Although the extent to which HUD will be able to accomplish this remains unclear, this assumption appears optimistic and HUD’s budget request may understate its need for funding to renew section 8 rental assistance contracts for fiscal year 1997 and beyond. In its fiscal year 1997 budget, HUD requested $845 million in bonus funding for high-performing grantees in four of its six new block grants. HUD calls the block grants “performance funds.” HUD believes that these grants will provide communities with greater flexibility to design local solutions to local problems. HUD plans to competitively award bonuses to grantees who exceed the established performance measures and who submit project proposals. (App. III summarizes the details of the proposed bonus pools.) We generally support performance measurement as a method of building accountability into block grants because it would allow grantees to achieve objectives while also vesting them with responsibility for their choices. Moreover, HUD’s development of block grants and performance measures would be consistent with the underlying principles of the Government Performance and Results Act and recommendations for program consolidation made by the National Performance Review. However, the characteristics of the block grants themselves—their program breadth and the flexibility allowed the grantees—will greatly complicate and add significant time to HUD’s development of uniform performance measures. HUD is still in the early stages of developing such measures, however, and without them grantees will have difficulty understanding HUD’s objectives and performance measurement process. 
Moreover, because of inadequate information systems to support performance measurement, we question whether HUD’s request for bonus funding can be effectively used during fiscal year 1997. Some features inherent to block grants will complicate the implementation of a performance measurement system in fiscal year 1997, and these complications are likely to extend the time HUD needs to develop adequate measures beyond fiscal year 1997. We have reported in the past, for instance, that the flexibility and wide latitude allowed grantees make common and comparative measurement very difficult. HUD will need to collaborate with the states to develop performance measures and establish reporting requirements. These entities’ interests could vary markedly because HUD would be looking to meet national objectives, while the states are trying to meet local needs. Not only do the federal and state interests differ, but it will take time for both to develop data collection systems and reporting capacities once the initial decisions are made. In addition, measurement is complicated because not all observed outcomes can be assumed to result from the programs and activities under scrutiny. Some outcomes, such as job creation, will be affected by factors outside of the control of program participants, while other desired outcomes, such as enhanced quality of life for residents, may not be quantifiable. Moreover, our work on block grants at other federal agencies has shown that many of these agencies lack the ability to track progress, evaluate results, and use performance data to improve their agencies’ effectiveness. For example, HUD’s Inspector General (IG) recently found that HUD is just beginning to develop a Department-wide strategic plan, the key underpinning and starting point for the process of program goal-setting and performance measurement that the Government Performance and Results Act seeks to establish throughout the federal government. 
Program performance information comes from sound, well-run information systems that accurately and reliably track actual performance against the standards or benchmarks. Our work has shown, however, that HUD’s information systems may not be adequate to support the implementation of the four bonus pools. For example, HUD is proposing a $500 million bonus fund as part of its public housing capital fund. As a requirement for eligibility, housing authorities would have to have earned high scores in the Public Housing Management Assessment Program (PHMAP) and have undertaken substantive efforts to link residents with education and job training. However, HUD generally does not confirm the scores of high scoring housing authorities—many of the data to support the scores are self-reported—and generally accepts the scores as accurate. Our analysis, as well as that of the HUD IG and others, has cast doubt on the accuracy of PHMAP scores for some housing authorities. Three major public housing industry associations also share concerns about PHMAP’s use as a tool for awarding bonuses. And finally, HUD itself recently acknowledged that PHMAP scores should not be considered the sole measure of a public housing authority’s performance, noting that circumstances can exist in which the best decision a housing authority can make is not always the one that yields the highest PHMAP score in the short term. We believe, therefore, that PHMAP—as it is currently implemented—should not be used as a basis for awarding bonuses to public housing authorities. HUD has said that it intends to draw on its Empowerment Zone/Enterprise Community (EZ/EC) experience with benchmarking to move toward performance-based funding for all HUD programs. However, HUD officials said that developing benchmarks for the first round of EZ/EC grants was a difficult task and they recognize that HUD could have done a better job of explaining the process of developing benchmarks to communities. 
Given this difficulty and the complications mentioned earlier, we are concerned that HUD is still in the midst of developing its bonus program and measures for its performance funds. In its fiscal year 1997 budget, the Department is requesting $11 million for its Office of Policy Development and Research to continue developing quantifiable measures for each major program, a process for setting benchmarks with grantees, and improvements in how the Department uses information on program performance. Because this development is ongoing, the measures and the processes will not be in place and known to the grantees before HUD uses them to award bonuses with fiscal year 1997 funds. HUD officials believe that bonus funding needs to be offered during fiscal year 1997 to encourage the states and localities to seek higher performance and that the details will be worked out as the program is implemented. We believe that timing is critical in this matter. For the performance bonuses to have equity and merit, HUD needs to be able to specify prior to the year over which performance is measured what results and outcomes will be rewarded and how they will be measured. As we have reported, four long-standing, Department-wide management deficiencies led to our designation of HUD as a high-risk area in January 1994. These deficiencies were weak internal controls, an ineffective organizational structure, an insufficient mix of staff with the proper skills, and inadequate information and financial management systems. In February 1995, we reported that HUD’s top management had begun to focus attention on overhauling the Department’s operations to correct these management deficiencies. In that report, we outlined actions that the agency needed to take to reduce the risk of waste, fraud, and abuse. In reviewing the proposed 1997 budget, we found budgetary support for the implementation of several of these recommendations. 
First, we recommended consolidating programs to give the communities greater flexibility in applying for funds and reducing administrative burden. The 1997 budget proposes the consolidation of many individual programs, either now or in the near future, into block grant programs to increase participants’ flexibility. HUD is beginning to develop performance measures for many programs to assess the participants’ progress. Second, we recommended that HUD be authorized to use more innovative initiatives to leverage private investment in community development and affordable housing. Several HUD programs will now or in the future involve mechanisms such as grant proposals or loan programs that will require either participation or investment by private organizations. In addition, FHA proposes creating new mortgage products that would expand homeownership and that would share risk with other entities. Third, we recommended that HUD continue to strengthen and coordinate its long-range planning. The budget proposal describes new investments to upgrade and expand its computer systems to specifically support implementation of Blueprint II. HUD anticipates that the proposed investments will improve efficiency and reduce operating costs. However, HUD’s budget proposes several new, specialized initiatives that seem to run counter to the agency’s consolidation efforts to, as described in Blueprint II, “sweep away the clutter of separate application procedures, rules and regulations that has built up at HUD over the past 30 years.” For example, HUD is requesting $290 million for its Housing Certificate Fund to assist several groups of people needing preferred housing. These programs include the Welfare-to-Work initiative and housing for homeless mothers with children. However, this funding request is inconsistent with Blueprint II, in which HUD urges the Congress to do away with the statutes that require such preferences. 
Although the Department deserves credit for its continuing resolve in addressing its long-standing management deficiencies, HUD’s recently initiated actions are far from reaching fruition, and the agency’s problems continue. In addition, specialized programs are beginning to reappear, and they may undermine the major restructuring of the agency, reduce efficiency, and increase administrative burdens. Therefore, we believe that both now and for the foreseeable future, the agency’s programs will continue to be high-risk in terms of their vulnerability to waste. Our statement today discussed several issues that will affect HUD’s programs and their need for appropriations. We identified new issues and highlighted changes in other issues on which we have previously testified. By continuing to focus on improving its internal management and coming to closure on how and when it will use the market to eliminate excess subsidy costs and improve the poor physical conditions of its assisted multifamily housing, HUD will be better able to use additional appropriations and implement new policy. Although HUD has recognized many of its management deficiencies and has budgeted funds to address them, we see this as a long-term effort that will continue into the foreseeable future. In connection with the proposed bonus pools, the lack of adequate performance measures and associated information systems leads us to question the basis for awarding additional funding at this time. While HUD officials believe that the details of awarding bonuses will be worked out as the program is implemented, we believe that they are overly optimistic, given the magnitude of the bonus pools and the complexity of developing appropriate performance measures. 
We recommend that the Congress consider not appropriating the $845 million for HUD’s proposed bonus pool funding until the Department develops adequate performance measures and supporting information systems to ensure that these funds are used effectively. 
Grantees will use their formula funds for the present wide range of activities eligible under CDBG, with two new features added: performance measures and benchmarks, and a bonus pool. The bonus pool will be devoted exclusively to job creation and economic revitalization efforts. The budget proposes $4.6 billion for the CDBG fund in 1997. In addition, $300 million is requested for a second round of Empowerment Zone/Enterprise Communities grants ($200 million) and a competitive Economic Development Challenge Grant ($100 million) for high-performing jurisdictions. 
Grantees will use their formula funds to expand the supply of affordable housing. The fund will require grant recipients to set their own performance measures and benchmarks. Ten percent of the fund will be set aside as a bonus pool to create large tracts of homeownership in communities. The budget proposes a total of $1.55 billion for HOME in 1997, including $1.4 billion for the HOME Fund and $135 million for the HOME Fund Challenge Grant for Homeownership Zones. The budget also proposes to use $15 million of funds provided for the HOME Fund for housing counseling. The HAF will allow grantees to shape a comprehensive, flexible, coordinated “continuum of care” approach to solving rather than institutionalizing homelessness. Ten percent of the fund will be set aside as a bonus pool. The budget proposes $1.12 billion for the HAF in 1997. Of this total, $1.01 billion will be for a consolidated needs-based homeless assistance program, and the remaining $110 million will be for the Homeless/Innovations Challenge Grant. HUD will re-propose consolidating several programs (i.e., drug elimination grants and service coordinators) into one Operating Fund by fiscal year 1998. All existing eligible uses under these funds, plus expanded anti-crime activities, will be permitted under the Operating Fund. The budget proposes $2.9 billion for the Operating Fund, an increase of $100 million over the anticipated $2.8 billion for fiscal year 1996.

Public Housing Capital Fund

HUD will re-propose consolidating a series of separate programs into one Capital Fund by fiscal year 1998. This new Fund will largely be modeled after the current modernization program. Eligible activities will include those currently eligible under modernization programs, under programs for distressed public housing developments, and under the development and Family Investment Center programs. HUD will set aside 10 percent of the Capital Fund as a bonus pool. 
HUD plans to jump-start the Campus of Learners initiative in fiscal year 1996 by requiring all applications for redevelopment under the public housing capital programs to build in educational, technological, and job linkages. PHAs will need to build viable partnerships with local educational and job placement institutions to be eligible for funding. The budget proposes an appropriation of $3.2 billion for the Capital Fund in 1997. Of this amount, $200 million will be made available for Indian housing construction. The budget assumes that $500 million will be made available in a separate account for a Capital Bonus Fund. The budget does not allocate a specific dollar amount to be used for the Campus of Learners initiative; however, PHAs are encouraged to use capital funds to advance this endeavor. HUD will re-propose consolidating the existing voucher and certificate funds into one performance-based Certificate Fund. The Certificate Fund will be HUD’s principal tool for addressing what HUD considers the primary source of severe housing problems in the nation: lagging household incomes and high housing costs. The budget requests an appropriation of $290 million for fiscal year 1997 for the Certificate Fund for 50,000 incremental units, of which 30,000 units will be used to help families make a transition to work (25,000 units) and to help homeless mothers with children obtain housing (5,000 units). The additional 20,000 units will be used for tenant protection to support families in FHA-insured assisted housing projects directly affected by prepayment, disposition, or restructuring. The Community Development Block Grant Fund will comprise the CDBG and the Economic Development Challenge Grant. The HOME Fund comprises the Home Investment Partnership Program (HOME) and the HOME Fund Challenge Grant. 
The Homeless Assistance Fund will consolidate HUD’s six McKinney homeless assistance programs (Shelter Plus Care, Supportive Housing, Emergency Shelter Grants, Section 8 Moderate Rehabilitation for Single Room Occupancy, Rural Homeless Grants, and Safe Havens), as well as the Innovative Homeless Initiatives Demonstration Program. It will also include the Homeless/Innovations Challenge Grant. The Public Housing Operating Fund will consolidate the Public and Indian Housing Operating Subsidies. The Housing Certificate Fund consolidates the Section 8 Certificates, Section 8 Vouchers, Section 8 Contract Renewals, Section 8 Family Unification, Section 8 for Persons with Disabilities, Section 8 for Persons with AIDS, Section 8 for Homeless, Section 8 Opt-Outs, Section 8 Counseling, Section 8 Pension Fund Certificates, Section 8 Veterans Affairs Supportive Housing, Section 8 Headquarters, Reserve, Lease Adjustments, and Family Self-Sufficiency Coordinators programs. Public Housing Authorities (PHAs) need to have scores of 90 or higher under the Public Housing Management Assessment Program (PHMAP) and to have undertaken substantive efforts to link residents with educational or self-sufficiency initiatives or “Campus of Learners” activities. The bonus fund will be split among eligible PHAs based on the Capital Fund formula, and bonus funds may be used for any purposes eligible under the Capital Fund. Any CDBG grantee that meets program requirements, meets or exceeds the performance measures and benchmarks included in its Consolidated Plan, and demonstrates that it has expended grant funds on a timely basis is eligible. Funds are to address brownfields, generate economic revitalization in distressed communities, and link people in these communities to jobs. Awards are given on a competitive basis to high-performing jurisdictions that propose innovative economic revitalization and job creation strategies using a combination of their own resources, private capital, and federal program incentives. 
Bonus funding is a “challenge grant” awarded on a competitive basis to high-performing jurisdictions that propose creative, cost-effective homeownership strategies using a combination of their own resources, private capital, and federal program incentives. Funds will be used to create Homeownership Zones to support state and local efforts to develop homeownership opportunities in targeted areas. Families earning up to 115 percent of the median income could be assisted. Bonus funding is to address the stated national priorities. Jurisdictions need to propose creative strategies using a combination of their own resources, private capital, and federal program incentives. Source: Congressional Justification for 1997 Estimates, HUD, Part 1, April 1996.
GAO discussed the Department of Housing and Urban Development's (HUD) fiscal year 1997 budget request, focusing on: (1) HUD multifamily reengineering cost estimates; (2) proposed bonus pools for high-performing grantees who exceed established performance measures; and (3) HUD progress in addressing management deficiencies. GAO noted that: (1) HUD has requested about $22 billion in discretionary budget authority and plans about $33 billion in discretionary outlays; (2) overly optimistic cost control assumptions about the major restructuring of the multifamily housing program could affect the HUD budget request for rental assistance for low-income families; (3) HUD has requested $845 million in bonus funding for high-performing grantees in some of its new block grants; (4) implementing HUD performance funds will be complicated and time-consuming; and (5) HUD has proposed various internal controls to address management deficiencies.
Total expenditures in all U.S. elementary and secondary schools in school year 1993-94 reached an estimated $285 billion. Education is the largest single expenditure category in state budgets, accounting for about 20 percent of total state spending in fiscal year 1994. Elementary and secondary schools receive most of their funds from state and local revenues. Federal aid has mainly focused on providing services to educationally disadvantaged children through categorical, program-specific grants. In school year 1992-93, state and local shares of total education spending were roughly equal, estimated at 45.6 percent ($113 billion) for states and 47.4 percent ($118 billion) for local educational agencies (LEA). The federal share was 6.9 percent ($17 billion). Although most of the activities promoting equity in education take place within states, the federal role in supporting equity has been discussed since the 1960s, when concern was voiced over states’ inappropriate use of federal funds intended to improve equity on behalf of disadvantaged children. In summer 1993 the Senate held hearings on the federal role in school finance equalization. Subsequently, the Congress amended ESEA to further help disadvantaged children by improving the targeting of title I funds to local education agencies and schools with relatively high levels of poor students. Title I is a federal program that provides remedial education services to low-achieving students in high-poverty elementary and secondary schools. Title I funds are intended to supplement, not supplant, local and state education funding. Until ESEA was amended by the Improving America’s Schools Act (IASA) of 1994, title I grants to local education agencies were distributed under two formulas—the basic grant formula and the concentration grant formula. 
In an effort to increase the amount of aid going to the neediest children, between 1988 and 1994 the statute required that 10 percent of the appropriations to LEAs were to be distributed using the concentration grant formula. That formula generally allocated funds only to those LEAs in counties where eligible children equaled either at least 6,500 or 15 percent of the total population aged 5 through 17. The rest of the appropriation was distributed under the basic grant formula (which is based on numbers of poor school-age children multiplied by a cost factor reflecting a state’s per pupil spending). In 1994, the Congress sought to provide greater targeting of title I aid although opinions varied as to the best way of doing it. In the end, IASA made some technical changes to the basic and concentration grants and added two more title I funding streams—targeted grants and the Education Finance Incentive Program. The targeted grant formula may use money appropriated for title I in excess of the fiscal year 1995 level. Targeted grants are similar to basic grants except that poor and other children counted in the targeted formula are assigned weights based on the county’s or LEA’s child poverty rate and the number of poor school-age children. This formula generally reflects the recommendations for using weighted pupil formulas in title I made by the Commission on Chapter 1, the RAND Corporation, and GAO. Under the targeted grant formula, the higher the poverty rate or number of poor children in the county or LEA, the higher the title I grants per formula child. The 1994 reauthorization of title I included an effort and equity bonus—additional dollars—through a new Education Finance Incentive Program to encourage states to have more equitable education finance systems. Although part of title I, this program is funded separately. The 1994 reauthorization authorized $200 million for the Education Finance Incentive Program for fiscal year 1996. 
As of July 1996, however, no funding has been appropriated. (See app. I for more background information related to title I and appropriation issues.) The Education Finance Incentive Program defines effort as the ratio of state spending for elementary and secondary education per pupil to the per capita income of state residents. The effort factor, however, can be no less than 95 percent and no more than 105 percent of the national average. Per capita income is used as a proxy for a state’s ability to pay for education spending. Thus, the effort factor measures each state’s actual spending as a percentage of its ability to spend. The equity factor measures the variation in per pupil spending across a state’s districts divided by the state’s average per pupil spending. Additional weight is given to the number of poor pupils to reflect the higher cost of educating these children. The equity factor is subtracted from 1.3 for use in the allocation formula. One of the strengths of the effort factor used in the Education Finance Incentive Program is that it considers a state’s ability to pay when determining the level of effort. However, the current effort factor could be improved for three reasons. First, its measure of ability to pay—per capita personal income—is not as comprehensive as another that is available: total taxable resources (TTR). Second, because its measure of spending, which is per student, is related to a measure of ability to pay which is per capita, the factor penalizes states with high proportions of school-age children. Third, the effort factor does not adequately reward states for improving their level of effort over time. The effort factor’s measure of ability to pay is not as comprehensive as it could be because per capita income excludes many taxable resources that states are able to use for financing education. 
To the extent these additional sources of funding capacity are not equally available in all states, the per capita income measure overstates the funding capacity of some states and understates it for others. The current effort factor also penalizes states with high percentages of school-age children in their populations. It measures the ratio of state spending, expressed on a per pupil basis, to state funding capacity, expressed on a per capita basis. This calculation introduces the percentage of the state’s school-age population into the measure of state effort. The effect is to inappropriately penalize states with a high percentage of school-age children, because the percentage of a state’s population that is school age is unrelated to its level of effort. Moreover, the current effort factor could more effectively encourage states to increase their level of effort over time by (1) including a bonus based on the rate of increase or decrease in effort over time and (2) eliminating the current requirement that the effort factor be at least 95 percent and no more than 105 percent of the average effort of all states. A fiscal incentive, or bonus, would reward states for increasing their level of educational effort over time, rather than rewarding only those that already have high effort. The 95-percent floor undermines the incentive for low-effort states to increase their effort in funding elementary and secondary education. Under current law, states with very low effort may increase their level of effort considerably, yet receive no additional dollars. Two of the three alternative effort factors we developed include a bonus based on the rate of increase or decrease in effort over time, in addition to other modifications we have made to address the potential drawbacks of the current effort factor (see app. III). 
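Putting the pieces together, the current-law effort factor is a spending-to-capacity ratio clamped to the statutory 95- and 105-percent bounds. The sketch below, with hypothetical state figures, shows how the floor and ceiling blunt the incentive: a very low-effort state and a moderately low-effort state receive the same factor.

```python
# Sketch of the current-law effort factor: state education spending per
# pupil divided by per capita income, clamped to 95-105 percent of the
# national average ratio. All state figures are hypothetical.

def effort_factor(spend_per_pupil, income_per_capita, national_avg_ratio):
    relative_effort = (spend_per_pupil / income_per_capita) / national_avg_ratio
    # Statutory floor and ceiling: no less than 95 percent and no more
    # than 105 percent of the national average effort.
    return min(max(relative_effort, 0.95), 1.05)

NATIONAL_AVG = 0.25  # assumed: per pupil spending is 25% of per capita income

very_low = effort_factor(4_000, 25_000, NATIONAL_AVG)  # raw relative effort 0.64
low      = effort_factor(5_500, 25_000, NATIONAL_AVG)  # raw relative effort 0.88
high     = effort_factor(8_000, 25_000, NATIONAL_AVG)  # raw relative effort 1.28
```

Both low-effort states are raised to the 0.95 floor, so the second state’s substantially greater effort earns it nothing extra; this is the incentive problem that eliminating the floor and ceiling is meant to address.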
One of these two alternatives also eliminates the requirement that the effort factor be at least 95 percent and no more than 105 percent of the average effort of all states. Table 1 compares the characteristics that define the current measure of effort in title I with the measures we propose as options A, B, and C. Option B is more comprehensive than option A because it considers both the current level of effort and the change in effort for each state over time. Option C is the same as option B, except that the effort factor is not constrained by the current requirement to be between 95 and 105 percent of the average effort. For a state-by-state breakout of each of these options, see appendix III. The equity definition in title I’s Education Finance Incentive Program contains two components. Of the two, the measure of spending disparities is more comprehensive than the measure of student needs. The measure of spending disparities is a good overall measure because it takes into account per pupil spending in all of each state’s school districts. The measure of student needs, however, explicitly takes into account only the greater needs of one type of pupil (those who are poor) in determining per pupil expenditures. Although the definition allows other types of higher needs students to be considered, such as students with disabilities or limited English proficiency, they are not explicitly included. 
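The statutory equity factor described earlier (the variation in per pupil spending across a state’s districts, divided by average per pupil spending and subtracted from 1.3) can be sketched as follows. The district figures are hypothetical, and a plain coefficient of variation stands in for the act’s poor-pupil-weighted version.

```python
# Sketch of the statutory equity factor: a coefficient of variation in
# district per pupil spending, subtracted from 1.3 so that more equal
# spending yields a higher factor. District data are hypothetical, and
# the act's extra weighting of poor pupils is omitted for simplicity.

def equity_factor(district_spending):
    n = len(district_spending)
    mean = sum(district_spending) / n
    variance = sum((s - mean) ** 2 for s in district_spending) / n
    coeff_of_variation = variance ** 0.5 / mean
    return 1.3 - coeff_of_variation

equal_state     = equity_factor([6_000, 6_000, 6_000])  # no disparity
disparate_state = equity_factor([3_000, 6_000, 9_000])  # wide disparity
```

A state with perfectly uniform district spending scores 1.3, the maximum; wider disparities pull the factor down, so the factor rewards states whose districts spend at similar per pupil levels.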
Table 2 compares the current equity measure and the five options we developed on the following characteristics: (1) the comprehensiveness of the measures of variation in education spending levels among districts in the state, that is, whether they include all districts and consider low-spending districts; (2) the ability of measures of student needs to take into account the cost differences of educating different target populations (students who are poor, have limited English proficiency, or have disabilities); (3) the inclusion of a comprehensive measure of purchasing power; (4) the inclusion of a direct incentive for states to improve their levels of equity over time; and (5) the presence of minimum and maximum limits. Of the five options we provide, E, G, and H are the most comprehensive. Each considers both the current level of equity a state has achieved and the recent progress the state has made toward achieving equity in education spending. To the extent possible within the limitations of the data currently available, we took into account differences in student needs related to numbers of students who were poor, had limited English proficiency, or had disabilities. Whether policymakers prefer option E or option G depends on their interest in measuring variation in spending levels for all school districts in the state (option E), or focusing on a state’s ability to bring low-spending school districts up to the median (option G). Option H is the same as option E, except that it does not contain the limits that each of the other equity options we developed contains. (For more information on each of these options, see app. IV.) We examined the demographic characteristics of states with higher levels of effort and equity under the current title I definition and under the new definitions we developed. 
We looked at the relationship between these effort and equity factors and (1) state median household income, (2) variations in median household income among districts, (3) state percentage of school-age children in poverty, and (4) variations in percentage of school-age children in poverty among districts. Under the current effort factor, we found that states with higher levels of child poverty rates do significantly less well than those states with lower levels of child poverty. There was no significant relationship between a state’s rate of child poverty and the options we developed; thus, the options we developed do not penalize high-poverty states the way the current effort factor does. For further discussion of these analyses, see appendix III. When we examined the correlation between the various equity factors and variability in median household income among districts, we found that the lower the variability in median household income across districts in the state, the higher the equity factor, and vice versa. The strength of the association was strongest, however, for the current equity factor and weakest for the three measures that considered improvement in equity over time—options E, G, and H (see app. IV, table IV.2). The current formula for title I’s Education Finance Incentive Program allocates funds to a state based on the effort factor and the equity factor multiplied by the state’s total number of school-age children, rather than the number of children in poverty—those who are the focus of all other title I allocation formulas. Consequently, some states could benefit from the Education Finance Incentive Program even though they do not have high levels of poverty. The alternative allocation formulas we developed not only use the alternative effort and equity factors we developed, but also propose using the number of children in poverty. 
Two of the four alternatives we present are based on the number of poor school-age children, rather than all school-age children. (For illustrative alternative allocation formulas using the effort and equity measures we developed, see app. V.) Eight of the 10 poorest states would receive greater funding using the two alternative formulas we propose that are based on children in poverty than they would under the most targeted of title I formulas, the targeted grant formula. The definitions of effort and equity in title I’s Education Finance Incentive Program could be improved in a number of ways. The definition of effort used in this program could be improved by (1) using a more comprehensive measure of ability to pay, (2) eliminating the bias against states with high proportions of school-age children, (3) providing a direct incentive for states to improve their level of effort over time, and (4) eliminating the lower limit for the effort factor. The definition of equity used in current law could be improved by (1) more fully considering differences in students’ needs among districts, (2) considering differences in purchasing power among a state’s districts, and (3) rewarding states for improving their level of equity in education spending, not just for already being equitable. The formula for allocating funds under this program would better target funds to states with higher proportions of children in poverty if it were based on the numbers of poor children rather than all school-age children. Should the Congress decide to fund title I’s Education Finance Incentive Program, it may want to improve the effort and equity measures and the way they are used in the allocation formula by considering the options we have presented in this report. 
Specifically, we believe that the Congress may wish to consider reducing the floor on the effort factor so that low-effort states are rewarded for increased effort; modifying the effort factor to eliminate the penalty on states where a high percentage of the population is school-age; using, in the effort factor, a more comprehensive measure of states’ revenue raising capacity, such as the total taxable resources indicator published by the Secretary of the Treasury; including in the effort and equity factors a bonus for improvement over time; expanding the needs component of the equity factor to include children with limited English proficiency and children with disabilities; adjusting the equity factor for differences in the cost of educational services across each state’s districts; and basing the allocation formula on the number of poor school-age children rather than all school-age children. The Department of Education provided written comments on a draft of this report (see app. VII). The Department expressed concern about our analysis of the impact of the Education Finance Incentive Program on the targeting of title I funds and whether the incentive formula can be expected to provide a meaningful incentive for states to change their school funding systems. The Department of Education was concerned that the Education Finance Incentive Program, even with the refinements we proposed, would tend to redistribute title I funds away from many higher poverty states and school districts. They stated that the Education Finance Incentive Program formula has a devastating impact on targeting because of a combination of factors, including (1) states with low fiscal effort tend to be high-poverty states with fewer resources, (2) the equity factor draws funds away from some high-poverty states while benefiting some low-poverty states, and (3) the incentive formula allocates funds based on the total number of school-age children rather than numbers of poor children. 
Regarding the Department’s first point (that states with low fiscal effort tend to be high-poverty states), we found that although this was true using the current effort measure, it is not true using the measures of effort we developed. Our analysis shows that while there is a significant negative correlation between the current effort measure and a state’s poverty level (that is, poorer states would get fewer dollars), this is not the case with each of the three effort options we developed. Contrary to the Department’s second point (that the equity factor draws funds away from some high-poverty states while benefiting some low-poverty states), as our report points out, we found no correlation between a state’s score on the current equity measure and a state’s poverty level. We did find, however, that states with high levels of variation in income levels and poverty rates across their districts did less well using the current equity measure than states with lower levels of variation; our equity measure options ameliorated this problem somewhat. With regard to the third point (that the incentive formula allocates funds based on the total number of school-age children rather than on the number of poor children as in the other title I formulas), we agree that this is true with the current formula. In our draft report, however, one of the three allocation alternatives we identified uses numbers of poor, rather than all, school-age children as a basis for allocating funds under the Education Finance Incentive Program. We have also added a fourth allocation alternative that uses numbers of poor children. Both of those allocation alternatives would target more dollars for poor states such as Alabama, Arkansas, Kentucky, Mississippi, New Mexico, South Carolina, Tennessee, and West Virginia—8 of the 10 poorest states—than would the targeted grant formula. 
The Department also stated that the report sidesteps the issue of whether the Education Finance Incentive Grant formula can be expected to provide a meaningful incentive for states to change their school funding systems. Although this issue was not the focus of our study, we state in our report that some experts question whether the level of funding that may or may not be appropriated for the Education Finance Incentive Program would be of sufficient size to have any effect on the plans of state or local educational agencies to provide greater levels of effort or equity (see app. I). We concur with the Department’s comment that the provisions restricting a state’s effort factor to between 95 and 105 percent of the national average result in weakening the incentive for states to increase their level of effort. In response to the Department’s comment, we developed another option for the effort measure that does not include these minimum and maximum constraints, and included this in our analysis. This fourth alternative allocation formula based on the number of poor school-age children also includes this unconstrained effort option as well as an unconstrained equity option. This allocation alternative, as previously noted, would result in targeting more dollars to 8 of the 10 highest poverty states than would the targeted grant formula. We are sending copies of this report to the Secretary of Education, appropriate congressional committees, and other interested parties. If you wish to discuss the contents of this report, please call me on (202) 512-7014 or Eleanor Johnson, Assistant Director, on (202) 512-7209. Major contributors to this report are listed in appendix VIII. Under title I, federal funds are authorized to school districts to provide supplementary educational services for low achievers in areas with children in poverty. As reauthorized by P.L. 
103-382 in October 1994, these title I educational services may be financed by four funding formulas for this common purpose. The four funding formulas are for basic grants, concentration grants, targeted grants, and Education Finance Incentive Program grants. In fiscal year 1996, approximately $6.7 billion was appropriated for two of these funding formulas: basic grants and concentration grants. No funds have been appropriated for either targeted grants or the Education Finance Incentive Program for fiscal year 1996. Basic grants are generally allocated based on numbers of poor school-age children multiplied by a “cost factor”—a measure based on a state’s average per pupil spending. Concentration grants are based on numbers of school-age children in areas with high concentrations of poverty—where over 6,500 or 15 percent of the children are poor—and a measure of per pupil spending. Targeted grants provide an even greater focus on allocating funds to the highest poverty areas because they target the greatest per pupil funding to the areas with the highest poverty rates or numbers of children in poverty. Many complex policy and technical issues surround congressional policymakers’ decisions about whether to provide some title I funding through the Education Finance Incentive Program formula. It remains unclear whether additional title I funds will be appropriated in the near future and, if so, whether they will be made available for the two title I formulas created in the 1994 reauthorization: targeted grants and Education Finance Incentive Program grants. Funds appropriated in excess of the fiscal year 1995 title I appropriation may be spent for targeted grants. However, in part because the excess amounted to only 0.5 percent of the basic and concentration grant appropriations for fiscal year 1996, no funds were spent for targeted grants for fiscal year 1996. 
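The concentration grant eligibility test described above can be sketched as follows. This is a minimal sketch; the function name and argument layout are illustrative assumptions, and the statute contains additional details not shown here.

```python
def concentration_grant_eligible(poor_children, total_children):
    """Sketch of the concentration grant test described above: an area
    qualifies when more than 6,500 of its children, or more than
    15 percent of its children, are poor. (Function name and signature
    are illustrative, not from the statute.)"""
    if total_children == 0:
        return False
    return poor_children > 6500 or poor_children / total_children > 0.15
```

Under this rule, a large county can qualify on the count test alone, while a small district can qualify on the rate test with far fewer than 6,500 poor children.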
In addition, no funds were earmarked for the Education Finance Incentive Program, although $200 million was authorized for fiscal year 1996. If funding for title I increases in future years, it remains unclear whether the Congress will appropriate funds for targeted grants or the Education Finance Incentive Program. For fiscal year 1997, the Clinton administration has proposed funding targeted grants at $1 billion, while decreasing funds for basic grants by about $500 million. The proposal is intended to enhance the ability of the poorest communities to provide supplementary instructional services to disadvantaged students. The administration proposes not to fund the Education Finance Incentive Program for fiscal year 1997 because “the formula would reward states that make a high effort and are highly equalized, but it would not consistently target funds on states with high concentrations of poor children.” In addition, some experts question whether the level of funding that may or may not be appropriated for the Education Finance Incentive Program would be of sufficient size to have any effect on the plans of state or local educational agencies to provide greater levels of effort or equity. Those who support funding the Education Finance Incentive Program may note that title I funds are seen as supplementing a basic level of state and local funding for instructional services to compensate for the additional educational needs that accompany concentrations of poverty. If, however, there are spending disparities among a state’s school districts, title I funding “may only help make up some of the gap in resources available to disadvantaged children compared to those received by the advantaged.” To the extent that a state’s spending for education increases and spending disparities among a state’s districts decrease, the federal government is able to more effectively target funds to provide truly supplemental educational resources to disadvantaged children. 
If funds are appropriated for the Education Finance Incentive Program in the future, under current law each state’s share would be determined by the formula in figure I.1: the state’s number of school-age children multiplied by the effort factor and by 1.3 minus the equity factor. Throughout the rest of this report, we will use the term “equity factor” to refer to the quantity 1.3 minus the equity factor. Funds would then be allocated to districts within each state based on their share of the total of other title I funds for school districts: basic grants, concentration grants, and targeted grants. Each state is to be allotted at least 0.25 percent of the total appropriation. The effort and equity factors in the Education Finance Incentive Program are intended to provide additional dollars to those states that have relatively high fiscal effort (in order to provide adequate levels of funding for education) and those states with relatively low disparities in per pupil funding across districts. The Education Finance Incentive Program defines effort using two components: state spending for education and state ability to pay (see app. III). The measure for state spending for education is the state’s average per pupil expenditure for public elementary and secondary education. The measure of the state’s ability to pay for services is the state’s average per capita income. Those states with high effort, that is, high state spending relative to their ability to pay, are rewarded for this under the Education Finance Incentive Program. A state’s level of effort is compared with that for the nation as a whole to develop an “effort factor” to be used in the Education Finance Incentive Program formula. If the index is 1.00, the state’s level of spending for education, relative to its per capita income, is the same as it is for the nation as a whole. 
Those states with spending for education relative to their ability to pay that is greater than that for the nation receive a factor higher than 1.00; those with spending relative to ability to pay that is lower than that for the nation receive a factor lower than 1.00. No state, however, may have an effort factor higher than 1.05 or lower than 0.95. Limiting the range from 0.95 to 1.05 limits the degree to which states receive a smaller share because of their much lower effort or a larger share because of their much higher level of effort. The definition of equity used in title I’s Education Finance Incentive Program includes two components: a measure of spending disparities and a measure of student needs. The law also contains a number of other adjustments that take into account complexities arising from various types of school districts (for example, elementary, secondary, and unified), extremely small school districts, and other factors (see app. IV). The first component measures the level of disparity in current per pupil expenditures across the state’s school districts. There are a variety of ways to measure spending disparities; title I uses the coefficient of variation (COV). The COV is the standard deviation (a common statistical measure of variation) in spending per student among all districts within the state, divided by the average level of spending per student in the state. The second component is a partial accounting, through use of a weighting factor, for differences in student needs across districts within the state; it specifically focuses on differences in the number of poor students. Students with no need for additional services are weighted by a factor of 1.0; poor students are weighted by a factor of 1.4. Such a weighting system recognizes that it may not be best to have equal spending per pupil across districts if the needs of the children in those districts are not equal. 
For example, additional local, state, and federal dollars might be targeted to districts with high concentrations of poor children to provide services to compensate for their greater educational needs. The equity factor is then constructed by subtracting the measure of spending disparities—the COV adjusted by a weighting component for poor students—from 1.3. For most states, the equity factor ranges from 1.0 to 1.2. One state, Hawaii, has only one school district and, therefore, no variation in spending, so it receives an equity factor of 1.3. For similar reasons, Washington, D.C., and Puerto Rico also receive an equity factor of 1.3. Current law also includes a provision that those states that meet the disparity standard under Impact Aid—Alaska, Kansas, and New Mexico—receive an equity factor of 1.2. We reviewed the formula used in title I’s Education Finance Incentive Program, focusing on the components of the formula related to effort and equity, which the Congress refers to as effort and equity factors. Our study was designed to answer the following questions: (1) How can the current effort factor be improved? (2) How can the current equity factor be improved? (3) How are state demographic characteristics, such as poverty rate or median income, related to state scores on the effort and equity factors? (4) How can the options we developed be used in alternative ways to allocate funds under the Education Finance Incentive Program? To answer these questions, we analyzed the strengths and weaknesses of title I’s effort and equity factors using criteria we developed after reviewing the literature and consulting with experts. On the basis of our review of the literature and consultation with experts, we developed alternative effort factors for possible use in the Education Finance Incentive Program using a universe sample of school district data from the Department of Education’s National Center for Education Statistics (NCES) for school year 1991-92. 
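The current-law computation described above—an effort factor clamped to the 0.95-1.05 range, an equity factor of 1.3 minus the poverty-weighted COV, and state shares proportional to school-age children multiplied by both factors—can be sketched as follows. This is a minimal sketch: the function names and data layout are assumptions, and the 0.25-percent state minimum and other statutory adjustments are omitted.

```python
def effort_factor(spend_per_pupil, income_per_capita,
                  natl_spend_per_pupil, natl_income_per_capita):
    """Current-law effort factor: state per pupil spending relative to
    per capita income, indexed to the nation and clamped to [0.95, 1.05]."""
    index = (spend_per_pupil / income_per_capita) / (
        natl_spend_per_pupil / natl_income_per_capita)
    return min(1.05, max(0.95, index))

def equity_factor(weighted_cov):
    """Current-law equity term: 1.3 minus the poverty-weighted
    coefficient of variation in per pupil spending."""
    return 1.3 - weighted_cov

def state_shares(states):
    """Each state's weight is its number of school-age children times
    its effort and equity factors; a state's share is its weight over
    the total. (Omits the 0.25 percent state minimum.)"""
    weights = {name: kids * eff * eq
               for name, (kids, eff, eq) in states.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}
```

Note how the clamp operates: a state spending far above the national norm relative to its income still receives an effort factor of at most 1.05.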
School district spending data were collected by the Bureau of the Census for NCES. We also developed alternative equity factors that include two different measures of spending disparities and consider differences in purchasing power across a state’s school districts. See appendixes III and IV for a description of the methods used to produce the alternatives for the effort and equity factors for title I’s Education Finance Incentive Program. To the extent possible within the limitations of the data currently available, we took into account differences in student needs related to numbers of students who were poor, had limited English proficiency, or had disabilities. We also developed illustrative state allocations under current and alternative title I Education Finance Incentive Program formulas as well as under the targeted grant formula (see app. V). The effort factor in the Education Finance Incentive Program provides additional title I aid to those states with education spending relative to their ability to pay that is higher than other states. Current law defines this factor as the state’s average per pupil expenditure divided by the state’s average per capita income relative to that for the nation as a whole. In the current law, averages are determined using 3 years of data to minimize the effect of changes from year to year. However, no state’s effort factor can be less than 95 percent of the nation’s average or more than 105 percent. Under P.L. 103-382, funds are to be allocated to states based on the state’s number of children aged 5 through 17 multiplied by both the effort factor and the equity factor. Each state is to be allotted at least 0.25 percent of the total appropriation. The effort factor in the current law, which considers states’ education spending relative to their ability to pay, could be improved. First, the measure of ability to pay used in the law, per capita income, is not as comprehensive as the one we used: total taxable resources (TTR). 
Second, the current effort factor penalizes states with high proportions of school-age children. Finally, the existing effort factor could more directly reward states for increasing their level of effort, not just for having high effort. We developed three alternative effort factors for title I’s Education Finance Incentive Program using TTR as a measure of a state’s ability to finance services. TTR, defined and compiled by the Department of the Treasury, considers both personal income and the gross state product for each state. TTR takes into account all income produced within a state, whether received by residents or nonresidents, or retained by business corporations. TTR is a more comprehensive indicator of taxable resources than personal income alone, in part because it also considers income produced in a state but received by nonresidents. In our alternative effort factors, we consider both spending and ability to pay per student, rather than spending per student and ability to pay per capita, as the current law does. The ability to pay per student is a better measure of a state’s ability to finance educational services for students than ability to pay per capita. Moreover, using ability to pay per capita, rather than per student, results in lower Education Finance Incentive grants for states with high proportions of school-age children—those states usually intended to benefit from education-related grants. In one of the alternatives we developed, option B, we increase the level of the reward to states that increase their level of effort over time. For example, two states may have the same level of effort in the current year. But while one state increased its level of effort from the previous year, the other state decreased its level of effort. In the first of the alternatives we present, option A, these two states would have the same effort factor because they each currently have the same level of effort—as would happen under current law. 
In the second alternative, option B, the state that increased its level of effort from the previous year would have a higher score than the state that decreased its level of effort. Option C is similar to B but without the limits of 0.95 and 1.05, so that low-effort states would have a greater incentive to increase their effort. The details of the three alternative effort factors we developed are as follows. The first alternative, which we refer to as option A, is based on state and local funding for elementary and secondary public education (kindergarten through grade 12) divided by the state’s TTR. To develop an index, we divided the state figure by a comparable figure for the nation as a whole. This effort index, as is the case for the current law index, is limited to no less than 0.95 and no more than 1.05. Option A is based on data for school year 1992-93. The second alternative, which we refer to as option B, is an index that also considers the rate of change in effort over time. We determined the rate of change in effort over time by comparing option A with a similar effort factor for the previous school year, 1991-92. For example, if the state’s effort factor was 1.01 under option A, and it had improved by 3 percent from the previous year, under option B its effort factor would be 1.01 multiplied by 1.03 (1 plus 0.03), or 1.04. In contrast, if the state’s effort factor was 1.01 under option A, but its effort had decreased by 3 percent from the previous year, under option B its effort factor would be 1.01 multiplied by 0.97 (1 minus 0.03) or 0.98. Again, the factors were limited within the bounds of 0.95 and 1.05. The third option we developed, option C, is similar to option B in that it provides an additional incentive for change over time. In addition, option C eliminates the lower limit of 0.95 and the upper limit of 1.05. We allowed only half of the index to vary, however, because otherwise the gap between the extremes would be too wide. 
In other words, option C equals 0.5 multiplied by an unconstrained effort factor, plus a constant of 0.5. Under option C, the effort factor would range from a low of 0.72 for the District of Columbia to a high of 1.21 for West Virginia. Table III.1 compares current and alternative effort factors. We examined the correlations between effort and states’ demographic characteristics such as state median income, within-state deviations in median household income, state percentage of school-age children in poverty, and within-state deviations in percentage of school-age children in poverty. We examined the extent to which title I’s current effort factor and the alternative effort factors we developed were related to these demographic factors. We found that states with higher child poverty rates do significantly less well with the current effort factor than those states with lower levels of child poverty. However, there was no significant relationship between a state’s rate of child poverty and the options we developed. Thus, the options we developed do not penalize high-poverty states the way the current effort factor does. When we examined the correlation between the various effort factors and variability in median household income across a state’s districts, we found that the variable “within-state deviations in median household income” was positively correlated with the current effort factor. That is, the higher the variability in median household income across districts in the state, the higher the current effort factor, and vice versa. There was no significant correlation, however, between this variable and the options we developed (see table III.2). Moreover, neither the current effort factor nor the options we developed were significantly correlated with state median household income or within-state deviations in child poverty levels. 
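The three alternative effort factors described above can be sketched as follows. The function names are illustrative, and the caller supplies the funding, TTR, and change-in-effort figures.

```python
def clamp(x, lo=0.95, hi=1.05):
    """Limit an effort index to the statutory floor and ceiling."""
    return min(hi, max(lo, x))

def option_a(state_funding, state_ttr, natl_funding, natl_ttr):
    """Option A: state and local K-12 funding divided by total taxable
    resources (TTR), indexed to the nation and clamped to [0.95, 1.05]."""
    return clamp((state_funding / state_ttr) / (natl_funding / natl_ttr))

def option_b(option_a_factor, change_in_effort):
    """Option B: option A scaled by (1 + rate of change in effort from
    the prior year), then clamped to the same bounds."""
    return clamp(option_a_factor * (1 + change_in_effort))

def option_c(unconstrained_effort):
    """Option C: unclamped, but only half the index varies:
    0.5 times the unconstrained effort factor, plus 0.5."""
    return 0.5 * unconstrained_effort + 0.5

# The report's worked example for option B: a 1.01 factor with a
# 3 percent improvement becomes 1.01 * 1.03, or about 1.04; with a
# 3 percent decline it becomes 1.01 * 0.97, or about 0.98.
```

Because option B clamps the result, a state already near 1.05 gains little from further improvement; option C removes that ceiling while halving the spread between extremes.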
The equity factor of the Education Finance Incentive Program formula provides that additional title I aid go to those states with relatively low disparities in per pupil expenditures among local educational agencies (LEAs). The particular statistical measure used to determine the level of spending disparities in each state under current law is the coefficient of variation (COV). The COV is defined as the standard deviation divided by the mean. Assuming a normal distribution, approximately two-thirds of all school districts will fall within one standard deviation of the mean, or average. For example, if the state’s average per pupil spending is $6,000 and the standard deviation is $2,000, then approximately two-thirds of districts in the state will be spending between $4,000 and $8,000 per pupil. However, average spending levels vary greatly from state to state. To provide fair comparisons across the states, each state’s standard deviation is divided by its average spending level. Using this example, the COV is $2,000 divided by $6,000 or 0.33. In addition to defining the measure of equity as the COV, the law specifies that the COV should be weighted by the number of poor children in each school district for each state. The Congressional Research Service has pointed out that “the effect of the additional weighting for poor children is that expenditure disparities in favor of LEAs with relatively large numbers of poor children would reduce a State’s measured COV, while expenditure disparities in favor of LEAs with relatively low numbers of poor children would increase a State’s COV.” To provide a simplified example of how weighting might work, assume a state has two school districts—one with only poor students and another with no poor students. Also assume that the per pupil spending is the same for the two districts, except that 40 percent more funding per poor student is provided to fund additional services. 
Such a state would appear to have a significant spending disparity. However, a weighted COV takes into account the differences in student needs. In the current law, the 1.4 weighting per poor student stems, in part, from title I, which authorizes an additional 40 percent in per pupil spending to provide services to educationally disadvantaged children in high poverty areas. However, appropriations for title I have generally been less than half of what is authorized. As a result, in a recent study, researchers used 1.2 as a weighting for poor students. The law also contains a number of other adjustments that take into account complexities arising from various types of LEAs (for example, elementary, secondary, and unified), extremely small LEAs, and other factors. Under current law, separate COVs are used for elementary, secondary, and unified school districts. A statewide COV is then determined by calculating a weighted average based on the number of weighted students (that is, counting poor students as 1.4) for each type of district occurring within the state. Some states have only unified school districts serving students from kindergarten through grade 12; other states have elementary, secondary, and unified school districts. The law excludes spending in extremely small school districts, such as those in remote areas, because spending in these school districts may be atypical. Our work is based on statistical analyses of fiscal and demographic data from the nation’s school districts for school years 1991-92 and 1989-90, the latest years for which expenditure data for the universe of school districts were available. In determining alternative equity factors, we also treated the various types of school districts—elementary, secondary, and unified—separately and computed a statewide weighted average. As is the case under current law, we excluded school districts with fewer than 200 students enrolled and districts that reported they had no schools. 
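The poverty-weighted COV described above can be sketched as follows. The 1.4 weight comes from current law, but the district data layout and the use of weighted pupil counts as district weights are simplifying assumptions; the statutory handling of district types and size exclusions is omitted.

```python
import math

def weighted_cov(districts, poor_weight=1.4):
    """Coefficient of variation in spending per weighted pupil.

    Each district is a (total_spending, nonpoor_pupils, poor_pupils)
    tuple; poor pupils count as 1.4 weighted pupils, so a district
    that spends 40 percent more per poor pupil shows no disparity.
    """
    spend, weights = [], []
    for spending, nonpoor, poor in districts:
        w = nonpoor + poor_weight * poor
        spend.append(spending / w)   # spending per weighted pupil
        weights.append(w)
    total_w = sum(weights)
    mean = sum(s * w for s, w in zip(spend, weights)) / total_w
    var = sum(w * (s - mean) ** 2 for s, w in zip(spend, weights)) / total_w
    return math.sqrt(var) / mean

# The report's two-district example: one all-poor district spending
# 40 percent more per pupil, one no-poverty district. Unweighted, this
# looks like a disparity; weighted, the disparity vanishes.
districts = [(7_000_000, 0, 1_000), (5_000_000, 1_000, 0)]
print(round(weighted_cov(districts), 6))  # prints 0.0
```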
We also excluded districts with expenditures that were likely to be atypical, such as those devoted primarily to vocational or special education. In examining expenditure equity, we used total current expenditures, which do not include expenditures for debt service or capital outlay. Data limitations include an underreporting of the number of children who have limited English proficiency and those with disabilities. Although NCES data are sufficient for the purposes of illustrating alternatives to the current effort and equity factors, NCES may want to address a number of cases of missing data or irregularities in the school district database. The data available to us on school district finances and numbers of special needs children by school district were compiled by NCES and the Bureau of the Census. Data on numbers of children with limited English proficiency came from parents’ reports to the Bureau of the Census about whether their children speak English “not well” or “not at all.” In addition, five states in school year 1991-92 and eight states in school year 1989-90 did not provide numbers of children with disabilities. In developing an equity factor for these states without data, we were not able to take into account differences among school districts in the number of children with disabilities. To identify ways of improving the current measures of equity used in the Education Finance Incentive Program, we examined (1) the comprehensiveness of the various measures of equity available, (2) the comprehensiveness of the measures accounting for differences in student needs across districts, (3) the effect of including a measure of purchasing power across districts, and (4) the effect of including a direct incentive for states to improve their levels of equity over time.
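To make the computations described above concrete, the following is a minimal sketch in Python (our illustration, not the statute's exact procedure; the district figures are hypothetical, and the weighted variant takes a simplified reading in which each district's spending per weighted pupil is averaged using its weighted-pupil count):

```python
def cov(values):
    """Coefficient of variation: (population) standard deviation / mean."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return std / mean

def weighted_cov(districts, poverty_weight=1.4):
    """COV of spending per weighted pupil, with each district weighted by its
    weighted-pupil count (a simplified reading of the statutory measure).
    districts: list of (total_spending, nonpoor_pupils, poor_pupils)."""
    wpupils = [n + poverty_weight * p for _, n, p in districts]
    per_wpupil = [s / w for (s, _, _), w in zip(districts, wpupils)]
    total = sum(wpupils)
    mean = sum(x * w for x, w in zip(per_wpupil, wpupils)) / total
    var = sum(w * (x - mean) ** 2 for x, w in zip(per_wpupil, wpupils)) / total
    return var ** 0.5 / mean

# The two-district example from the text: one all-poor district spending
# 40 percent more per pupil, one district with no poor pupils.
districts = [(840_000, 0, 100),   # all poor: $8,400 per pupil
             (600_000, 100, 0)]   # no poor:  $6,000 per pupil
print(cov([8_400, 6_000]))        # unweighted COV shows a disparity
print(weighted_cov(districts))    # weighted COV is 0: spending matches needs
```

As the example shows, the unweighted COV registers a disparity, while the poverty-weighted COV recognizes that the higher-spending district is funding additional services for poor children.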
We reviewed literature related to measures of school finance equity, in particular a set of expert papers prepared for the Department of Education in 1992 to evaluate whether the measures of equity used in the Department’s Impact Aid program could be improved. Although focused on Impact Aid (see app. VI), these papers informed the discussion related to equity measures used in title I as well. Some of these experts generally agreed that an equity measure would be better if it took into account a large portion of each state’s school districts in determining the disparity in per pupil spending across the state, as the COV does. Robert Berne and Leanna Stiefel also suggested that another measure of spending disparities be used—the McLoone Index. The COV takes into account per pupil spending in all of each state’s school districts and, therefore, is a comprehensive measure of equity. Another measure of equity, the McLoone Index, focuses on equity for school districts that spend less than the median. This index is the ratio of the sum of expenditures for districts below the median to what the expenditures in these districts would be if they were able to spend at the median level per pupil. Where per pupil expenditures are equal for all the districts in the state that are at or below the median, the McLoone Index is 1.0. Experts suggested that taking into account student needs would improve current measures of equity. For example, if one district in a state has many pupils who are poor, have limited English proficiency, or otherwise need special educational services, it may be appropriate for the state to provide more aid for that district than for districts without high proportions of these at-risk pupils. Therefore, it may be necessary to make adjustments to consider that one district has greater student needs by weighting its students according to their need for additional services.
If, for example, the cost of educating a student with limited English proficiency is, on average, 20 percent more than the cost of educating a student without additional needs, these students would then be weighted 1.2. If the district were able to spend at the level needed to cover these additional services, its expenditures per weighted pupil would show that it was spending at a level comparable with districts with fewer students with additional needs. We chose to use a set of weights developed for the NCES report, Disparities in Public School District Spending: 1989-90, that takes into account differences in student needs across school districts. The researchers assigned students with disabilities a weight of 2.3 because the cost of educating such children is generally 2.3 times the cost of educating children who do not need special educational services, although the cost of educating children with specific types of disabilities varies widely. The report used weights of 1.2 for children from poor families or those who have limited English proficiency. This additional 0.2 weighting for students in poverty stems from an estimate based on the average title I allocation per student divided by average revenues per student. The rationale for using a weight of 1.2 for children with limited English proficiency is based on an expectation that they will need additional educational services, comparable with those for poor children, although school districts may generally spend less than this currently. We also consulted with the Congressional Research Service on the issue of whether to use 1.2 or 1.4 as a weight for the number of poor children and those with limited English proficiency. While the current equity measure uses the higher weight of 1.4 to adjust for the greater needs of poor children, it does not adjust for the greater needs of other students needing additional services, such as students with disabilities or limited English proficiency. 
But because we were taking into account the additional costs associated with educating these students and because there may be some double counting, that is, students may be weighted twice if they are both poor and have limited English proficiency (or have disabilities and limited English proficiency), we decided the weight of 1.2 was more appropriate in this case. More precise estimates are not available on the cost of educating students who may have multiple types of special needs; moreover, data currently available do not allow us to estimate numbers of such children by school district. We believe that adjusting for differences in purchasing power across a state’s districts is useful in providing more comparable measures of spending levels, or spending disparities, across districts. For example, district A may be able to hire teachers of the same quality at 80 percent of the cost of district B because district A may be in a part of the state that offers lower housing costs, greater availability of desirable services, or better weather. If each district spends $4,000 per pupil, district B will not be able to provide the same level of services to its students as district A. Therefore, to provide greater comparability, we adjusted the spending levels of the various districts to take into account differences in purchasing power reflected in the cost of hiring and retaining teachers. We used a teacher cost index recently developed for the National Center for Education Statistics. While an index that examines differences in the cost of living is also available by district, we believe that the NCES teacher cost index is better suited to our purpose of providing comparability across districts because it considers the purchasing power of districts in determining personnel-related costs, a major cost to school districts. 
Our focus is on the district’s ability to provide comparable educational services to its students, rather than on whether teachers’ salaries are adequate given the cost of living in their area. Not all costs, however, vary within the state. For example, the costs of books, instructional materials, and other supplies and equipment tend to vary little within a state or, for some items, the nation. Therefore, we used the teacher cost index to adjust only the portion of expenditures generally estimated to be related to personnel costs. We used an estimate developed by Stephen Barro for NCES; he calculated that 84.8 percent of total current expenditures are personnel costs, including salaries, fringe benefits, and some purchased services. In two of the five alternative measures of equity we developed, options D and F, states are rewarded solely on the basis of their current level of equity, as is the case under current law. Another way to compare states is to include a measure of whether and to what extent states have improved their level of equity in recent years. Our three other alternative measures of equity, options E, G, and H, take into account the rate of change in the level of equity from school year 1989-90 to school year 1991-92, the most recent comprehensive data available on school district level finances. Our first alternative, option D, uses a COV, as the current title I equity measure does. In addition, option D takes into account the needs of students with limited English proficiency or disabilities as well as those who are poor. We used weights of 1.2 for poor students and those with limited English proficiency, and 2.3 for students with disabilities, rather than the weighting of 1.4 for only poor students as in the current measure of equity. We also used a teacher cost index to adjust for purchasing power differences across school districts. As with the current measure, we subtracted this adjusted COV from 1.3 so that the two measures are comparable.
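A sketch of the option D calculation (hypothetical numbers; the 0.848 personnel share follows Barro's estimate, and the COV value passed to `option_d` stands in for the student-need-weighted, cost-adjusted COV described above):

```python
PERSONNEL_SHARE = 0.848  # Barro's estimate of the personnel-related share

def cost_adjust(per_pupil_spending, teacher_cost_index):
    """Deflate only the personnel portion of per-pupil spending by the
    district's teacher cost index; non-personnel costs are left unadjusted."""
    personnel = per_pupil_spending * PERSONNEL_SHARE
    other = per_pupil_spending * (1 - PERSONNEL_SHARE)
    return personnel / teacher_cost_index + other

def option_d(adjusted_weighted_cov):
    """Option D equity factor: 1.3 minus the cost-adjusted, weighted COV."""
    return 1.3 - adjusted_weighted_cov

# A district with a teacher cost index of 1.2 (teacher costs 20 percent
# above average) gets fewer comparable services per dollar spent:
print(cost_adjust(6_000, 1.2))
# A state whose adjusted, weighted COV is 0.20 would receive a factor of 1.10:
print(option_d(0.20))
```

The subtraction from 1.3 keeps the sketch on the same scale as the current statutory equity factor.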
We limited this measure, along with three of the other four we developed, so that no state’s measure is less than 0.95 or more than 1.3. (The current equity factor ranges from 0.99 to 1.3; see table IV.1.) This limitation affected few states and resulted in relatively minor changes. Option E is a variation of option D that takes into account a state’s improvement in equity over time, from school year 1989-90 to school year 1991-92. First we calculated a factor similar to option D, using 1989-90 data, and determined the rate of change from 1989-90 to 1991-92. We then multiplied option D by 1 plus the rate of change. Thus, for example, if the state’s equity factor improved by 3 percent over that time, option E would yield a 3-percent increase in the factor over option D. If a state’s level of equity decreased by 3 percent, option E would be 3 percent lower than option D. For example, if a state currently had an equity factor of 1.10 under option D and it improved by 3 percent, option E would be equal to 1.10 multiplied by 1.03, or 1.13. If, instead, it decreased by 3 percent over this time, option E would be equal to 1.10 multiplied by 0.97, or 1.07. We also developed two options, F and G, based on the McLoone Index, which measures the extent to which the state brings up the expenditures of those districts spending below the median. Option F is based on the McLoone Index and includes the same adjustments for differences in student needs and purchasing power across a state’s school districts. Under the current law, states that are fully equalized receive a score of 1.30; states that are fully equalized using the McLoone Index receive a score of 1.00. To ensure that the equity factors we developed were comparable with the current equity factor, option F is equal to the adjusted McLoone Index plus 0.30. (The 0.30 is determined by subtracting 1.00 from 1.30.)
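The option E and option F constructions can be sketched as follows (hypothetical data; for simplicity this uses an unweighted median across districts, whereas the actual computation weights districts by pupil counts and applies the student-need and cost adjustments described above):

```python
from statistics import median

def clamp(factor, lo=0.95, hi=1.30):
    """Apply the floor and ceiling used for options D through G."""
    return min(max(factor, lo), hi)

def mcloone_index(districts):
    """Ratio of actual spending in districts at or below the median to what
    those districts would spend at the median per-pupil level.
    districts: list of (per_pupil_spending, pupils)."""
    med = median(x for x, _ in districts)
    low = [(x, n) for x, n in districts if x <= med]
    actual = sum(x * n for x, n in low)
    at_median = sum(med * n for _, n in low)
    return actual / at_median

def option_e(option_d_now, change_rate):
    """Option D scaled by 1 plus the rate of change in equity over time."""
    return clamp(option_d_now * (1 + change_rate))

def option_f(districts):
    """Adjusted McLoone Index plus 0.30, so full equalization scores 1.30."""
    return clamp(mcloone_index(districts) + 0.30)

# The worked example from the text: a 1.10 factor improving by 3 percent.
print(option_e(1.10, 0.03))   # roughly 1.13
# Equal per-pupil spending yields a McLoone Index of 1.0 and option F of 1.30.
print(option_f([(6_000, 100), (6_000, 250)]))
```

Note that option H would simply omit the `clamp` step applied in `option_e`.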
In this way, states that are fully equalized would receive a score of 1.30, just as they currently do under the existing equity factor. We also developed an equity factor based on the McLoone Index that takes into account the rate of change in equity over time, which we refer to as option G. We used the option F method to calculate indexes using data for school years 1989-90 and 1991-92. Again, states that increase their level of equity over time receive an increase in their equity factor under option G, while those whose level of equity declines receive a lower score under option G than option F. As noted earlier, options D through G include limits such that the equity factor cannot drop below 0.95 or rise above 1.30; the use of these limits affected few states. We also developed option H, which is identical to option E except that it uses no limits. As shown in table IV.1, the equity factors for options E and H are identical except for four states: California, Louisiana, New York, and Rhode Island. We examined the correlations between equity and states’ demographic characteristics, such as state median income, within-state deviations in median household income, state percentage of school-age children in poverty, and within-state deviations in percentage of school-age children in poverty. We examined the extent to which title I’s current equity factor and the alternative equity factors we developed were related to these demographic factors. When we examined the correlation between the various equity factors and variability in median household income across a state’s districts, we found that the variable “within state deviations in median household income” was negatively correlated with each equity factor. That is, the lower the variability in median household income across districts in the state, the higher the equity factor, and vice versa.
The association was strongest, however, for the current equity factor and weakest for the three alternatives that consider improvement in equity over time—options E, G, and H (see table IV.2). Each of the equity factors we developed, however, as well as the current one, was negatively correlated with the variable “within-state deviations in percentage of school-age children in poverty.” We found no correlation between a state’s percentage of school-age children in poverty and the five alternative equity factors we developed; similarly, there was no correlation for the current equity factor. In addition, there was no significant correlation between a state’s median household income and the current or alternative equity factors. To show how improved measures might be used, this appendix provides illustrative allocations using the current formula for the Education Finance Incentive Program along with alternative formulas we developed; we also provide a state-by-state estimate of allocations under the targeted grant formula for comparison purposes. While NCES data are sufficient for the purposes of illustrating the general effects of changes in the effort and equity factors, NCES may want to address a number of cases of missing data or irregularities in the school district databases. As noted earlier, if funds were appropriated for the Education Finance Incentive Program, each state’s grant would be determined by the formula in figure V.1. We noted earlier that two of the effort factor options we developed, options B and C, have several benefits compared with the current effort factor. For example, options B and C (1) are more comprehensive, (2) are not biased against states with high proportions of school-age children, and (3) include an incentive for states that improve their level of effort over time. 
Equity options E, G, and H also have several benefits compared with the current equity factor: (1) they consider the additional education costs related to numbers of students with limited English proficiency or disabilities, in addition to poor students; (2) they consider differences in purchasing power across districts; and (3) they include a bonus for states that become more equitable over time. Preferences for options E or H versus option G depend on interest in measuring variation in spending levels for all school districts in the state (options E or H) or focusing on a state’s ability to bring low-spending school districts up to the median (option G). Figure V.2 shows the three illustrative alternative formulas based, in part, on the alternative effort and equity factors we developed. Table V.1 shows illustrative state allocations under current and alternative title I formulas for the Education Finance Incentive Program and targeted grants. In addition to an overview of the Impact Aid program and the way that equalization is defined for this program, this appendix discusses some of the strengths and weaknesses of that definition. The Impact Aid program is intended to compensate school districts for either a loss of tax revenues, because federal property is tax exempt, or increased expenditures because of federal activity, for example, the cost of educating the children of military personnel. Under the Impact Aid program, if a state meets a certain equalization level, it may reduce state aid payments to offset the Impact Aid received by school districts. In this way, these states can ensure that Impact Aid funds will not contribute to creating greater inequalities within the state.
If the state does not pass Impact Aid’s test of equalization, it may not consider federal Impact Aid payments to its localities in determining state aid (which would be likely to result in decreasing state aid to those districts) because the Impact Aid payments are meant for localities, not states. Prior to the reauthorization of the Impact Aid program in 1994, the Department of Education asked education finance experts to examine the way that equalization is determined in the program. These experts suggested, among other things, two improvements: (1) use measures of spending disparities that are more comprehensive than the Federal Range Ratio, and (2) consider differences in purchasing power and student needs more systematically. Both issues are still relevant. The measure currently used to determine the level of equalization in per pupil spending in the Impact Aid program, the Federal Range Ratio, is the difference in per pupil expenditures between two districts—a high-spending district (95th percentile) and a low-spending district (5th percentile)—divided by the per pupil expenditure of the low-spending district. One major drawback of this measure is that it does not consider spending in a majority of each state’s school districts; it only considers spending in two of the state’s school districts (at the 95th and 5th percentiles). Consequently, two states with fairly different spending patterns may have similar Federal Range Ratio scores. For example, one state may have per pupil spending clustered around the average spending level with little variation between the two extremes of the 95th and 5th percentiles, while another state may have per pupil spending that varies greatly between these two points. Also, the Impact Aid program’s system for determining a state’s level of equalization does not consider the additional funds states may provide to high-need areas. 
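The Federal Range Ratio calculation can be sketched as follows (hypothetical data; a simple nearest-rank percentile is used here, which may differ in detail from the program's exact percentile definition):

```python
def federal_range_ratio(per_pupil):
    """(95th percentile - 5th percentile) / 5th percentile of per-pupil
    spending, using a nearest-rank percentile as a simplification."""
    vals = sorted(per_pupil)

    def pct(q):
        # Index of the value closest to the q-th quantile position.
        k = int(round(q * (len(vals) - 1)))
        return vals[k]

    low, high = pct(0.05), pct(0.95)
    return (high - low) / low

# 21 hypothetical districts spending $4,000 to $8,000 per pupil in $200 steps.
spending = list(range(4_000, 8_001, 200))
print(federal_range_ratio(spending))  # a disparity of roughly 86 percent
```

Because only the two percentile points enter the ratio, any redistribution of spending among the districts between them leaves the measure unchanged, which illustrates the comprehensiveness concern noted above.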
For example, some states provide additional funds to take into account the greater needs of some types of students (such as those who are poor or have limited English proficiency or disabilities) or some types of districts (such as those in sparsely populated areas). On the one hand, not including such funds is a strength of the way that equalization is measured because states that contribute additional funds to high-need areas are not penalized for these greater spending disparities. On the other hand, the overall method of determining equalization is weakened by not considering such funds (and making related adjustments) because it may not adequately take into account the circumstances of districts in high-need areas. A number of implications arise from an analysis of the current method for determining equalization under the Impact Aid program. First, the measure of spending disparities may be misleading. Second, the way in which states treat districts in high-need areas is not fully addressed. Third, few states qualify as equalized under this measure. Fourth, to the extent that Impact Aid may allow some districts to be compensated twice for the “impact” of a federal presence—once by the federal government and once by the state government—it may actually contribute to creating less, rather than more, equalization in the state. Robert Dinkelmeyer, Senior Analyst; Wayne Dow, Supervisory Operations Research Analyst; Jerry Fastrup, Supervisory Economist; and Deborah McCormick, Senior Social Science Analyst, provided technical advice regarding statistical and school finance issues.
Pursuant to a congressional request, GAO reviewed the measures of equity and effort included in the Title I Education Finance Incentive Program, focusing on: (1) several options for improving these measures; (2) the characteristics of states with higher levels of equity and effort; and (3) alternative ways of using these measures in allocating title I funds. GAO found that: (1) the Education Finance Incentive Program's definition of effort does not include a comprehensive measure of states' funding capacity, penalizes states with a high proportion of school-age children, and provides no incentive for low-effort states to increase their effort in funding elementary and secondary education; (2) the title I definition of equity excludes a comprehensive measure of the differences in students' needs among school districts and the differences in purchasing power among districts; (3) adjustments to the effort factor should include using states' total taxable resources in establishing the states' education funding capacity and eliminating the bias against low-effort states; (4) adjustments to the equity factor should include rewarding states for increasing their level of equity in education spending and considering more fully the differences in educating higher-needs students; and (5) the allocation of title I funds should be based on the number of children in poverty rather than on the states' total number of school-age children.
The H-1B program enables companies in the United States to hire foreign workers for work in specialty occupations on a temporary basis. A specialty occupation is defined as one requiring theoretical and practical application of a body of highly specialized knowledge and the attainment of a bachelor’s degree or higher (or its equivalent) in the field of specialty. The law originally capped the number of H-1B visas at 65,000 per year; the cap was raised twice pursuant to legislation, but in fiscal year 2004, the cap reverted to its original level of 65,000. Statutory changes also allowed for certain categories of individuals and companies to be exempt from or to receive special treatment under the cap. The American Competitiveness in the Twenty-First Century Act of 2000 exempted from the cap all individuals hired by institutions of higher education and by nonprofit and governmental research organizations. More recently, the H-1B Visa Reform Act of 2004 allowed for an additional 20,000 visas each year for foreign workers holding a master’s degree or higher from an American institution of higher education to be exempted from the numerical cap limitation. In addition, beginning in 2004 and consistent with free trade agreements, up to 6,800 of the 65,000 H-1B visas may be set aside each year for workers from Chile and Singapore. While the H-1B visa is not considered a permanent visa, H-1B workers can apply for extensions and pursue permanent residence in the United States. Initial petitions are those filed for a foreign national’s first-time employment as an H-1B worker and are valid for a period of up to 3 years. Generally, initial petitions are counted against the annual cap. Extensions—technically referred to as continuing employment petitions—may be filed to extend the initial petitions for up to an additional 3 years. Extensions do not count against the cap. While working under an H-1B visa, a worker may apply for legal permanent residence in the United States.
After filing an application for permanent residence, H-1B workers are generally eligible to obtain additional 1-year visa extensions until their U.S. Permanent Resident Cards, commonly referred to as “green cards,” are issued. The Departments of Labor (Labor), Homeland Security (Homeland Security), and State (State) each play a role in administering the application process for an H-1B visa. Labor’s Employment and Training Administration (Employment and Training) receives and approves an initial application, known as the Labor Condition Application (LCA), from employers. The LCA, which Labor reviews as part of the application process, requires employers to make various attestations designed to protect the jobs of domestic workers and the rights and working conditions of temporary workers. Homeland Security’s U.S. Citizenship and Immigration Services (USCIS) reviews an additional employer application, known as the I-129 petition, and ultimately approves H-1B visa petitions. For prospective H-1B workers residing outside the United States, State interviews approved applicants and compares information obtained during the interview against each individual’s visa application and supporting documents, and ultimately issues the visa. For prospective H-1B workers already residing in the United States, USCIS updates the workers’ visa status without involvement from State. USCIS has primary responsibility for administering the H-1B cap. Generally, it accepts H-1B petitions in the order in which they are received. However, for those years in which USCIS anticipates that the number of I-129 petitions filed will exceed the cap, USCIS holds a “lottery” to determine which of the petitions will be accepted for review. For the lottery, USCIS uses a computer-generated random selection process to select the number of petitions necessary to reach the cap. With regard to enforcement, Labor, the Department of Justice (Justice), and Homeland Security each have specific responsibilities. 
Labor’s Wage and Hour Division (Wage and Hour) is responsible for enforcing program rules by investigating complaints made against employers by H-1B workers or their representatives and assessing penalties when employers are not in compliance with the requirements of the program. Justice is responsible for investigating complaints made by U.S. workers who allege that they have been displaced or otherwise harmed by the H-1B visa program. Finally, USCIS’s Directorate of Fraud Detection and National Security (FDNS) collaborates with Homeland Security’s Immigration and Customs Enforcement office to investigate fraud and abuse in the program. Over the past decade, demand for H-1B workers tended to exceed the cap, as measured by the number of initial petitions submitted by employers, one of several proxies used to measure demand since a precise measure does not exist. As shown in figure 1, from 2000 to 2009, initial petitions for new H-1B workers submitted by employers who are subject to the cap exceeded the cap in all but 3 fiscal years. However, the number of initial petitions subject to the cap is likely to be an underestimate of demand since, once the cap has been reached, employers subject to the cap may stop submitting petitions and Homeland Security stops accepting petitions. If initial petitions submitted by employers exempt from the cap are also included in this measure (also shown in figure 1), the demand for new H-1B workers is even higher, since over 14 percent of all initial petitions across the decade were submitted by employers who are not subject to the cap. In addition to initial requests for H-1B workers, employers requested an average of 148,000 visa extensions per year, for an average of over 280,000 annual requests for H-1B workers. Over the decade, the majority (over 68 percent) of employers were approved to hire only one H-1B worker, while fewer than 1 percent of employers were approved to hire almost 30 percent of all H-1B workers.
Among these latter employers are those that function as “staffing companies” that contract out H-1B workers to other companies. The prevalence of such companies participating in the H-1B visa program is difficult to determine. There are no disclosure requirements and Homeland Security does not track such information. However, using publicly available data, we learned that at least 10 of the top 85 H-1B-hiring employers in fiscal year 2009 participated in staffing arrangements, of which at least 6 have headquarters or operations located in India. Together, in fiscal year 2009, these 10 employers garnered 11,456 approvals, or about 6 percent of all H-1B approvals. Further, 3 of these employers were among the top 5 H-1B-hiring companies, receiving 8,431 approvals among them. To better understand the impact of the H-1B program and cap on H-1B employers, GAO spoke with 34 companies across a range of industries about how the H-1B program affects their research and development (R&D) activities, their decisions about whether to locate work overseas, and their costs of doing business. Although several firms reported that their H-1B workers were essential to conducting R&D within the United States, most companies we interviewed said that the H-1B cap had little effect on their R&D or decisions to locate work offshore. Instead, they cited other reasons to expand overseas, including access to pools of skilled labor abroad, the pursuit of new markets, the cost of labor, access to a workforce in a variety of time zones, language and culture, and tax law. The exception to this came from executives at some information technology services companies, two of which rely heavily on the H-1B program. Some of these executives reported that they had either opened an offshore location to access labor from overseas or were considering doing so as a result of the H-1B cap or changes in the administration of the H-1B program.
Many employers we interviewed cited costs and burdens associated with the H-1B cap and program. The majority of the firms we spoke with had H-1B petitions denied due to the cap in years when the cap was reached early in the filing season. In these years, the firms did not know which, if any, of their H-1B candidates would obtain a visa, and several firms said that this created uncertainty that interfered with both project planning and candidate recruitment. In these instances, most large firms we interviewed reported finding other (sometimes more costly) ways to hire their preferred job candidates. For example, several large firms we spoke with were able to hire their preferred candidates in an overseas office temporarily, later bringing the candidate into the United States, sometimes on a different type of visa. On the other hand, small firms were sometimes unable to afford these options, and were more likely to fill their positions with different candidates, which they said resulted in delays and sometimes economic losses, particularly for firms in rapidly changing technology fields. Interviewed employers also cited costs associated with the adjudication and lottery process and suggested a variety of reforms. The majority of the 34 firms we spoke with maintained that the review and adjudication process had become increasingly burdensome in recent years, citing large amounts of paperwork required as part of the adjudication process. Some experts we interviewed suggested that to minimize paperwork and costs, USCIS should create a risk-based adjudication process that would permit employers with a strong track record of regulatory compliance in the H-1B program to access a streamlined process for petition approval. In addition, several industry representatives told us that because the lottery process does not allow employers to rank their top choices, firms do not necessarily receive approval for the most desired H-1B candidates.
Some experts suggested revising the system to permit employers to rank their applications so that they are able to hire the best qualified worker for the job in highest need. Finally, entrepreneurs and venture capital firms we interviewed said that program rules can inhibit many emerging technology companies and other small firms from using the H-1B program to bring in the talent they need, constraining the ability of these companies to grow and innovate in the United States. Some suggested that, to promote the ability of entrepreneurs to start businesses in the United States, Congress should consider creating a visa category for entrepreneurs, available to persons with U.S. venture backing. In our report, we recommended that USCIS, to the extent permitted by its existing statutory authority, explore options for increasing the flexibility of the application process for H-1B employers. In commenting on our report, Homeland Security and Labor officials expressed reservations about the feasibility of our suggested options, but Homeland Security officials also noted efforts under way to streamline the application process for prospective H-1B employers. For example, Homeland Security is currently testing a system to obtain and update some company data directly from a private data vendor, which could reduce the filing burden on H-1B petitioners in the future. In addition, Homeland Security recently proposed a rule that would provide for employers to register and learn whether they will be eligible to file petitions with USCIS prior to filing an LCA, which could reduce workloads for Labor and reduce some filing burden for companies. The total number of H-1B workers in the United States at any one point in time—and information about the length of their stay—is unknown due to data and system limitations. 
First, data systems among the various agencies that process H-1B applications are not easily linked, which makes it impossible to track individuals as they move through the application and entry process. Second, H-1B workers are not assigned a unique identifier that would allow agencies to track them over time or across agency databases—particularly if and when their visa status changes. Consequently, USCIS is not able to track the H-1B population with regard to: (1) how many approved H-1B workers living abroad have actually received an H-1B visa and/or ultimately entered the country; (2) whether and when H-1B workers applied for or were granted legal permanent residency, left the country, or remained in the country on an expired visa; and (3) the number of H-1B workers currently in the country or who have converted to legal permanent residency. Limitations in USCIS’s ability to track H-1B applications also hinder it from knowing precisely when and whether the annual cap has been reached each year—although the Immigration and Nationality Act requires the department to do so. According to USCIS officials, its current processes do not allow them to determine precisely when the cap on initial petitions is reached. To deal with this problem, USCIS estimates when the number of approvals has reached the statutory limit and stops accepting new petitions. Although USCIS is taking steps to improve its tracking of approved petitions and of the H-1B workforce, progress has been slow to date. Through its “Transformation Program,” USCIS is developing an electronic I-129 application system and is working with other agencies to create a cross-reference table of agency identifiers for individuals applying for visas that would serve as a unique person-centric identifier. When this occurs, it will be possible to identify who is in the United States at any one point in time under any and all visa programs. 
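The cross-reference table of agency identifiers described above would, in effect, let each agency's records be joined on a single person-centric key. A minimal sketch of the idea (all identifier formats, record fields, and data here are hypothetical, not drawn from any actual DHS or State system):

```python
# Each system stores records under its own identifier scheme (all data invented).
petition_records = {"PET-100": {"worker": "A", "status": "approved"}}
visa_records = {"VIS-7788": {"worker": "A", "issued": "2009-03-02"}}
entry_records = {"ENT-555": {"worker": "A", "entered": "2009-04-15"}}

# A cross-reference table maps each system's identifier to one
# person-centric key, so records can be linked across databases.
crossref = {
    "PERSON-1": {"petition": "PET-100", "visa": "VIS-7788", "entry": "ENT-555"},
}

def person_history(person_id):
    """Assemble one person's records from every system via the cross-reference table."""
    ids = crossref[person_id]
    return {
        "petition": petition_records.get(ids["petition"]),
        "visa": visa_records.get(ids["visa"]),
        "entry": entry_records.get(ids["entry"]),
    }

history = person_history("PERSON-1")
print(history["petition"]["status"], history["entry"]["entered"])
```

With such a table in place, questions the report says USCIS cannot currently answer, such as whether an approved worker ever entered the country, reduce to a lookup on the person-centric key followed by a join across the agency databases.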
However, the agency faces challenges with finalizing and implementing the Transformation Program. We recommended that Homeland Security, through its Transformation Program, take steps to (1) ensure that linkages to State’s tracking system will provide Homeland Security with timely access to data on visa issuances, and (2) that mechanisms for tracking petitions and visas against the cap be incorporated into business rules to be developed for USCIS’s new electronic petition system. While a complete picture of the H-1B workforce is lacking, data on approved H-1B workers provide some information about the H-1B workforce. Between fiscal year 2000 and fiscal year 2009, the top four countries of birth for approved H-1B workers (i.e., approved initial and extension petitions from employers both subject to the cap and cap-exempt) were India, China, Canada, and the Philippines. Over 40 percent of all such workers were approved for positions in system analysis and programming. As compared to fiscal year 2000, in fiscal year 2009, approved H-1B workers were more likely to be living in the United States than abroad at the time of their initial application, to have an advanced degree, and to have obtained their graduate degrees in the United States. Finally, data on a cohort of approved H-1B workers whose petitions were submitted between January 2004 and September 2007 indicate that at least 18 percent of these workers subsequently applied for permanent residence in the United States—of these applications, about half were approved, 45 percent were pending, and 3 percent were denied by 2010. The provisions of the H-1B program designed to protect U.S. workers—such as the requirement to pay prevailing wages, the visa’s temporary status, and the cap on the number of visas issued—are weakened by several factors. First, H-1B program oversight is shared by four federal agencies, and their roles and abilities to coordinate are restricted by law. 
As a result, there is only nominal sharing of the kind of information that would allow for better employer screening or more active and targeted pursuit of program abuses. For example, the review of employer applications for H-1B workers is divided between Labor and USCIS, and the thoroughness of both these reviews is constrained by law. In reviewing the employer’s LCA, Labor is restricted to looking for missing information and obvious inaccuracies, such as an employer’s failure to checkmark all required boxes on a form denoting compliance. USCIS’s review of the visa petition, the I-129, is not informed by any information that Labor’s Employment and Training Administration may possess on suspicious or problematic employers. With regard to enforcement of the H-1B worker protections, Wage and Hour investigations are constrained, first, by the fact that its investigators do not receive from USCIS any information regarding suspicious or problematic employers. They also do not have access to the Employment and Training Administration’s database of employer LCAs. Second, in contrast to its authority with respect to other labor protection programs, Wage and Hour lacks subpoena authority to obtain employer records for H-1B cases. As a result, according to investigators, it can take months to pursue time-sensitive investigations when an employer is not cooperative. To improve Labor’s oversight of the H-1B program, we recommended that its Employment and Training Administration grant Wage and Hour searchable access to the LCA database. Further, we asked Congress to consider granting Labor subpoena power to obtain employer records during investigations under the H-1B program. 
To reduce duplication and fragmentation in the administration and oversight of the application process, consistent with past GAO matters for Congressional consideration, we asked Congress to consider streamlining the H-1B approval process by eliminating the separate requirement that employers first submit an LCA to Labor for review and certification, since another agency (USCIS) subsequently conducts a similar review of the LCA. Another factor that weakens protection for U.S. workers is the fact that the H-1B program lacks a legal provision to hold employers accountable to program requirements when they obtain H-1B workers through staffing companies. As previously noted, staffing companies contract H-1B workers out to other employers. At times, those employers may contract the H-1B worker out again, creating multiple middlemen, according to Wage and Hour officials (see fig. 2). They explained that the contractual relationship, however, does not transfer the obligations of the contractor for worker protection to subsequent employers. Wage and Hour investigators reported that a large number of the complaints they receive about H-1B employers were related to the activities of staffing companies. Investigators from the Northeast region—the region that receives the highest number of H-1B complaints—said that nearly all of the complaints they receive involve staffing companies and that the number of complaints is growing. H-1B worker complaints about these companies frequently pertained to unpaid “benching”—when a staffing company does not have a job placement for the H-1B worker and does not pay the worker. In January 2010, Homeland Security issued a memo—commonly referred to as the “Neufeld Memo”—on determining when there is a valid employer-employee relationship between a staffing company and an H-1B worker for whom it has obtained a visa; however, officials indicated that it is too early to know whether the memo has improved program compliance. 
To help ensure the full protection of H-1B workers employed through staffing companies, in our report we asked that Congress consider holding the employer where an H-1B visa holder performs work accountable for meeting program requirements to the same extent as the employer that submitted the LCA form. Finally, changes to program legislation have diluted program provisions for protecting U.S. workers by allowing visa holders to seek permanent residency, broadening the job and skill categories for H-1B eligibility, and establishing exemptions to the cap. The Immigration Act of 1990 removed the requirement that H-1B visa applicants have a residence in a foreign country that they have no intention of abandoning. Consequently, H-1B workers are able to pursue permanent residency in the United States and remain in the country for an unlimited period of time while their residency application is pending. The same law also broadened the job and skill categories for which employers could seek H-1B visas. Labor’s LCA data show that between June 2009 and July 2010, over 50 percent of the wage levels reported on approved LCAs were categorized as entry-level (i.e., paid the lowest prevailing wage levels). However, such data do not, by themselves, indicate whether these H-1B workers were generally less skilled than their U.S. counterparts, or whether they were younger or more likely to accept lower wages. Finally, exemptions to the H-1B cap have increased the number of H-1B workers beyond the cap. For example, 87,519 workers in 2009 were approved for visas (including both initial petitions and extensions) to work for 6,034 cap-exempt companies. Taken together, the multifaceted challenges identified in our work show that the H-1B program, as currently structured, may not be used to its full potential and may be detrimental in some cases. 
Although we have recommended steps that executive agencies overseeing the program may take to improve tracking, administration, and enforcement, the data we present raise difficult policy questions about key program provisions that are beyond the jurisdiction of these agencies. The H-1B program presents a difficult challenge in balancing the need for high-skilled foreign labor with sufficient protections for U.S. workers. As Congress considers immigration reform in consultation with diverse stakeholders and experts—and while Homeland Security moves forward with its modernization efforts—this is an opportune time to re-examine the merits and shortcomings of key program provisions and make appropriate changes as needed. Such a review may include, but would not necessarily be limited to the qualifications required for workers eligible under the H-1B program, exemptions from the cap, the appropriateness of H-1B hiring by staffing companies, the level of the cap, and the role the program should play in the U.S. immigration system in relationship to permanent residency. If you or your staffs have any questions about this statement, please contact Andrew Sherrill at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to Andrew Sherrill (Director), Michele Grgich (Assistant Director) and Erin Godtland (Economist-in-Charge) led this engagement with writing and technical assistance from Nisha Hazra, Melissa Jaynes, Jennifer McDonald, Susan Bernstein (Education, Workforce and Income Security); and Rhiannon Patterson (Applied Research and Methods). 
Stakeholders included: Barbara Bovbjerg (Education, Workforce, and Income Security); Tom McCool (Applied Research and Methods); Ronald Fecso (Chief Statistician); Sheila McCoy and Craig Winslow (General Counsel); Hiwotte Amare and Shana Wallace (Applied Research and Methods); Richard Stana and Mike Dino (Homeland Security and Justice); Jess Ford (International Affairs and Trade). Barbara Steel-Lowney referenced the report. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony comments on the H-1B program. Congress created the current H-1B program in 1990 to enable U.S. employers to hire temporary, foreign workers in specialty occupations. The law capped the number of H-1B visas issued per fiscal year at 65,000, although the cap has fluctuated over time with legislative changes. The H-1B cap and the program itself have been a subject of continued controversy. Proponents of the program argue that it allows companies to fill important and growing gaps in the supply of U.S. workers, especially in the science and technology fields. Opponents of the program argue that there is no skill shortage and that the H-1B program displaces U.S. workers and undercuts their pay. Others argue that the eligibility criteria for the H-1B visa should be revised to better target foreign nationals whose skills are undersupplied in the domestic workforce. Our comments in this statement for the record are based on the results of our recent examination of the H-1B program, highlighting the key challenges it presents for H-1B employers, H-1B and U.S. workers, and federal agencies. Specifically, this statement presents information on (1) employer demand for H-1B workers; (2) how the H-1B cap affects employers' costs and whether they move operations overseas; (3) the government's ability to track the cap and H-1B workers over time; and (4) how well the provisions of the H-1B program protect U.S. workers. From 2000 to 2009, the demand for new H-1B workers tended to exceed the cap, as measured by the numbers of initial petitions submitted by employers who are subject to the cap. While the majority (68 percent) of employers were approved for one H-1B worker, demand was driven to a great extent by a small number (fewer than 1 percent) of H-1B employers garnering over one quarter of all H-1B approvals. Cap-exempt employers, such as universities and research institutions, submitted over 14 percent of the initial petitions filed during this period. 
Most of the 34 H-1B employers GAO interviewed reported that the H-1B program and cap created additional costs for them, such as delays in hiring and projects, but said the global marketplace and access to skilled labor—not the cap—drive their decisions on whether to move activities overseas. Limitations in agency data and systems hinder tracking the cap and H-1B workers over time. For example, data systems among the various agencies that process these individuals are not linked, so it is difficult to track H-1B workers as they move through the immigration system. System limitations also prevent the Department of Homeland Security from knowing precisely when and whether the annual cap has been reached each year. Provisions of the H-1B program that could serve to protect U.S. workers—such as the requirement to pay prevailing wages, the visa's temporary status, and the cap itself—are weakened by several factors. First, program oversight is fragmented among four agencies and restricted by law. Second, the H-1B program lacks a legal provision for holding employers accountable to program requirements when they obtain H-1B workers through a staffing company—a company that contracts out H-1B workers to other companies. Third, statutory changes made to the H-1B program over time—i.e., those that broadened job and skill categories for H-1B eligibility, increased exceptions to the cap, and allowed unlimited H-1B visa extensions while holders applied for permanent residency—have in effect increased the pool of H-1B workers beyond the cap and lowered the bar for eligibility.
Since the 1960s, the United States has operated two separate operational polar-orbiting meteorological satellite systems. These systems are known as the Polar-orbiting Operational Environmental Satellites (POES), managed by the National Oceanic and Atmospheric Administration’s (NOAA) National Environmental Satellite, Data, and Information Service (NESDIS), and the Defense Meteorological Satellite Program (DMSP), managed by the Department of Defense (DOD). These satellites obtain environmental data that are processed to provide graphical weather images and specialized weather products, and that are the predominant input to numerical weather prediction models—all used by weather forecasters, the military, and the public. Polar satellites also provide data used to monitor environmental phenomena, such as ozone depletion and drought conditions, as well as data sets that are used by researchers for a variety of studies, such as climate monitoring. Unlike geostationary satellites, which maintain a fixed position above the earth, polar-orbiting satellites constantly circle the earth in an almost north-south orbit, providing global coverage of conditions that affect the weather and climate. Each satellite makes about 14 orbits a day. As the earth rotates beneath it, each satellite views the entire earth’s surface twice a day. Today, there are two operational POES satellites and two operational DMSP satellites that are positioned so that they can observe the earth in early morning, mid-morning, and early afternoon polar orbits. Together, they ensure that for any region of the earth, the data provided to users are generally no more than 6 hours old. Figure 1 illustrates the current operational polar satellite configuration. Besides the four operational satellites, there are five older satellites in orbit that still collect some data and are available to provide some limited backup to the operational satellites should they degrade or fail. 
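The figure of about 14 orbits a day fixes the orbital period at roughly 103 minutes, and Kepler's third law then implies an altitude of roughly 900 kilometers, consistent with a low polar orbit. A back-of-the-envelope check (the physical constants are standard; the calculation is illustrative, not taken from the report):

```python
import math

# Standard gravitational parameter of Earth (m^3/s^2) and mean Earth radius (m)
MU_EARTH = 3.986e14
R_EARTH = 6.371e6

# 14 orbits per day implies an orbital period of about 103 minutes
period_s = 24 * 3600 / 14

# Kepler's third law, T^2 = 4*pi^2 * a^3 / mu, solved for the semi-major axis a
semi_major_axis = (MU_EARTH * period_s**2 / (4 * math.pi**2)) ** (1 / 3)

# Altitude above the mean surface, in kilometers
altitude_km = (semi_major_axis - R_EARTH) / 1000
print(f"period: {period_s / 60:.1f} min, altitude: {altitude_km:.0f} km")
```

The result, roughly 900 km, is the regime in which a satellite can sweep the rotating Earth's entire surface twice a day, as described above.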
In the future, both NOAA and DOD plan to continue to launch additional POES and DMSP satellites every few years, with final launches scheduled for 2008 and 2010, respectively. Each of the polar satellites carries a suite of sensors designed to detect environmental data either reflected or emitted from the earth, the atmosphere, and space. The satellites store these data and then transmit the data to NOAA and Air Force ground stations when the satellites pass overhead. The ground stations then relay the data via communications satellites to the appropriate meteorological centers for processing. Under a shared processing agreement among the four processing centers—NESDIS, the Air Force Weather Agency, the Navy’s Fleet Numerical Meteorology and Oceanography Center, and the Naval Oceanographic Office—different centers are responsible for producing and distributing different environmental data sets, specialized weather and oceanographic products, and weather prediction model outputs via a shared network. Each of the four processing centers is also responsible for distributing the data to its respective users. For the DOD centers, the users include regional meteorology and oceanography centers as well as meteorology and oceanography staff on military bases. NESDIS forwards the data to NOAA’s National Weather Service for distribution and use by forecasters. The processing centers also use the Internet to distribute data to the general public. NESDIS is responsible for the long-term archiving of data and derived products from POES and DMSP. In addition to the infrastructure supporting satellite data processing noted above, properly equipped field terminals that are within a direct line of sight of the satellites can receive real-time data directly from the polar-orbiting satellites. There are an estimated 150 such field terminals operated by the U.S. government, many by DOD. 
Field terminals can be taken into areas with little or no data communications infrastructure— such as on a battlefield or ship—and enable the receipt of weather data directly from the polar-orbiting satellites. These terminals have their own software and processing capability to decode and display a subset of the satellite data to the user. Figure 2 depicts a generic data relay pattern from the polar-orbiting satellites to the data processing centers and field terminals. Polar satellites gather a broad range of data that are transformed into a variety of products for many different uses. When first received, satellite data are considered raw data. To make them usable, the processing centers format the data so that they are time-sequenced and include earth location and calibration information. After formatting, these data are called raw data records. The centers further process these raw data records into data sets, called sensor data records and temperature data records. These data records are then used to derive weather products called environmental data records (EDR). EDRs range from atmospheric products detailing cloud coverage, temperature, humidity, and ozone distribution; to land surface products showing snow cover, vegetation, and land use; to ocean products depicting sea surface temperatures, sea ice, and wave height; to characterizations of the space environment. Combinations of these data records (raw, sensor, temperature, and environmental data records) are also used to derive more sophisticated products, including outputs from numerical weather models and assessments of climate trends. Figure 3 is a simplified depiction of the various stages of data processing. EDRs can be either images or quantitative data products. 
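The staged processing described above, from raw data to raw data records to sensor and temperature data records to EDRs, amounts to a data pipeline. The sketch below illustrates the flow only; the field names, calibration step, and the simple linear "retrieval" are invented placeholders, not actual POES or NPOESS algorithms:

```python
def to_raw_data_record(raw_packet):
    """Format raw downlinked data: attach time sequence, earth location, and calibration."""
    return {
        "time": raw_packet["timestamp"],
        "lat": raw_packet["geo"][0],
        "lon": raw_packet["geo"][1],
        "counts": raw_packet["counts"],
        "calibration": raw_packet.get("cal", 1.0),
    }

def to_sensor_data_record(rdr):
    """Convert calibrated instrument counts to a physical radiance value."""
    return {**rdr, "radiance": rdr["counts"] * rdr["calibration"]}

def to_environmental_data_record(sdr):
    """Derive a weather product (here, a notional brightness temperature)."""
    # Hypothetical linear retrieval; real EDR algorithms are far more complex.
    return {"time": sdr["time"], "lat": sdr["lat"], "lon": sdr["lon"],
            "brightness_temp_k": 180.0 + 0.05 * sdr["radiance"]}

packet = {"timestamp": "2003-06-01T12:00:00Z", "geo": (38.9, -77.0),
          "counts": 2400, "cal": 0.5}
edr = to_environmental_data_record(to_sensor_data_record(to_raw_data_record(packet)))
print(edr)
```

Each stage consumes the previous stage's record, mirroring how the processing centers derive EDRs and higher-level products from lower-level data records.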
Image EDRs provide graphical depictions of the weather and are used to observe meteorological and oceanographic phenomena, to track operationally significant events (such as tropical storms, volcanic ash, and icebergs), and to provide quality assurance for weather prediction models. The following figures demonstrate polar-orbiting satellite images. Figure 4 is an image from a DMSP satellite showing an infrared picture taken over the west Atlantic Ocean. Figure 5 is a POES image of Hurricane Floyd, which struck the southern Atlantic coastline in 1999. Figure 6 is a polar-satellite image used to detect volcanic ash clouds, in particular the ash cloud resulting from the eruption of Mount Etna in 2001. Figure 7 shows the location of icebergs near Antarctica in February 2002. Quantitative EDRs are specialized weather products that can be used to assess the environment and climate or to derive other products. These EDRs can also be depicted graphically. Figures 8 and 9 are graphic depictions of quantitative data on sea surface temperature and ozone measurements, respectively. An example of a product that was derived from EDRs is provided in figure 10. This product shows how long a person could survive in the ocean—information used in military as well as search and rescue operations—and was based on sea surface temperature EDRs from polar-orbiting satellites. Another use of quantitative satellite data is in numerical weather prediction models. Based predominantly on observations from polar-orbiting satellites and supplemented by data from other sources such as geostationary satellites, radar, weather balloons, and surface observing systems, numerical weather prediction models are used in producing hourly, daily, weekly, and monthly forecasts of atmospheric, land, and ocean conditions. These models require quantitative satellite data to update their analysis of weather and to produce new forecasts. Table 1 provides examples of models run by the processing centers. 
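The statement that models "require quantitative satellite data to update their analysis" describes data assimilation: blending a model forecast with new observations, weighted by their respective uncertainties. A one-dimensional sketch (the values and the simple Kalman-style update are illustrative; operational assimilation schemes are far more elaborate):

```python
def assimilate(forecast, forecast_var, obs, obs_var):
    """Blend one forecast value with one observation.

    The gain weights the observation by relative uncertainty,
    as in a one-dimensional Kalman update.
    """
    gain = forecast_var / (forecast_var + obs_var)
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1 - gain) * forecast_var
    return analysis, analysis_var

# Model forecasts 288.0 K (variance 4.0); a satellite sounding
# retrieval reports 290.0 K (variance 1.0). Invented numbers.
analysis, analysis_var = assimilate(288.0, 4.0, 290.0, 1.0)
print(analysis, analysis_var)
```

Because the observation is more certain than the forecast, the analysis is pulled strongly toward it, and the updated state becomes the starting point for the next forecast cycle.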
Figure 11 depicts the output of one common model. All this information—satellite data, imagery, derived products, and model output—is used in mapping and monitoring changes in weather, climate, the ocean, and the environment. These data and products are provided to weather forecasters for use in issuing weather forecasts and warnings to the public and to support our nation’s aviation, agriculture, and maritime communities. Also, weather data and products are used by climatologists and meteorologists to monitor the environment. Within the military, these data and products allow military planners and tactical users to focus on anticipating and exploiting atmospheric and space environmental conditions. For example, Air Force Weather Agency officials told us that accurate wind and temperature forecasts are critical to any decision to launch an aircraft that will need mid-flight refueling. In addition to these operational uses of satellite data, there is also a substantial need for polar satellite data for research. According to experts in climate research, the research community requires long-term, consistent sets of satellite data collected sequentially, usually at fixed intervals of time, in order to study many critical climate processes. Examples of research topics include long-term trends in temperature, precipitation, and snow cover. Given the expectation that merging the POES and DMSP programs would reduce duplication and result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and DOD to converge the two satellite programs into a single satellite program capable of satisfying both civilian and military requirements. The converged program is called the National Polar-orbiting Operational Environmental Satellite System (NPOESS), and it is considered critical to the United States’ ability to maintain the continuity of data required for weather forecasting and global climate monitoring. 
To manage this program, DOD, NOAA, and the National Aeronautics and Space Administration (NASA) have formed a tri-agency Integrated Program Office, located within NOAA. Within the program office, each agency has the lead on certain activities. NOAA has overall responsibility for the converged system, as well as satellite operations; DOD has the lead on the acquisition; and NASA has primary responsibility for facilitating the development and incorporation of new technologies into the converged system. NOAA and DOD share the costs of funding NPOESS, while NASA funds specific technology projects and studies. NPOESS is a major system acquisition estimated to cost almost $7 billion over the 24-year period from the inception of the program in 1995 through 2018. The program is to provide satellite development, satellite launch and operation, and integrated data processing. These deliverables are grouped into four main categories: (1) the launch segment, which includes the launch vehicle and supporting equipment, (2) the space segment, which includes the satellites and sensors, (3) the interface data processing segment, which includes the data processing system to be located at the four processing centers, and (4) the command, control, and communications segment, which includes the equipment and services needed to track and control satellites. Program acquisition plans call for the procurement and launch of six NPOESS satellites over the life of the program and the integration of 14 instruments, comprising 12 environmental sensors and 2 subsystems. Together, the sensors are to receive and transmit data on atmospheric, cloud cover, environmental, climate, oceanographic, and solar-geophysical observations. The subsystems are to support nonenvironmental search and rescue efforts and environmental data collection activities. 
According to the Integrated Program Office, 8 of the 14 planned NPOESS instruments involve new technology development, whereas 6 others are based on existing technologies. The planned instruments and the state of technology on each are listed in table 2. Unlike the current polar satellite program, in which the four centers use different approaches to process raw data into the environmental data records that they are responsible for, the NPOESS integrated data processing system—to be located at the four centers—is expected to provide a standard system to produce these data sets and products. The four processing centers will continue to use these data sets to produce other derived products, as well as for input to their numerical prediction models. NPOESS is planned to produce 55 EDRs, including atmospheric vertical temperature profile, sea surface temperature, cloud base height, ocean wave characteristics, and ozone profile. Some of these EDRs are comparable to existing products, whereas others are new. The user community designated six of these data products—supported by four sensors—as key EDRs, and noted that failure to provide them would cause the system to be reevaluated or the program to be terminated. The NPOESS acquisition program consists of three key phases: the concept and technology development phase, which lasted from roughly 1995 to early 1997; the program definition and risk reduction phase, which began in early 1997 and ended in August 2002; and the engineering and manufacturing development and production phase, which began in August 2002 and is expected to continue through the life of the program. The concept and technology development phase began with the decision to converge the POES and DMSP satellites and included early planning for the NPOESS acquisition. This phase included the successful convergence of the command and control of existing DMSP and POES satellites at NOAA’s satellite operations center. 
The program definition and risk reduction phase involved both system- level and sensor-level initiatives. At the system level, the program office awarded contracts to two competing prime contractors to prepare for NPOESS system performance responsibility. These contractors developed unique approaches to meeting requirements, designing system architectures, and developing initiatives to reduce sensor development and integration risks. These contractors competed for the development and production contract. At the sensor level, the program office awarded contracts to develop five sensors. This phase ended when the development and production contract was awarded. At that point, the winning contractor was expected to assume overall responsibility for managing continued sensor development. The final phase, engineering and manufacturing development and production, began when the development and production contract was awarded to TRW in August 2002. At that time, TRW assumed system performance responsibility for the overall program. This responsibility includes all aspects of design, development, integration, assembly, test and evaluation, operations, and on-orbit support. Shortly after the contract was awarded, Northrop Grumman Space Technology purchased TRW and became the prime contractor on the NPOESS project. In May 1997, the Integrated Program Office assessed the technical, schedule, and cost risks of key elements of the NPOESS program, including (1) overall system integration, (2) the launch segment, (3) the space segment, (4) the interface data processing segment, and (5) the command, control, and communications segment. As a result of this assessment, the program office determined that three elements had high risk components: the interface data processing segment, the space segment, and the overall system integration. 
Specifically, the interface data processing segment and overall system integration were assessed as high risk in all three areas (technical, cost, and schedule), whereas the space segment was assessed to be high risk in the technical and cost areas, and moderate risk in the schedule area. The launch segment and the command, control, and communications segment were determined to present low or moderate risks. The program office expected to reduce its high risk components to low and moderate risks by the time the development and production contract was awarded, and to have all risk levels reduced to low before the first launch. Table 3 displays the results of the 1997 risk assessment as well as the program office’s estimated risk levels by August 2002 and by first launch. In order to meet its goals of reducing program risks, the program office developed and implemented multiple risk reduction initiatives. One risk reduction initiative specifically targeted the space segment risks by initiating the development of key sensor technologies in advance of the satellite system itself. Because environmental sensors have historically taken 8 years to develop, the program office began developing six of the eight sensors with more advanced technologies early. In the late 1990s, the program office awarded contracts for the development, analysis, simulation, and prototype fabrication of five of these sensors. In addition, NASA awarded a contract for the early development of one other sensor. Responsibility for delivering these sensors was transferred from the program office to the prime contractor when the NPOESS contract was awarded in August 2002. Another major risk reduction initiative expected to address risks in three of the four segments with identified risks is called the NPOESS Preparatory Project (NPP). NPP is a planned demonstration satellite to be launched in 2006, several years before the first NPOESS satellite launch in 2009. 
It is scheduled to host three of the four critical NPOESS sensors (the visible/infrared imager radiometer suite, the cross-track infrared sounder, and the advanced technology microwave sounder), as well as two other noncritical sensors. Further, NPP will provide the program office and the processing centers an early opportunity to work with the sensors, ground control, and data processing systems. Specifically, this satellite is expected to demonstrate about half of the NPOESS EDRs and about 93 percent of its data processing load. Since our statement last year, the Integrated Program Office has made further progress on NPOESS. Specifically, it awarded the contract for the overall program and is monitoring and managing contract deliverables, including products that will be tested on NPP. The program office is also continuing to work on various other risk reduction activities, including learning from experiences with sensors on existing platforms, including NASA research satellites, the WINDSAT/Coriolis weather satellite, and the NPOESS airborne sounding testbed. While the program office has made progress both on the acquisition and risk reduction activities, the NPOESS program faces key programmatic and technical risks that may affect the successful and timely deployment of the system. Specifically, changing funding streams and revised schedules have delayed the expected launch date of the first NPOESS satellite, and concerns with the development of key sensors and the data processing system may cause additional delays in the satellite launch date. These planned and potential schedule delays could affect the continuity of weather data. Addressing these risks may result in increased costs for the overall program. In attempting to address these risks, the program office is working to develop a new cost and schedule baseline for the NPOESS program, which it hopes to complete by August 2003. 
When the NPOESS development contract was awarded, program office officials identified an anticipated schedule and funding stream for the program. The schedule for launching the satellites was driven by a requirement that the satellites be available to back up the final POES and DMSP satellites should anything go wrong during these satellites’ planned launches. In general, program officials anticipate that roughly 1 out of every 10 satellites will fail either during launch or during early operations after launch. Key program milestones included (1) launching NPP by May 2006 in order to allow time to learn from that risk reduction effort, (2) having the first NPOESS satellite available to back up the final POES satellite launch in March 2008, and (3) having the second NPOESS satellite available to back up the final DMSP satellite launch in October 2009. If the NPOESS satellites were not needed to back up the final predecessor satellites, their anticipated launch dates would have been April 2009 and June 2011, respectively. However, a DOD program official reported that between 2001 and 2002, the agency experienced delays in launching a DMSP satellite, causing delays in the expected launch date of another DMSP satellite. In late 2002, DOD shifted the expected launch date for the final DMSP satellite from 2009 to 2010. As a result, DOD reduced funding for NPOESS by about $65 million between fiscal years 2004 and 2007. According to NPOESS program officials, because NOAA is required to provide no more funding than DOD does, this change triggered a corresponding reduction in funding by NOAA for those years. Because of the reduced funding, program office officials were forced to make difficult decisions about what to focus on first. The program office decided to keep NPP as close to its original schedule as possible because of its importance to the eventual NPOESS development, and to shift some of the NPOESS deliverables to later years. 
This shift will affect the NPOESS deployment schedule. Table 4 compares the program office’s current estimates for key milestones, given current funding levels. As a result of the changes in funding between 2003 and 2007, project office officials estimate that the first NPOESS satellite will be available for launch 21 months after it is needed to back up the final POES satellite. This means that should the final POES launch fail in March 2008, there would be no backup satellite ready for launch. Unless the existing operational satellite is able to continue operations beyond its expected lifespan, there could be a gap in satellite coverage. Figure 12 depicts the schedule delay. We have reported on concerns about gaps in satellite coverage in the past. In the early 1990s, the development of the second generation of NOAA’s geostationary satellites experienced severe technical problems, cost overruns, and schedule delays, resulting in a 5-year schedule slip in the launch of the first satellite; this schedule slip left NOAA in danger of temporarily losing geostationary satellite data coverage—although no gap in coverage actually occurred. In 2000, we reported that geostationary satellite data coverage was again at risk because of a delay in a satellite launch due to a problem with the engine of its launch vehicle. At that time, existing satellites were able to maintain coverage until the new satellite was launched over a year later—although one satellite had exceeded its expected lifespan and was using several backup systems in cases where primary systems had failed. DOD experienced the loss of DMSP satellite coverage in the 1970s, which led to increased recognition of the importance of polar-orbiting satellites and of the impact of the loss of satellite data. In addition to the schedule issues facing the NPOESS program, concerns have arisen regarding key components. 
Although the program office reduced some of the risks inherent in developing new technologies by initiating the development of these sensors early, individual sensor development efforts have experienced cost increases, schedule delays, and performance shortfalls. The cost estimates for all four critical sensors (the ones that are to support the most critical NPOESS EDRs) have increased, due in part to including items that were not included in the original estimates, and in part to addressing technical issues. These increases range from approximately $60 million to $200 million. Further, while all the sensors are still expected to be completed within schedule, many have slipped to the end of their schedule buffers—meaning that no additional time is available should other problems arise. Details on the status and changes in cost and schedule of four critical sensors are provided in table 5. The timely development of three of these sensors (the visible/infrared imager radiometer suite, the cross-track infrared sounder, and the advanced technology microwave sounder) is especially critical, because these sensors are to be demonstrated on the NPP satellite, currently scheduled for launch in October 2006. Critical sensors are also falling short of achieving the required levels of performance. As part of a review in early 2003, the program officials determined that all four critical sensors were at medium to high risk of shortfalls in performance. Program officials recently reported that since the time of that review, the concerns that led to those risk designations have been addressed, which contributed to the schedule delays and cost increases noted above. We have not evaluated the closure of these risk items. However, program officials acknowledge that there are still performance issues on two critical sensors which they are working to address. 
Specifically, officials reported that they are working to fix a problem with radio frequency interference on the conical microwave imager/sounder. Also, the program office is working with NASA to fix problems with electrostatic discharge procedures and misalignment of key components on the advanced technology microwave sounder. Further, the program office will likely continue to identify additional performance issues as the sensors are developed and tested. Officials anticipate that there could be cost increases and schedule delays associated with addressing performance issues. Program officials reported that these and other sensor problems are not unexpected; previous experience with such problems was what motivated them to begin developing the sensors early. However, officials acknowledge that continued problems could affect the sensors’ delivery dates and potentially delay the NPP launch. Any delay in that launch date could affect the overall NPOESS program because the success of the program depends on learning lessons in data processing and system integration from the NPP satellite. The interface data processing system is a ground-based system that is to process the sensors’ data so that they are usable by the data processing centers and the broader community of environmental data users. The development of this system is critical for both NPP and NPOESS. When used with NPP, the data processing system is expected to produce 26 of the 55 EDRs that NPOESS will provide, processing approximately 93 percent of the planned volume of NPOESS data. Further, the central processing centers will be able to work with these EDRs to begin developing their own specialized products with NPP data. These activities will allow system users to work through any problems well in advance of when the NPOESS data are needed. 
We reported last year that the volumes of data that NPOESS will provide present immense challenges to the centers’ infrastructures and to their scientific capability to use these additional data effectively in weather products and models. We also noted that the centers need time to incorporate these new data into their products and models. Using the data processing system in conjunction with NPP will allow them to begin to do so. While the data processing segment is currently on schedule, program officials acknowledge the potential for future schedule delays. Specifically, an initial version of the data processing system is on track to be delivered at the end of July, and a later version is being planned. However, the data processing system faces potential risks that could affect the availability of NPP and in turn NPOESS. Specifically, program officials reported that there is a risk that the roughly 32 months allocated for developing the remaining software and delivering, installing, and verifying the system at two central processing centers will not be sufficient. A significant portion of the data processing system software involves converting scientific algorithms for operational use, but program officials noted that there is still uncertainty in how much time and effort it will take to complete this conversion. Any significant delays could cause the potential coverage gap between the launches of the final POES and first NPOESS satellites to grow even larger. Program officials are working to address the changes in funding levels and schedule, and to make plans for addressing specific sensor and data processing system risks. They acknowledge that delays in the program and efforts to address risks on key components could increase the overall cost of the program, which could result in the loss of some or all of the promised cost savings from converging the two separate satellite systems. However, estimates on these cost increases are still being determined. 
The program office is working to develop a new cost and schedule baseline based on the fiscal year 2004 President’s budget for the NPOESS program. Officials noted that this rebaselining effort will involve a major contract renegotiation. Program officials reported that they hope to complete the new program baseline by August 2003. In summary, today’s polar-orbiting weather satellite program is essential to a variety of civilian and military operations, ranging from weather warnings and forecasts to specialized weather products. NPOESS is expected to merge today’s two separate satellite systems into a single state-of-the-art weather and environmental monitoring satellite system to support all military and civilian users, as well as the public. This new satellite system is considered critical to the United States’ ability to maintain the continuity of data required for weather forecasting and global climate monitoring through the year 2018, and the first NPOESS satellites were expected to be ready to act as backups should the launches of the final satellites in the predecessor POES and DMSP programs fail. The NPOESS program office has made progress over the last several years in trying to reduce project risks by developing critical sensors early and by planning the NPOESS Preparatory Project to demonstrate key sensors and the data processing system well before the first NPOESS launch. However, the NPOESS program faces key programmatic and technical risks that may affect the successful and timely deployment of the system. Specifically, changing funding streams and revised schedules have delayed the expected launch date of the first NPOESS satellite, and concerns with the development of key sensors and the data processing system may cause additional delays in the satellite launch date. These factors could affect the continuity of weather data needed for weather forecasts and climate monitoring. This concludes my statement. 
I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. If you have any questions regarding this testimony, please contact David Powner at (202) 512-9286 or by E-mail at [email protected]. Individuals making key contributions to this testimony include Barbara Collier, John Dale, Ramnik Dhaliwal, Colleen Phillips, and Cynthia Scott. Our objectives were to provide an overview of our nation’s current polar- orbiting weather satellite program and the planned National Polar-orbiting Operational Environmental Satellite System (NPOESS) program and to identify key risks to the successful and timely deployment of NPOESS. To provide an overview of the nation’s current and future polar-orbiting weather satellite system programs, we relied on prior GAO reviews of the satellite programs of the National Oceanic and Atmospheric Administration (NOAA) and the Department of Defense (DOD). We reviewed documents from NOAA, DOD, and the National Aeronautics and Space Administration (NASA) that describe the purpose and origin of the polar satellite program and the status of the NPOESS program. We also interviewed Integrated Program Office and NASA officials to determine the program’s background, status, and plans. To identify key risks to the successful and timely deployment of NPOESS, we assessed the NPOESS acquisition status and program risk reduction efforts to understand how the program office plans to manage the acquisition and mitigate the risks to successful NPOESS implementation. We reviewed descriptions of the NPOESS sensors and interviewed officials at the Integrated Program Office, NASA, and DOD to determine the status of key sensors, program segments, and risk reduction activities. We also reviewed documents and interviewed program office officials on plans to address NPOESS challenges. 
NOAA, DOD, and NASA officials generally agreed with the facts as presented in this statement and provided some technical corrections, which we have incorporated. We performed our work at the NPOESS Integrated Program Office, NASA headquarters, and DOD offices, all located in the Washington, D.C., metropolitan area. Our work was performed between April and July 2003 in accordance with generally accepted government auditing standards. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Polar-orbiting environmental satellites provide data and imagery that are used by weather forecasters, climatologists, and the military to map and monitor changes in weather, climate, the ocean, and the environment. The current polar satellite program is a complex infrastructure that includes two satellite systems, supporting ground stations, and four central data processing centers. In the future, the National Polar-orbiting Operational Environmental Satellite System (NPOESS) is to merge the two current satellite systems into a single state-of-the-art environmental monitoring satellite system. This new $7 billion satellite system is considered critical to the United States' ability to maintain the continuity of data required for weather forecasting and global climate monitoring through the year 2018. In this testimony, GAO was asked, among other topics, to discuss risks to the success of the NPOESS deployment. The NPOESS program faces key programmatic and technical risks that may affect the successful and timely deployment of the system. The original plan for NPOESS was that it would be available to serve as a backup to the March 2008 launch of the final satellite in one of the two current satellite programs--the Polar-orbiting Operational Environmental Satellite (POES) system. However, changing funding streams and revised schedules have delayed the expected launch date of the first NPOESS satellite by 21 months. Thus, the first NPOESS satellite will not be ready in time to back up the final POES satellite, resulting in a potential gap in satellite coverage should that satellite fail. Specifically, if the final POES launch fails and if existing satellites are unable to continue operations beyond their expected lifespans, the continuity of weather data needed for weather forecasts and climate monitoring will be put at risk. 
Moreover, concerns with the development of key NPOESS components, including critical sensors and the data processing system, may cause additional delays in the satellite launch date. The program office is working to address the changes in funding levels and schedule, and to make plans for addressing specific risks. Further, it is working to develop a new cost and schedule baseline for the NPOESS program by August 2003.
Historically, the U.S. government has granted federal recognition through treaties, congressional acts, or administrative decisions within the executive branch—principally by the Department of the Interior. In a 1977 report to the Congress, the American Indian Policy Review Commission criticized the department’s tribal recognition policy. Specifically, the report stated that the department’s criteria to assess whether a group should be recognized as a tribe were not clear and concluded that a large part of the department’s policy depended on which official responded to the group’s inquiries. Nevertheless, until the 1960s, the limited number of requests for federal recognition gave the department the flexibility to assess a group’s status on a case-by-case basis without formal guidelines. However, in response to an increase in the number of requests for federal recognition, the department determined that it needed a uniform and objective approach to evaluate these requests. In 1978, it established a regulatory process for recognizing tribes whose relationship with the United States had either lapsed or never been established— although tribes may seek recognition through other avenues, such as legislation or Department of the Interior administrative decisions unconnected to the regulatory process. In addition, not all tribes are eligible for the regulatory process. For example, tribes whose political relationship with the United States has been terminated by Congress, or tribes whose members are officially part of an already recognized tribe, are ineligible to be recognized through the regulatory process and must seek recognition through other avenues. The regulations lay out seven criteria that a group must meet before it can become a federally recognized tribe. 
Essentially, these criteria require the petitioner to show that it is descended from a historic tribe and is a distinct community that has continuously existed as a political entity since a time when the federal government broadly acknowledged a political relationship with all Indian tribes. The following are the seven criteria for recognition under the regulatory process:
(a) The petitioner has been identified as an American Indian entity on a substantially continuous basis since 1900;
(b) A predominant portion of the petitioning group comprises a distinct community and has existed as a community from historical times until the present;
(c) The petitioner has maintained political influence or authority over its members as an autonomous entity from historical times until the present;
(d) The group must provide a copy of its present governing documents and membership criteria;
(e) The petitioner’s membership consists of individuals who descend from a historical Indian tribe or tribes, which combined and functioned as a single autonomous political entity;
(f) The membership of the petitioning group is composed principally of persons who are not members of any acknowledged North American Indian tribe; and
(g) Neither the petitioner nor its members are the subject of congressional legislation that has expressly terminated or forbidden recognition.
The burden of proof is on petitioners to provide documentation to satisfy the seven criteria. A technical staff within BIA, consisting of historians, anthropologists, and genealogists, reviews the submitted documentation and makes its recommendations on a proposed finding either for or against recognition. Staff recommendations are subject to review by the department’s Office of the Solicitor and senior BIA officials. 
The Assistant Secretary-Indian Affairs makes the final decision regarding the proposed finding, which is then published in the Federal Register, after which a period of public comment, document submission, and response is allowed. The technical staff reviews the comments, documentation, and responses and makes recommendations on a final determination that are subject to the same levels of review as a proposed finding. The process culminates in a final determination by the Assistant Secretary, who, depending on the nature of further evidence submitted, may or may not rule the same way as was ruled for the proposed finding. Petitioners and others may file requests for reconsideration with the Interior Board of Indian Appeals. While we found general agreement on the seven criteria that groups must meet to be granted recognition, there is great potential for disagreement when the question before BIA is whether the level of available evidence is high enough to demonstrate that a petitioner meets the criteria. The need for clearer guidance on criteria and evidence used in recognition decisions became evident in a number of recent cases when the previous Assistant Secretary approved either proposed or final decisions to recognize tribes when the technical staff had recommended against recognition. Most recently, the current Assistant Secretary has reversed a decision made by the previous Assistant Secretary. Much of the current controversy surrounding the regulatory process stems from these cases. At the heart of the uncertainties are different positions on what a petitioner must present to support two key aspects of the criteria. In particular, there are differences over (1) what is needed to demonstrate continuous existence and (2) what proportion of members of the petitioning group must demonstrate descent from a historic tribe. 
Concerns over what constitutes continuous existence have centered on the allowable gap in time during which there is limited or no evidence that a petitioner has met one or more of the criteria. In one case, the technical staff recommended that a petitioner not be recognized because there was a 70-year period for which there was no evidence that the petitioner satisfied the criteria for continuous existence as a distinct community exhibiting political authority. The technical staff concluded that a 70-year evidentiary gap was too long to support a finding of continuous existence. The staff based its conclusion on precedent established through previous decisions in which the absence of evidence for shorter periods of time had served as grounds for finding that petitioners did not meet these criteria. However, in this case, the previous Assistant Secretary determined that the gap was not critical and issued a proposed finding to recognize the petitioner, concluding that continuous existence could be presumed despite the lack of specific evidence for a 70-year period. The regulations state that lack of evidence is cause for denial but note that historical situations and inherent limitations in the availability of evidence must be considered. The regulations specifically decline to define a permissible interval during which a group could be presumed to have continued to exist if the group could demonstrate its existence before and after the interval. They further state that establishing a specific interval would be inappropriate because the significance of the interval must be considered in light of the character of the group, its history, and the nature of the available evidence. 
Finally, the regulations note that experience has shown that historical evidence of tribal existence is often not available in clear, unambiguous packets relating to particular points in time. Controversy and uncertainty also surround the proportion of a petitioner’s membership that must demonstrate that it meets the criterion of descent from a historic Indian tribe. In one case, the technical staff recommended that a petitioner not be recognized because the petitioner could only demonstrate that 48 percent of its members were descendants. The technical staff concluded that finding that the petitioner had satisfied this criterion would have been a departure from precedent established through previous decisions in which petitioners found to meet this criterion had demonstrated a higher percentage of membership descent from a historic tribe. However, in the proposed finding, the Assistant Secretary found that the petitioner satisfied the criterion. The Assistant Secretary told us that although this decision was not consistent with previous decisions by other Assistant Secretaries, he believed the decision to be fair because the standard used for previous decisions was unfairly high. Again, the regulations intentionally left open key aspects of the criteria to interpretation. In this case they avoid establishing a specific percentage of members required to demonstrate descent because the significance of the percentage varies with the history and nature of the petitioner and the particular reasons why a portion of the membership may not meet the requirements of the criterion. The regulations state only that a petitioner’s membership must consist of individuals who descend from historic tribes—no minimum percentage or quantifying term such as “most” or “some” is used. 
The only additional direction is found in 1997 guidelines, which note that petitioners need not demonstrate that 100 percent of their membership satisfies the criterion. In updating its regulations in 1994, the department grappled with both these issues and ultimately determined that key aspects of the criteria should be left open to interpretation to accommodate the unique characteristics of individual petitions. Leaving key aspects open to interpretation increases the risk that the criteria may be applied inconsistently to different petitioners. To mitigate this risk, BIA uses precedents established in past decisions to provide guidance in interpreting key aspects of the criteria. However, the regulations and accompanying guidelines are silent regarding the role of precedent in making decisions or the circumstances that may cause deviation from precedent. Thus, petitioners, third parties, and future decisionmakers, who may want to consider precedents in past decisions, have difficulty understanding the basis for some decisions. Ultimately, BIA and the Assistant Secretary will still have to make difficult decisions about petitions when it is unclear whether a precedent applies or even exists. Because these circumstances require judgment on the part of the decisionmaker, public confidence in BIA and the Assistant Secretary as key decisionmakers is extremely important. A lack of clear and transparent explanations for their decisions could cast doubt on the objectivity of the decisionmakers, making it difficult for parties on all sides to understand and accept decisions, regardless of the merit or direction of the decisions reached. Accordingly, in our November 2001 report, we recommended that the Secretary of the Interior direct BIA to provide a clearer understanding of the basis used in recognition decisions by developing and using transparent guidelines that help interpret key aspects of the criteria and supporting evidence used in federal recognition decisions. 
In commenting on a draft of this report, the department generally agreed with this recommendation. To implement the recommendation, the department pledged to formulate a strategic action plan by May 2002. To date, this plan is still in draft form. Officials told us that they anticipate completing the plan soon. In conclusion, BIA’s recognition process was never intended to be the only way groups could receive federal recognition. Nevertheless, it was intended to provide the Department of the Interior with an objective and uniform approach by establishing specific criteria and a process for evaluating groups seeking federal recognition. It is also the only avenue to federal recognition that has established criteria and a public process for determining whether groups meet the criteria. However, weaknesses in the process have created uncertainty about the basis for recognition decisions, calling into question the objectivity of the process. Without improvements that focus on fixing these and other problems on which we have reported, parties involved in tribal recognition may increasingly look outside of the regulatory process to the Congress or courts to resolve recognition issues, preventing the process from achieving its potential to provide a more uniform approach to tribal recognition. The result could be that the resolution of tribal recognition cases will have less to do with the attributes and qualities of a group as an independent political entity deserving a government-to-government relationship with the United States, and more to do with the resources that petitioners and third parties can marshal to develop successful political and legal strategies. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time.
Federal recognition of an Indian tribe can dramatically affect economic and social conditions for the tribe and the surrounding communities because these tribes are eligible to participate in federal assistance programs. There are currently 562 recognized tribes with a total membership of 1.7 million, and several hundred groups are currently seeking recognition. In fiscal year 2002, Congress appropriated $5 billion for programs and funding, almost exclusively for recognized tribes. Recognition also establishes a formal government-to-government relationship between the United States and a tribe. The Indian Gaming Regulatory Act of 1988, which regulates Indian gaming operations, permits a tribe to operate casinos on land held in trust if the state in which the land lies allows casino-like gaming and if the tribe has entered into a compact with the state regulating its gaming businesses. In 1999, federally recognized tribes reported $10 billion in gaming revenue, surpassing the amounts that the Nevada casinos collected that year. Owing to the rights and benefits that accrue with recognition and the controversy surrounding Indian gaming, the Bureau of Indian Affairs' (BIA) regulatory process has been subject to intense scrutiny by groups seeking recognition and other interested parties--including already recognized tribes and affected state and local governments. BIA's regulatory process for recognizing tribes was established in 1978 and requires that groups that are petitioning for recognition submit evidence that they meet certain criteria--basically that the petitioner has continuously existed as an Indian tribe since historic times. Critics of the process claim that it produces inconsistent decisions and takes too long. The basis for BIA's tribal recognition decisions is not always clear. Although there are set criteria that petitioning tribes must meet to be granted recognition, there is no guidance that clearly explains how to interpret key aspects of the criteria. 
The lack of guidance over what level of evidence is sufficient to demonstrate that a tribe has continued to exist over time creates controversy and uncertainty for all parties about the basis for decisions reached.
Reset encompasses activities related to the repair, upgrade, or replacement of equipment used in contingency operations. Aviation and ground equipment are managed separately within the Marine Corps, and different definitions of reset are used for each. Marine Corps officials defined aviation equipment reset as an aircraft material condition and readiness sustainment effort that is required due to prolonged combat operations. Included are actions to maintain, preserve, and enhance the capability of aircraft. Ground equipment reset is defined by the Marine Corps as actions taken to restore units to a desired level of combat capability commensurate with the unit’s future mission. It encompasses maintenance and supply activities that restore and enhance equipment that was destroyed, damaged, stressed, rendered obsolete, or worn out beyond economic repair due to combat operations by repairing, rebuilding, or procuring replacement equipment. Also included as part of ground equipment reset is recapitalization (rebuild or upgrade) that enhances existing equipment through the insertion of new technology or restores selected equipment to near-original condition. The Marine Corps’ equipment reset budget totals more than $8 billion for fiscal years 2009 through 2012. Maintenance-related activities included as part of reset are funded from operations and maintenance appropriations, while most recapitalization and all acquisitions of new equipment as part of reset are funded from procurement appropriations. Reset funds are requested and budgeted separately for aviation and ground equipment.

• Aviation equipment: The Marine Corps’ aviation equipment reset budget was approximately $66.7 million in fiscal year 2009 and approximately $57.8 million in fiscal year 2010. The Marine Corps requested approximately $56.1 million for fiscal year 2011 and has requested $45.3 million for fiscal year 2012 to reset aviation equipment. 
As discussed later in this report, reset funding for aviation equipment covers only operations and maintenance appropriations and excludes procurement appropriations.

• Ground equipment: The Marine Corps’ ground equipment reset budget was approximately $2.2 billion in fiscal year 2009 and approximately $1.3 billion in fiscal year 2010. The Marine Corps requested approximately $2.6 billion for fiscal year 2011 and has requested $1.8 billion for fiscal year 2012 to reset ground equipment. This funding includes funds requested as part of operations and maintenance appropriations and procurement appropriations. The fiscal year 2011 request included a $1.1 billion increase in procurement funding over fiscal year 2010, which the Marine Corps attributed to increased equipment combat losses and to the replacement of equipment that is beyond economic repair. Appendix II provides further detail on reset funding for aviation and ground equipment.

Our prior work has shown that sound strategic management planning can enable organizations to identify and achieve long-range goals and objectives. We have identified six elements that should be incorporated into strategic plans to establish a comprehensive, results-oriented framework—an approach whereby program effectiveness is measured in terms of outcomes or impact. These elements follow:

(1) Mission statement: A statement that concisely summarizes what the organization does, presenting the main purposes for all its major functions and operations.

(2) Long-term goals: A specific set of policy, programmatic, and management goals for the programs and operations covered in the strategic plan. The long-term goals should correspond to the purposes set forth in the mission statement and develop with greater specificity how an organization will carry out its mission.

(3) Strategies to achieve the goals: A description of how the goals contained in the strategic plan and performance plan are to be achieved, including the operational processes, skills and technology, and other resources required to meet these goals.

(4) External factors that could affect goals: Key factors external to the organization and beyond its control that could significantly affect the achievement of the long-term goals contained in the strategic plan. These external factors can include economic, demographic, social, technological, or environmental factors, as well as conditions or events that would affect the organization’s ability to achieve its strategic goals.

(5) Use of metrics to gauge progress: A set of metrics that will be applied to gauge progress toward attainment of the plan’s long-term goals.

(6) Evaluations of the plan to monitor goals and objectives: Assessments, through objective measurement and systematic analysis, of the manner and extent to which programs associated with the strategic plan achieve their intended goals.

Over the past several years we have reported on equipment reset issues. In 2007, for example, we reported that the Marine Corps could not be certain that its reset strategies would sustain equipment availability for deployed units as well as units preparing for deployment, while meeting ongoing operational requirements. We have also made recommendations aimed at improving DOD’s monthly cost reports for reset and defining the types of costs that should be included in the base defense budget rather than funded from supplemental appropriations for contingency operations. Specifically, we recommended DOD amend its Financial Management Regulation to require that monthly Supplemental and Cost of War Execution Reports identify expenditures within the procurement accounts for equipment reset at more detailed subcost category levels, similar to reporting of obligations and expenditures in the operation and maintenance accounts. 
DOD initially disagreed with this recommendation but later revised its Financial Management Regulation, expanding the definition of acceptable maintenance and procurement costs and directing the military services to begin including “longer war on terror” costs in their overseas contingency operations funding requests. We subsequently recommended that DOD issue guidance defining what constitutes the “longer war on terror,” to identify what costs are related to that longer war and to build these costs into the base defense budget. While the department concurred with this recommendation and stated that it has plans to revise its Financial Management Regulation accordingly, it has not yet done so. The Office of Management and Budget (OMB) has issued budget formulation guidance for DOD that addresses overseas contingency operations, including reset funding. Guidance issued in February 2009 provided new criteria for DOD to use when preparing its budget request to assess whether funding, including funding for reset, should be requested as part of the base budget or as part of the budget for overseas contingency operations. The criteria identified geographic areas where overseas contingency operations funding could be used; provided a list of specific categories of spending that should be included in the overseas contingency budget, such as major equipment repairs, ground equipment replacement, equipment modifications, and aircraft replacement; and identified certain spending that should be excluded from the overseas contingency operations budget (i.e., should be included in the base budget) such as funding to support family services at home stations. For example, funding is excluded for the replacement of equipment losses already programmed for replacement in the Future Years Defense Plan. In September 2010, OMB issued updated criteria to, among other things, clarify language and eliminate areas of confusion. 
DOD has also issued its own budget formulation guidance for overseas contingency operations. In December 2009, DOD issued Resource Management Decision 700 to regulate the funding of the military services’ readiness accounts and to require that significant resources from the overseas contingency operations funding be moved into the base defense budget. Specifically, the services’ 2012 Program Objective Memorandum submissions for overseas contingency operations funding are restricted to resource levels appropriate for planned and projected troop levels. To facilitate the implementation of this guidance within the department, Resource Management Decision 700 outlines several actions for organizations to take. For example, it directed the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, in coordination with the Director of Cost Assessment and Program Evaluation, the military services, the DOD Comptroller, and the Joint Staff, to conduct periodic reviews of the services’ in-theater maintenance activities and reset maintenance actions that include an assessment of the relationship between maintenance-funded base programs and contingency operations. This assessment was provided to the Deputy Secretary of Defense in July 2010. The Director of Cost Assessment and Program Evaluation tracks estimated total reset costs across the department based on data provided by the services. The total reset costs are the amount of funding needed to reset all equipment used in contingency operations if the operations were to cease. Specifically, the total reset costs equal the sum of the annual unbudgeted reset liability and the annual budgeted reset. The annual unbudgeted reset liability is the cost to reset equipment that is eligible for reset but, based on operational decisions, stays in theater and is not reset during the budget year. The annual budgeted reset is the cost to reset equipment planned to return from operations, for which funds are budgeted. 
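The cost identity just described (total reset cost is the annual unbudgeted reset liability plus the annual budgeted reset) can be expressed as a one-line calculation. This is only an illustrative sketch; the function name and the dollar figures are hypothetical, not actual DOD data.

```python
def total_reset_cost(unbudgeted_liability, budgeted_reset):
    """Departmentwide total reset cost as described in the text: the sum of
    the annual unbudgeted reset liability (cost of reset-eligible equipment
    kept in theater this budget year) and the annual budgeted reset (cost of
    returning equipment for which reset funds are budgeted).
    Names and figures are illustrative assumptions, not DOD data."""
    return unbudgeted_liability + budgeted_reset

# Hypothetical figures in millions of dollars:
print(total_reset_cost(1200, 900))  # 2100
```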
As part of its ground equipment reset strategy for Iraq, the Marine Corps developed the Reset Cost Model to generate cost estimates for the service’s supplemental budget requests. Additionally, the Reset Cost Model allows the Marine Corps to estimate reset costs for ground equipment, including budgeted and unbudgeted reset costs. Since the Reset Cost Model is focused on ground equipment employed in the U.S. Central Command area of responsibility, the Marine Corps continues to use the Reset Cost Model to develop overseas contingency operations budget requests for ground equipment used in Afghanistan. The cost estimates generated by the Reset Cost Model are based on the four possible reset actions:

• First, equipment returning from theater is inspected to determine if depot-level repairs are required. Depot maintenance actions are conducted if the estimated cost of repair for the equipment is 65 percent or less of the latest acquisition cost.

• Second, ground equipment used in operations is evaluated at various locations throughout the logistics chain to determine if the equipment requires field-level maintenance. These maintenance actions are conducted by operating forces.

• Third, upon return to the continental United States, equipment identified as obsolete or uneconomical to repair is replaced through procurement as its reset action.

• Fourth, if equipment acquired for combat operations does not have a long-term requirement within the Marine Corps, no reset maintenance actions are taken unless there is an immediate requirement in another campaign or theater of operations.

Estimating aviation equipment reset costs follows a separate process. For aviation equipment reset, the Marine Corps has a process for requirements determination, budgeting, and execution, all of which are included in the annual budget process. 
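As a rough sketch, the four reset actions above can be modeled as a decision function. The function name, the flag arguments, and the ordering of the checks are our own simplification of the text, not the actual logic of the Reset Cost Model.

```python
def reset_action(repair_cost, latest_acquisition_cost,
                 economical_to_repair=True,
                 long_term_requirement=True,
                 needed_in_other_theater=False):
    """Simplified sketch of the four ground equipment reset actions
    described in the text. Argument names and check ordering are
    illustrative assumptions, not Reset Cost Model internals."""
    # Fourth action: no long-term Marine Corps requirement and no
    # immediate need in another campaign or theater -> no reset action.
    if not long_term_requirement and not needed_in_other_theater:
        return "no reset action"
    # Third action: obsolete or uneconomical to repair -> replacement
    # through procurement.
    if not economical_to_repair:
        return "replace via procurement"
    # First action: depot maintenance if the estimated repair cost is
    # 65 percent or less of the latest acquisition cost.
    if repair_cost <= 0.65 * latest_acquisition_cost:
        return "depot-level maintenance"
    # Second action: otherwise, field-level maintenance by operating forces.
    return "field-level maintenance"

print(reset_action(60.0, 100.0))  # depot-level maintenance
print(reset_action(70.0, 100.0))  # field-level maintenance
```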
According to Navy and Marine Corps officials, a clearly defined process is used to determine reset costs for aviation equipment that includes requirements generated from the fleet while working closely with the Chief of Naval Operations Fleet Readiness Division and each of the program offices to determine current and future reset requirements. Overseas contingency costs—including reset costs—are generated using issue sheets that record information on each item such as the categorization of funding, the amount of funding requested for a specific item, the number of items requested, and the cost per unit. Once the issue sheets are generated, Headquarters, Marine Corps, and the Commander of Naval Air Forces prioritize the issue sheets and provide a finalized list of the funding priorities according to current needs for which future funding is allocated. The Marine Corps has developed an annual aviation plan and an aviation reset program policy that together constitute its reset strategy for aviation equipment used in Afghanistan. Although separate documents, the annual aviation plan and aviation reset program policy are linked through the aviation plan’s reference to the aviation reset policy. Our evaluation of this reset strategy shows that it incorporates the six elements of a comprehensive, results-oriented strategic planning framework. For example, the reset strategy establishes goals and associated time frames for completing detailed reviews of aircraft and aircraft components and transitioning to future aircraft. It also provides strategies for accomplishing key tasks such as scheduling inspections, as well as performance measures and targets. (See table 1.) The Marine Corps is taking steps to develop a strategy addressing the reset of ground equipment used in Afghanistan; however, the timeline for completing and issuing this strategy is uncertain. 
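The issue-sheet fields described above (funding category, amount of funding requested, number of items, and cost per unit) can be pictured as a simple record. The class and field names here are our own shorthand for illustration, not official Navy or Marine Corps terminology, and the example values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class IssueSheet:
    """Illustrative sketch of an issue-sheet record; all names are
    our own assumptions, not official form fields."""
    item: str
    funding_category: str   # e.g., operations and maintenance vs. procurement
    units_requested: int
    cost_per_unit: float

    def funding_requested(self):
        # Amount of funding requested for the item: units times cost per unit.
        return self.units_requested * self.cost_per_unit

# Hypothetical example item:
sheet = IssueSheet("rotor assembly", "procurement", 4, 250_000.0)
print(sheet.funding_requested())  # 1000000.0
```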
Although Marine Corps officials agreed that a reset strategy for ground equipment will be needed, they stated that they do not plan to issue a strategy until there is a better understanding of the dates for initial and final drawdown of forces from Afghanistan. While more specific and certain drawdown information is desirable and will be needed to firm up reset plans, the President stated that troops would begin to withdraw in July 2011, working toward a complete transfer of all security operations to Afghan National Security Forces by 2014. The current dates announced by the President are the best available for the purposes of contingency planning and provide a reasonable basis for the Marine Corps to develop a timeline for completing its reset strategy. In the meantime, Marine Corps officials are taking the following steps toward developing a reset strategy:

• First, the Marine Corps completed a force structure review in early 2011 that is aimed at ensuring the service is properly configured. The force structure review included a determination of equipment reset requirements to support the post-Afghanistan Marine Corps force structure.

• Second, the Marine Corps is currently developing an implementation plan based on the results of the force structure review. A goal of the force structure implementation plan is to ensure that the Marine Corps achieves a restructured force by the time the reset of equipment used in Afghanistan is complete. The focus of this implementation plan is the establishment of the mission-essential tasks and the development of refined tables of equipment in support of those tasks. These refined tables of equipment will determine what equipment the Marine Corps will reset and how the equipment will be reintegrated into nondeployed Marine Corps forces. The Marine Corps plans to issue this force structure implementation plan in summer 2011. 
• Third, following issuance of the force structure implementation plan, the Marine Corps plans to develop and issue formal reset planning guidance that informs operating force units and the Marine Corps Logistics Command what equipment they will receive and be responsible for resetting. Specifically, Marine Corps officials stated that the planning guidance is intended to allow Marine Forces Commands, Marine Expeditionary Forces, and Marine Corps Logistics Command to assess their reset maintenance capacity requirements and identify additional support requirements beyond the maintenance centers’ capacity. The officials indicated that the planning guidance would serve as a precursor to a comprehensive reset strategy.

Although the Marine Corps has laid out several steps toward developing its ground equipment reset strategy, it has not specified timelines for completing and issuing either the formal reset planning guidance or its reset strategy, nor has it indicated how its reset strategy for Afghanistan will take into consideration the current withdrawal dates announced by the President. The reset strategy is necessary to help ensure that life-cycle management governance is provided to key organizations responsible for executing reset, such as the Marine Corps Logistics Command. Until the reset strategy is issued and firm plans for reset are established, it may be difficult for the Marine Corps Logistics Command to effectively manage the rotation of equipment to units to sustain combat operations or to meet the equipment needs of a newly defined post-Afghanistan Marine Corps force structure. In the absence of a reset strategy, Marine Corps Logistics Command officials told us that the command cannot issue its supporting order, which enables its maintenance centers to effectively begin planning for and phasing in a new maintenance workload. It is also uncertain to what extent the Marine Corps plans to align its ground equipment reset strategy with its ground equipment modernization plan. 
The ground equipment modernization plan is used annually to develop future warfighting capabilities to meet national security objectives. The plan guides the Marine Corps in the identification, development, and integration of warfighting capabilities and associated support and infrastructure capabilities. Marine Corps officials have stated that they plan to establish a link between the reset strategy for Afghanistan and the ground modernization plan. As a basis for evaluating current reset planning for ground equipment used in Afghanistan, we also reviewed both the aviation reset strategy for Afghanistan and the ground equipment reset strategy that the Marine Corps developed for Iraq. We found that the aviation reset strategy was directly linked to the aviation equipment modernization plan. For example, the aviation equipment modernization plan outlines the transition from the UH-1N Marine Light Attack Helicopter to the UH-1Y, which should be fully phased in by fiscal year 2015. As part of the reset strategy for the UH-1Y, reset requirements for the maintenance centers associated with this transition have been identified. In contrast, we found that the Iraq reset strategy for ground equipment contained no direct reference to the service’s equipment modernization plans. Marine Corps officials stated that it was unnecessary to include a direct reference to the equipment modernization plan in the Iraq reset strategy because the two are indirectly linked through the roles and responsibilities of the Deputy Commandant, Combat Development and Integration. Specifically, the officials noted that the Iraq reset strategy contains a section outlining these roles and responsibilities and that these same roles and responsibilities are included in the Expeditionary Force Development System instruction. However, this indirect linkage does not provide a clear relationship between reset and modernization. 
A clear alignment of the ground equipment reset strategy for Afghanistan and modernization plan would help to ensure that the identification, development, and integration of warfighting capabilities also factor in equipment reset strategies so that equipment planned for modernization is not unnecessarily repaired. Without a Marine Corps reset strategy for ground equipment used in operations in Afghanistan that includes clear linkages to the modernization plan, the Marine Corps may not be able to effectively plan and execute ground equipment reset in the most efficient manner. The total costs of reset estimated by the Marine Corps may not be accurate or consistent because of differing definitions of reset that have been used for aviation and ground equipment. These differing definitions exist because DOD has not established a single standard definition for use in DOD’s budget process. Specifically, the Marine Corps does not include aviation equipment procurement costs when estimating total reset costs. According to Marine Corps officials, procurement costs are excluded because such costs are not consistent with its definition of aviation equipment reset. Additionally, Marine Corps officials stated that the definition of reset for aviation equipment is to maintain, preserve, and enhance the capability of aircraft through maintenance activities. This definition, according to Marine Corps officials, does not include procurement funding for the replacement of aviation equipment losses in theater. In contrast, the Marine Corps’ definition of reset for ground equipment includes procurement costs to replace theater losses. Reset for all types of equipment as defined by other services (e.g., the Army) also includes procurement costs. 
Although the Marine Corps excludes procurement costs when estimating aviation equipment reset costs, we found that the Director of Cost Assessment and Program Evaluation had obtained a procurement cost estimate for Marine Corps aviation equipment as part of its efforts to track reset costs for the department. DOD’s Resource Management Decision 700 tasks the Director of Cost Assessment and Program Evaluation with providing annual departmentwide reset updates that (1) outline current-year reset funding needs, (2) assess the multiyear reset liability based on plans for equipment redeployment, and (3) detail deferred reset funding actions. Based on this tasking, the Marine Corps provided total reset costs that included procurement costs for equipment replacement, as well as maintenance costs, for both ground and aviation equipment. The update showed that total reset costs for Marine Corps aviation equipment were approximately $1.8 billion for fiscal years 2010 through 2012, including $1.4 billion for procurement costs. These reported costs were included in the 2010 DOD Reset Planning Projections annual update prepared by the Director of Cost Assessment and Program Evaluation. We were not able to determine the reasons for this apparent inconsistency between what the Marine Corps considers to be valid aviation equipment reset costs (i.e., excluding procurement costs) and what was reported in the 2010 DOD Reset Planning Projections annual update (i.e., including procurement costs). Navy and Marine Corps officials stated that they were unable to identify any official from the Navy or Marine Corps as the source for providing or producing this total reset cost data for Marine Corps aviation equipment. Therefore, we could not assess the basis for the reported aviation equipment reset costs to determine their accuracy. 
DOD’s Resource Management Decision 700 also directed the DOD Comptroller to publish a DOD definition of reset for use in the DOD overseas contingency operations budgeting process. DOD’s definition of reset was to be submitted by the Comptroller to the Deputy Secretary of Defense for approval by January 15, 2010, well ahead of the Marine Corps’ initial submission of its total reset liability, which was due by June 1, 2010. However, a single standard definition of reset for budget purposes has not yet been issued to the services. We also found that the Marine Corps’ definition of aviation reset differs from the definition of reset provided for use in congressional testimony in a January 2007 memorandum from the Deputy Under Secretary of Defense for Logistics and Materiel Readiness to the under secretaries of the military departments. That memorandum states that reset encompasses maintenance and supply activities that restore and enhance combat capability to units and prepositioned equipment that was destroyed, damaged, stressed, or worn-out beyond economic repair due to combat operations by repairing, rebuilding, or procuring replacement equipment. According to the memorandum, the Office of the Secretary of Defense and the services agreed to this definition of reset; the memorandum emphasizes that it is important that all DOD military departments are consistent in the definition of the terms during congressional testimony. Without a single standard definition for reset for the services to use, the Marine Corps may continue to report its total reset costs for aviation equipment inconsistently. Furthermore, data integrity issues will make it challenging to identify program funding trends within the Marine Corps and among the services for equipment reset. 
Without accurate reporting of total reset costs for aviation equipment, the level of reset funding the Marine Corps needs to sustain future operations may not be properly communicated to Congress beyond what has been requested for overseas contingency operations. Furthermore, the Office of the Under Secretary of Defense Comptroller, Director of Cost Assessment and Program Evaluation, and OMB may not have the most reliable aviation equipment reset data for their review and oversight of the Marine Corps’ overseas contingency operations budget requests. With the increased demands current operations have placed on Marine Corps equipment, and at a time when the federal government is facing long-term fiscal challenges, it is important for the Marine Corps to have a reset strategy in place for both ground and aviation equipment used in operations in Afghanistan as well as a standard DOD definition for reset. Reset strategies provide a framework that allows Marine Corps officials to adequately plan, budget, and execute the reset of equipment used in operations in Afghanistan. The reset strategy, and the timing thereof, could be modified if U.S. drawdown plans subsequently change or should the Marine Corps receive more specific and certain drawdown information. However, without specified timelines for completing and issuing either formal reset planning guidance or its reset strategy that also take into consideration the current dates announced by the President for withdrawal—which are the best available for the purposes of contingency planning—the Marine Corps may be unable to effectively manage the rotation of equipment to units to sustain combat operations, or meet the equipment needs of a newly defined post-Afghanistan Marine Corps force structure. 
Additionally, without a Marine Corps reset strategy for ground equipment used in operations in Afghanistan that includes clear linkages to the modernization plan, the Marine Corps may not be able to effectively plan and execute ground equipment reset in the most efficient manner. Furthermore, the total reset costs provide information that allows the Marine Corps to more efficiently plan and make informed budget decisions and allows the Office of the Under Secretary of Defense (Comptroller) and OMB to exercise oversight. Until DOD establishes a single standard definition of reset for the services to use, DOD and Congress may have limited visibility over the total reset costs for the services. Accurate reporting of total reset costs for aviation equipment would provide Congress with the level of funding the Marine Corps needs to reset all equipment used in operations in Afghanistan at the conclusion of operations. Furthermore, the Office of the Under Secretary of Defense for the Comptroller and for Cost Assessment and Program Evaluation and OMB may lack the visibility needed over the aviation reset funds in their review and oversight of the Marine Corps overseas contingency operations budget requests. To improve the Marine Corps’ ability to plan, budget for, and execute the reset of ground equipment used in Afghanistan, we recommend that the Secretary of Defense direct the Commandant of the Marine Corps to take the following two actions:

• Establish a timeline for completing and issuing formal reset planning guidance and a ground equipment reset strategy for equipment used in Afghanistan that allows operating force units and the Marine Corps Logistics Command to effectively manage equipment reset.

• Provide linkages between the ground equipment reset strategy for equipment used in Afghanistan and equipment modernization plans, including the Expeditionary Force Development System and the annual Program Objective Memorandum Marine Air-Ground Task Force Requirements List. 
To improve oversight and ensure consistency in the reporting of total reset costs, we recommend that the Secretary of Defense direct the Office of the Under Secretary of Defense (Comptroller), in coordination with the Office of the Under Secretary of Defense for Cost Assessment and Program Evaluation, the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, the services, and the Joint Staff, to act on the tasking in Resource Management Decision 700 to develop and publish a DOD definition of reset for use in the DOD overseas contingency operations budgeting process. In written comments on a draft of this report, DOD concurred with one of our recommendations, partially concurred with the other two, and provided information on the steps it is taking or plans to take to address them. DOD partially concurred with our recommendation that the Secretary of Defense direct the Commandant of the Marine Corps to establish a timeline for completing and issuing formal reset planning guidance and a ground equipment reset strategy for equipment used in Afghanistan that allows operating force units and the Marine Corps Logistics Command to effectively manage equipment reset. DOD commented that guidance for resetting the force is being developed in its Operation Enduring Freedom Reset Plan, the Operation Enduring Freedom Reset Playbook, and the Marine Air Ground Task Force Integration Plan. However, during the course of our review, the development of a strategy for ground equipment in Afghanistan was in the beginning stages, and the Marine Corps did not discuss or provide details regarding the three documents now cited as its guidance for resetting the force. DOD added that the Marine Corps has established a timeline/estimated date of April 30, 2012, for completing and issuing formal reset planning guidance and a ground equipment reset strategy for equipment used in Afghanistan. 
While the Marine Corps has provided DOD with a date for completing and issuing this guidance, the Marine Corps does not appear to have established a sequenced timeline, as we recommended. Specifically, DOD's response indicates that the formal reset planning guidance and the ground equipment reset strategy would be issued at the same time. Marine Corps officials stated that the formal reset planning guidance is intended to serve as a precursor to a comprehensive reset strategy that will allow Marine Forces Commands, Marine Expeditionary Forces, and Marine Corps Logistics Command to assess their reset maintenance capacity requirements and identify additional support requirements beyond the maintenance centers' capacity. We believe this guidance will not be useful unless it is issued sufficiently ahead of time to guide the development of the ground equipment reset strategy. Consequently, we disagree with DOD's statement that the Marine Corps does not need further direction to establish a timeline for completing and issuing formal reset planning guidance and a ground equipment reset strategy for equipment used in Afghanistan. DOD partially concurred with our recommendation that the Secretary of Defense direct the Commandant of the Marine Corps to provide linkages between the ground equipment reset strategy for equipment used in Afghanistan and equipment modernization plans, including the Expeditionary Force Development System and the annual Program Objective Memorandum Marine Air-Ground Task Force Requirements List. DOD commented that it recognizes the importance of providing a linkage between ground equipment reset strategies and equipment modernization plans. Specifically, DOD commented that the Marine Corps plans to outline these linkages in their Operation Enduring Freedom Reset Plan, the Operation Enduring Freedom Reset Playbook, and the Marine Air Ground Task Force Integration Plan, which are currently being developed. 
While, as previously mentioned, the Marine Corps did not provide specific details regarding the three documents cited above during the course of our review, we believe that including this linkage in these documents would be responsive to our recommendation and will allow the Marine Corps to more effectively and efficiently plan and execute ground equipment reset. DOD concurred with our recommendation that the Secretary of Defense direct the Office of the Under Secretary of Defense (Comptroller), in coordination with the Office of the Under Secretary of Defense for Cost Assessment and Program Evaluation, the Office of the Under Secretary of Defense for Acquisitions, Technology and Logistics, the services, and the Joint Staff to act on the tasking in Resource Management Decision 700 to develop and publish a DOD definition of reset for use in the DOD overseas contingency operations budgeting process. DOD commented that it is developing a definition of reset for use in the overseas contingency operations budgeting process that will be incorporated into the DOD Financial Management Regulation. However, during the course of our review DOD had not yet taken action to develop a reset definition, which was to have been submitted by the Comptroller to the Deputy Secretary of Defense for approval by January 15, 2010. In addition, DOD commented that in the interim the department is using specific criteria provided by OMB guidance for determining the reset requirements that are overseas contingency operations or base. While OMB has provided guidance for overseas contingency operations budget requests, this guidance does not provide specific direction concerning what constitutes reset. Consequently, DOD recognizes the need for a common definition of equipment reset for budget purposes, but has not met its goal of establishing one. 
Resource Management Decision 700 established a January 2010 date for approving a common reset definition, and that definition could have been used in developing the department’s fiscal year 2012 budget submission. DOD is now developing its fiscal year 2013 budget submission without the benefit of a common definition. Therefore, we disagree with DOD’s statement that additional and separate guidance from the Secretary of Defense is not necessary, and believe that additional direction is needed to emphasize that the Under Secretary of Defense (Comptroller), in coordination with the Office of the Under Secretary of Defense for Cost Assessment and Program Evaluation, the Office of the Under Secretary of Defense for Acquisitions, Technology and Logistics, the services, and the Joint Staff should expedite the development and publication of a DOD definition of reset for use in the DOD overseas contingency operations budgeting process. The department’s comments are reprinted in appendix III. We are sending copies of this report to appropriate congressional committees, the Secretary of Defense, and appropriate DOD organizations. In addition, this report will be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8365 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To determine the extent to which the Marine Corps has a strategy in place to manage the reset of ground and aviation equipment used in operations in Afghanistan, we obtained and reviewed the Marine Corps reset strategies for ground and aviation equipment used in operations in Afghanistan. 
Where strategies had not yet been developed, we collected information regarding ongoing reset planning efforts from Marine Corps officials and discussed with them the process used and the factors considered when developing a reset strategy. As a basis for assessing current reset planning efforts for Afghanistan, we also reviewed the reset strategy that the Marine Corps prepared for equipment used in Iraq. We collected written responses and supporting documentation to our inquiries and data requests from Marine Corps officials related to ground and aviation equipment reset strategies. We also discussed with Marine Corps officials the process used and the factors considered when developing these reset strategies. Additionally, we discussed the reset strategies with Marine Corps officials to determine the roles and responsibilities of the maintenance and fleet readiness centers in preparing for equipment requiring reset and determining the appropriate reset strategy. To determine the extent to which the Marine Corps has developed effective reset strategies for the reset of equipment used in operations in Afghanistan that address the key elements of a comprehensive, results-oriented strategic planning framework, we reviewed and analyzed the ground and aviation equipment reset strategies and supporting guidance documents. Specifically, we analyzed the reset strategies and supporting guidance documents to determine if they included the six key elements of a strategic planning framework. In performing our analysis, we reviewed the strategies to determine if they included, partially included, or did not include each of the six key elements. Through our assessment, we identified the guidance documents that, together with the aviation equipment reset strategy, comprise the Marine Corps' strategic plan for reset. 
In addition, to understand the extent to which the Marine Corps aligns its modernization plans with its reset strategies, we interviewed Marine Corps officials to discuss the plans used for modernization and discussed the process for how these plans are incorporated with the strategies for equipment reset. To assess the Marine Corps’ estimates of total reset costs, we obtained and reviewed the Department of Defense’s (DOD) Resource Management Decision 700—separate from the budget formulation guidance—tasking the services to provide annual reset cost updates, and the Marine Corps processes for determining total reset costs for ground and aviation equipment. We collected written responses to our inquiries and data requests from Marine Corps officials about the system they use to determine total reset costs for ground and aviation equipment used in operations in Afghanistan. In addition, we interviewed Marine Corps officials to obtain any information relevant to the system they use to determine total reset costs for equipment used in operations in Afghanistan. To better understand the Marine Corps reset funding needs for ground and aviation equipment, we requested reset budget data for fiscal year 2009 through fiscal year 2012. We reviewed the budget data obtained and met with Marine Corps officials to discuss the data to ensure that we had a correct understanding of the different budget categories, such as procurement and operations and maintenance. We then analyzed the Marine Corps’ reset budgets from fiscal year 2009 through fiscal year 2010 for the reset of ground and aviation equipment to identify any trends in the operations and maintenance and procurement funding categories. We discussed the results of our analysis with Marine Corps officials to determine the rationale for any trends in the funding. 
We interviewed Office of the Secretary of Defense, Department of the Navy, and Marine Corps officials to obtain information and any guidance documents pertaining to the process used for budget development and budget review and approval. To gain a better understanding of how the Marine Corps is using procurement funding, we reviewed the Marine Corps procurement reset funding appropriated for ground equipment in fiscal year 2010 for the 10 items that had the highest amount of funding. To determine the reliability of the reset budget data for ground equipment provided by Marine Corps officials from the Global War on Terror Resources Information Database, we assessed the reliability of the budget data by obtaining and reviewing agency officials' responses to our data reliability questionnaires. Based on our review of the Office of the Secretary of Defense and Marine Corps officials' responses, we identified any possible limitations and determined the effect, if any, those limitations would have on our findings. We also spoke with agency officials to clarify how the budget data were used and to ensure that we had a good understanding of how to interpret the data for our purposes. We also reviewed the fiscal year 2009 through fiscal year 2012 reset budget data provided to make sure that the formulas in the database were accurate for the data we planned to use. Based on all of these actions, we did not find any areas of concern with the data and we determined that the data used from the Global War on Terror Resources Information Database were sufficiently reliable for our purposes. 
To determine the reliability of the reset budget data for aviation equipment provided by Navy and Marine Corps officials from the Program Budget Information System, Navy Enterprise Resource Planning system, and the Justification Management System, we assessed the reliability of the budget data by obtaining and reviewing agency officials' responses to our data reliability questionnaires. Based on our review of Navy and Marine Corps officials' responses, we identified any possible limitations and determined the effect, if any, those limitations would have on our findings. We also spoke with agency officials to clarify how the budget data were used and to ensure that we had a good understanding of how to interpret the data for our purposes. Based on all of these actions, we did not find any areas of concern with the data and we determined that the data used from the Program Budget Information System, Navy Enterprise Resource Planning system, and the Justification Management System were sufficiently reliable for our purposes. 
To address each of our objectives, we also spoke with officials, and obtained documentation when applicable, at the following locations:

• Office of the Under Secretary of Defense for Acquisitions, Technology and Logistics, Assistant Director of Defense for Material Readiness
• Office of the Secretary of Defense for Cost Assessment and Program Evaluation
• Office of the Under Secretary of Defense (Comptroller)
• Assistant Secretary of the Navy, Financial Management and Comptroller; Navy Financial Management Branch
• Naval Air Systems Command Reset Project Office
• Naval Air Systems Command Comptroller Office
• Naval Air Systems Command Naval Aviation Enterprise War Council
• Headquarters Marine Corps Deputy Commandant for Installations and Logistics
• Headquarters Marine Corps Deputy Commandant for Plans, Policies, and Operations
• Headquarters Marine Corps Deputy Commandant for Marine Corps
• Headquarters Marine Corps Deputy Commandant for Programs and Resources
• Headquarters Marine Corps Deputy Commandant, Aviation
• Marine Corps Systems Command
• Marine Corps Logistics Command

We conducted this performance audit from November 2010 through August 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides further details on funding for Marine Corps equipment reset for fiscal years (FY) 2009 to 2012. Tables 2 and 3 provide a summary of funds that were budgeted or requested to reset ground and aviation equipment. The Marine Corps' top 10 ground equipment reset procurement items totaled approximately $365 million and accounted for approximately 90 percent of their total reset procurement funding in fiscal year 2010. 
Table 4 provides a summary of the procurement reset funding budgeted for these ground equipment items. In addition to the contact named above, Larry Junek, Assistant Director; Tamiya Lunsford; Stephanie Moriarty; Cynthia Saunders; John Van Schaik; Michael Willems; Monique Williams; Erik Wilkins-McKee; Tom Gosling; William Graveline; Asif Khan; Thomas McCool; Charles Perdue; Gregory Pugnetti; and William Woods made key contributions to this report.
The U.S. Marine Corps received approximately $16 billion in appropriated funds between fiscal years 2006 and 2010 for reset of aviation and ground equipment that has been degraded, damaged, and destroyed during overseas contingency operations. Reset encompasses activities for repairing, upgrading, or replacing equipment used in contingency operations. The Marine Corps continues to request funding to reset equipment used in Afghanistan. GAO initiated this review under its authority to address significant issues of broad interest to the Congress. GAO's objectives were to evaluate the extent to which the Marine Corps has made progress toward (1) developing effective reset strategies for both aviation and ground equipment used in Afghanistan and (2) providing accurate estimates of total reset costs. The Marine Corps has developed a strategic plan that addresses the reset of aviation equipment used in operations in Afghanistan and includes the elements of a comprehensive, results-oriented strategic planning framework. However, a reset strategy for ground equipment has not yet been developed. The Marine Corps is taking steps to develop such a strategy; however, the timeline for completing and issuing this strategy is uncertain. Although Marine Corps officials agreed that a reset strategy for ground equipment will be needed, they stated that they do not plan to issue a strategy until there is a better understanding of the dates for drawdown of forces from Afghanistan. While more specific drawdown information is desirable and will be needed to firm up reset plans, the President stated that troops would begin to withdraw in July 2011, working toward a transfer of all security operations to Afghan National Security Forces by 2014. Until the ground equipment reset strategy is issued, it may be difficult for the Marine Corps Logistics Command to establish firm plans for reset and to effectively manage the rotation of equipment to units to sustain combat operations. 
It is also uncertain to what extent the Marine Corps plans to align its ground equipment reset strategy with its ground equipment modernization plan. GAO found that the Iraq reset strategy for ground equipment contained no direct reference to the service's equipment modernization plans, leaving unclear the relationship between reset and modernization. A clear alignment of the ground equipment reset strategy for Afghanistan and modernization plans would help to ensure that the identification, development, and integration of warfighting capabilities also factor in equipment reset strategies so that equipment planned for modernization is not unnecessarily repaired. The total costs of reset estimated by the Marine Corps may not be accurate or consistent because of differing definitions of reset that have been used for aviation and ground equipment. These differing definitions exist because the Department of Defense (DOD) has not established a single standard definition for use in DOD's budget process. Specifically, the Marine Corps does not include aviation equipment procurement costs when estimating total reset costs. According to Marine Corps officials, procurement costs are excluded because such costs are not consistent with its definition of aviation equipment reset. In contrast, the Marine Corps' definition of reset for ground equipment includes procurement costs to replace theater losses. However, GAO found that the Office of the Secretary of Defense Director of Cost Assessment and Program Evaluation had obtained a procurement cost estimate for Marine Corps aviation equipment as part of its efforts to track reset costs for the department. DOD's Resource Management Decision 700 tasks the Office of the Secretary of Defense Director of Cost Assessment and Program Evaluation to provide annual departmentwide reset updates. 
GAO recommends that the Secretary of Defense (1) establish a timeline for issuing formal reset planning guidance and a ground equipment reset strategy for equipment used in operations in Afghanistan, (2) provide linkages between the ground equipment reset strategy and the modernization plan, and (3) develop and publish a DOD definition of reset for use in the DOD overseas contingency operations budgeting process. DOD concurred with one and partially concurred with two of the recommendations.
The Army has 10 active duty divisions, as listed in appendix II. Six of these divisions are called heavy divisions because they are equipped with large numbers of tanks, called armor. Two other divisions are called light divisions because they have no armor. The remaining two divisions are an airborne division and an air assault division. Heavy divisions accounted for the majority of the Army’s division training funds, about 70 percent ($808 million) in fiscal year 2000, and these divisions are the focus of this report. The Army uses a building block approach to train its armor forces— beginning with individual training and building up to brigade-sized unit training, as shown in figure 1. This training approach is documented in the Army’s Combined Arms Training Strategy (CATS). The strategy identifies the critical tasks, called mission essential tasks, that units need to be capable of performing in time of war and the type of events or exercises and the frequency with which the units train to the task to produce a combat ready force. The strategy, in turn, guides the development of unit training plans. The Army uses CATS as the basis for determining its training budget. To do this, it uses models to convert training events into budgetary resources, as shown in figure 2. For armor units, the Battalion Level Training Model translates the type of training events identified in CATS and the frequency with which they should be conducted into the number of tank miles to be driven in conducting those training events. The Army then uses another model, the Training Resource Model, to compute the estimated training cost for units based on the previous 3 years’ cost experience. The output from these models is the basis for the Army’s training budget. CATS, in combination with the Battalion Level Training Model, has established that the tanks in armor units will be driven, on average, about 800 miles each year for home station training. 
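The two-model conversion described above can be sketched as a simple calculation. This is only an illustration of the mechanics the report describes; the event names, mileage factors, and per-mile cost rates below are hypothetical, not actual CATS, Battalion Level Training Model, or Training Resource Model values.

```python
# Illustrative sketch of the Army's two-step budget conversion:
# CATS training events -> tank miles (Battalion Level Training Model),
# then tank miles -> dollars (Training Resource Model).
# All event names, mileage factors, and cost rates are hypothetical.

# Step 1: translate planned training events into tank miles.
events = {                        # event: (miles per event, events per year)
    "platoon_gunnery":   (120, 2),
    "company_maneuver":  (150, 2),
    "battalion_fieldex": (100, 1),
}
tank_miles = sum(miles * freq for miles, freq in events.values())

# Step 2: estimate cost using a rate derived from the previous
# 3 years' cost experience (hypothetical dollars per tank mile).
prior_cost_per_mile = [210.0, 225.0, 240.0]
rate = sum(prior_cost_per_mile) / len(prior_cost_per_mile)
budget_estimate = tank_miles * rate

print(tank_miles)                 # miles budgeted per tank per year
print(round(budget_estimate, 2))  # estimated annual cost per tank
```

In the Army's actual models, the CATS event list and cost factors yield the roughly 800 miles per tank per year for home station training cited above.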
This is the level of training the Army has identified as needed to have a combat ready force, and its budget request states that it includes funds necessary to support that training. While the Army uses the 800-tank mile goal as a tool to develop its divisions’ home station budgets, it does not identify the number of tank miles to be driven in its training guidance and training field manuals as a training requirement nor does it mention the miles in unit training plans. To measure the readiness of its units, the Army uses the Global Status of Resources and Training System. Unit commanders use this readiness system to report their units’ overall readiness level. Under this readiness system, each reporting unit provides information monthly on the current level of personnel, equipment on hand, equipment serviceability, and training, and the commander’s overall assessment of the unit’s readiness to undertake its wartime mission. Units can be rated on a scale of C-1 to C-5. A C-1 unit can undertake the full wartime mission for which it is organized and designed; a C-2 unit can undertake most of its wartime mission; a C-3 unit can undertake many but not all elements of its wartime mission; a C-4 unit requires additional resources or training to undertake its wartime mission; and a C-5 unit is not prepared to undertake its wartime mission. Currently, the training readiness portion of the readiness report reflects the commander’s assessment of the number of training days that are needed for the unit to be fully combat ready. In addition to the Army setting a training goal of 800 miles for tanks located at unit home stations, in its performance report for fiscal year 1999, DOD began to use 800 tank training miles, including miles driven at units’ home station and the National Training Center, as a performance benchmark for measuring near-term readiness in responding to the Government Performance and Results Act. 
This act is a key component of a statutory framework that Congress put in place during the 1990s to promote a new focus on results. The Army is continuing to move training funds planned for its tank divisions to other purposes. Budget requests should reflect the funds needed to conduct an organization’s activities and its spending priorities. The Army’s budget request for tank division training includes funding needed to conduct 800 miles of unit home station tank training. However, each year since at least the mid-1990s, the Army has obligated millions of dollars less than it budgets to conduct training, and tanks have not trained to the 800-mile level. For the 4-year period fiscal years 1997 through 2000, the Army obligated a total of almost $1 billion less than Congress provided for training all its divisions. At the same time, the Army trained on its tanks an annual average of 591 miles at home station. Beginning with fiscal year 2001, the Army is taking action to restrict moving tank training funds. Each fiscal year the Army develops a budget request to fund, among other activities and programs, the operation of its land forces. The largest component of the land forces budget is for training the Army’s 10 active- duty divisions. The Army, through the President’s budget submission, requests more than $1 billion annually in O&M funds to conduct home station division training. The majority of this budget request is for the Army’s six heavy divisions to use for unit training purposes. Over the last 4 years, Congress has provided the Army with the training funds it has requested. For much of the past decade, the Army has moved some of these funds from its division training to other purposes, such as base operations and real property maintenance. We previously reported that this occurred in fiscal years 1993 and 1994 and our current work shows that the Army continues to move training funds to other purposes. 
Although the Army has moved funds from all of its land forces subactivities, as shown in table 1, for the 4-year period fiscal years 1997 through 2000, it moved the most funds from its subactivity planned for division training. Although the Army has moved the most funds out of its division training subactivity, the amount moved has decreased over the past 2 years, as shown in figure 3. Despite the recent decrease in training funds moved from the divisions, the Army moved almost $190 million in fiscal year 2000. Most of the movement of training funds occurred within the Army's six heavy divisions. As shown in table 2, $117.7 million of the $189.7 million in division funds that were moved in fiscal year 2000 occurred in the heavy divisions. Although O&M funds cannot generally be traced dollar for dollar to their ultimate disposition, an analysis of funds obligated compared to the funds conferees initially designated shows which subactivities within the Army's O&M account had their funding increased or decreased during the budget year. Generally, the Army obligated funds planned for training its divisions for other purposes such as base operations, real property maintenance, and operational readiness (such as maintaining its training ranges). Although the Army budgets to achieve 800 tank miles for home station training, it has consistently achieved less than the 800 training miles for the last 4 years (see fig. 4). During this period, armor units missed the 800-tank mile goal by an annual average of about 26 percent. Recently, however, the number of home station tank miles achieved increased, from 568 miles in fiscal year 1999 to 655 miles in fiscal year 2000. There are some valid reasons for not achieving the 800-tank mile goal at home station, which are described below. The Army, however, does not adjust its tank mile goal to reflect these reasons. The Army develops its data on tank mile achievement from each unit's tank odometer readings. 
Some home station training, however, does not involve driving tanks. Specifically, the 800-tank mile goal for home station training includes a 60 tank mile increment that some units can conduct through the use of training simulators. These 60 miles are included in the funding for the 800-tank miles, but they are not reflected in tank mile reporting because they are not driven on real tanks. In addition, deployment to contingency operations, such as the ones in the Balkans (Bosnia and Kosovo), affects both the available funding and the amount of training that can be conducted at home station. For example, when armor units are deployed to the Balkans they are not able to conduct their normal home station training. During fiscal year 1999, for example, the 1st Cavalry Division deployed to the Balkans for 11 months. Consequently, the division did very little home station training, which affected the Army-wide average home station tank training miles achieved for that year—specifically, an average of 568 tank training miles. However, if the Army had excluded the 1st Cavalry Division because it was deployed to the Balkans for most of that fiscal year, the Army-wide average home station tank mile training would have increased to 655 miles, nearly 90 miles more. In addition, the Army moved and used the funds associated with this missed training to offset the cost of Balkan operations. Although the magnitude of funding shifted to support contingency operations varies annually, the Army does not adjust its methodology and reporting to reflect the tank training miles associated with these cost offsets. Even though the Army is not conducting 800 tank miles of home station training, its armor units are still able to execute their unit training events. 
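The effect of one deployed division on the Army-wide average can be illustrated with a short calculation using the fiscal year 1999 figures above. The assumption that the Army-wide figure is a simple average across the six heavy divisions is ours, made only for illustration; the implied mileage for the deployed division is a back-of-the-envelope result, not a reported value.

```python
# Illustration: how one deployed division pulls down the Army-wide
# average of home station tank miles. FY 1999 figures from the report:
# 568 miles with the 1st Cavalry Division included, 655 miles with it
# excluded. Assumes (for illustration only) a simple average across
# the six heavy divisions.

n_divisions = 6
avg_all = 568          # reported average, all divisions included
avg_excluding = 655    # reported average, deployed division excluded

# Miles implied for the deployed division alone:
deployed_miles = n_divisions * avg_all - (n_divisions - 1) * avg_excluding
print(deployed_miles)
```

Under this assumption, the deployed division would have driven only about 133 home station miles, consistent with the report's observation that it "did very little home station training" while in the Balkans.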
During our work at five of the Army's six heavy divisions, we found no evidence to demonstrate that scheduled training events had been delayed or canceled in recent years because training funds were moved out of the division subactivity to other purposes. Training events included those at a unit's home station and at the Army's National Training Center and its Combat Maneuver Training Center. Unit trainers told us that if scheduled training had to be canceled or delayed, it likely would be for reasons such as deployments or bad weather. In addition, when unit trainers establish their training plans for certain training events, they focus on achieving the unit's mission essential tasks and not on how many miles will be driven on the tanks. According to the Army, units can execute their training plans despite funds being moved for several reasons: most of the movement of funds occurs before the divisions receive the funds; division trainers, using past experience, anticipate the amount of training funds they will likely receive from higher commands and adjust their training plans accordingly; and the intensity of a training event can be modified to fit within available funding by taking steps such as driving fewer miles and transporting, rather than driving, tanks to training ranges. In fiscal year 2001, the Army implemented an initiative to protect training funds from being moved that should result in the Army's using these training dollars for the purposes originally planned. Senior Army leadership directed that for fiscal year 2001, Army land forces would be exempt from any budget adjustments within the discretion of Army headquarters. The senior leadership also required that Army commands obtain prior approval from Army headquarters before reducing training funds. 
However, subactivities within the Army's O&M account that have received these funds in the past—such as real property maintenance, base operations, and operational readiness—may receive less funding unless the Army requests more funds for these subactivities in the future. At the time of our work, this initiative had been in effect for only a few months; thus, we believe it is too early to assess its success in restricting the movement of training funds. Army readiness assessments reported in the Global Status of Resources and Training System show that for the last 4 fiscal years, armor units have consistently reported high levels of readiness, despite reduced training funding and not achieving their tank mile goals. This readiness assessment system does not require that tank miles driven be considered as an explicit factor when a unit commander determines the unit's training or overall readiness posture. In fact, the number of tank miles driven is not mentioned in readiness reporting regulations. We analyzed monthly Global Status of Resources and Training System data to see how often active-duty Army armor units were reporting readiness at high levels and lower levels. Our analysis showed that most armor units reported high overall readiness for fiscal years 1997 through 2000. In our analysis of monthly readiness reports for fiscal years 1997 through 2000, we found that when armor units reported lower overall readiness the reason was usually personnel readiness. In reviewing comments of commanders who reported degraded readiness for the same period, we found that insufficient funding was rarely cited as a cause of degraded readiness. Only a handful of unit reports filed in the 4-year period covering fiscal years 1997 through 2000 identified instances in which a shortage of funds was cited as a factor in reporting lower readiness levels. 
During the same period, when commanders cited training as the reason for reporting lower overall readiness, they never cited insufficient funding as a cause. Not only do unit commanders report on their overall readiness levels, but they also are required to report on the four subareas that make up overall readiness. These subareas are current readiness levels of personnel, equipment on hand, equipment serviceability, and training. For the training readiness component, a unit’s training status rating is based on a commander’s estimate of the number of training days required for the unit to become proficient in its wartime mission. Our analysis of these readiness reports showed that most armor units reported that their training status was high throughout fiscal years 1997 through 2000. There seems to be no direct relationship between average tank miles achieved and reported training readiness. There were times when tank miles achieved (1) increased while the proportion of time units reporting high readiness levels declined and (2) declined while the proportion of units reporting high readiness levels increased. For example, tank miles achieved rose more than 25 percent between the second and third quarters of fiscal year 2000 while the proportion of time units were reporting high readiness levels declined. Conversely, tank miles achieved fell by more than 20 percent between the third and fourth quarters of fiscal year 1999 while the proportion of time units were reporting high readiness levels increased. Both the Army and DOD provide Congress with information on tank miles achieved, but reporting is incomplete and inconsistent. The Army reports tank miles achieved to Congress as part of DOD’s annual budget documentation. DOD reports tank miles achieved as part of its reporting under the Government Performance and Results Act. Army units train on their tanks at their home stations, at major training centers, and in Kuwait in concert with Kuwait’s military forces. 
All armor training contributes to the Army’s goal of having a trained and ready combat force. However, we found that the categories of tank training the Army includes in its annual budget documentation vary from year to year, and that the categories the Army includes in its budget documents differ from those DOD includes in its Results Act reporting. In addition to home station training, Army units conduct training away from home station. This additional training is paid for from different budget subactivities within the Army’s O&M account and thus is not included in the Army’s budget request for funds to conduct 800 miles of home station training. One such subactivity funds training at the National Training Center. Armor units based in the United States train at the National Training Center on average once every 18 months. Based on congressional guidance, the Army includes funds for this training in a separate budget subactivity. This subactivity, in essence, pays for tank training miles in addition to the 800 miles of home station training that is funded in the divisions’ training subactivity. During fiscal years 1997 through 2000, National Training Center training added an annual average of 87 miles to overall Army tank training, in addition to the average of 591 miles of home station training. Because, through fiscal year 2000, these miles were driven on prepositioned equipment rather than on a unit’s own tanks, they appropriately have not been included in the home station training activity. Beginning in fiscal year 2001, the Army plans to have an as yet undetermined number of units transport their own tanks for use at the National Training Center. As this occurs, these units will report National Training Center tank miles achieved as part of their home station training. The Army is examining how to adjust the division and National Training Center budget subactivities to reflect this shift. 
Similarly, some armor units conduct training in Kuwait in conjunction with Kuwait’s military forces in a training exercise called Desert Spring (formerly called Intrinsic Action). Kuwait pays part of the cost of this training, and the balance is paid from funds appropriated for contingency operations. The tanks used for this training are prepositioned in Kuwait. Over the last 4 fiscal years, this training added an annual average of about 40 miles to overall Army tank training and was also appropriately not included in the home station training activity. However, this training also contributed to the Army’s goal of having a trained and ready combat force. As shown in figure 5, when the miles associated with this additional training are included, an average of about 127 miles was added to annual overall tank miles achieved for fiscal years 1997 through 2000. The Army has not been consistent in reporting these miles. We found that in some years the Army included these miles in its reporting on tank miles achieved and in some years it did not. For example, for fiscal year 1999, the latest year for which such data were available, the Army reported only home station tank miles in its budget submission, while for fiscal year 1998 it reported both home station and National Training Center miles. Further, the Army did not include tank miles driven in Kuwait in either year. In fiscal year 1999, DOD began to report on the Army’s achievement of 800 tank miles of training as one of its performance goals under the Government Performance and Results Act. The Results Act seeks to strengthen federal decision-making and accountability by focusing on the results of federal activities and spending. A key expectation is that Congress will gain a clearer understanding of what is being achieved in relation to what is being spent. 
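The relationship among the tank-mile averages discussed above can be sketched as simple arithmetic. This is a minimal illustration using the annual averages reported in this section (591 home station miles, 87 National Training Center miles, and 40 Kuwait miles for fiscal years 1997 through 2000):

```python
# Arithmetic behind the tank-mile averages cited in this section
# (annual averages, fiscal years 1997 through 2000).
home_station_miles = 591  # average home station training miles per tank
ntc_miles = 87            # average added by National Training Center rotations
kuwait_miles = 40         # average added by Desert Spring training in Kuwait

added_miles = ntc_miles + kuwait_miles           # miles beyond home station
overall_miles = home_station_miles + added_miles # overall average per tank

print(added_miles)          # 127 additional miles, consistent with figure 5
print(overall_miles)        # 718 overall average miles
print(800 - overall_miles)  # 82-mile shortfall against the 800-mile goal
```

The 82-mile shortfall is a derived figure, not one the report states; it simply follows from comparing the combined average against the 800-mile home station goal.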
To accomplish this, the act requires that agencies prepare annual performance plans containing annual performance goals covering the program activities in agencies’ budget requests. The act aims for a closer and clearer link between the process of allocating resources and the expected results to be achieved with those resources. Agency plans that meet these expectations can provide Congress with useful information on the performance consequences of budget decisions. In its Results Act reporting, DOD is using a different training goal than the Army and, depending on the year, is including different categories of training. In response to the Results Act, DOD stated in its fiscal year 1999 performance plan that it planned to use 800 tank miles of training as one of its performance goals for measuring short-term readiness. In DOD’s performance report for 1999, DOD reported, among other performance measures, how well it achieved its training mile goal for tanks. In its reporting on progress toward the goal, DOD included mileage associated with training at the National Training Center in its tank mile reporting. As discussed previously, for the Army, the 800-tank mile goal relates exclusively to home station training; tank miles achieved at the National Training Center are funded through a separate subactivity within the Army’s O&M account, and tank miles achieved in Kuwait are paid for in part by Kuwait and in part by funds appropriated for contingency operations. In addition, because the Army has varied the categories of training (home station and National Training Center) it includes in its budget submission reporting, depending on the year, the Army and DOD are sometimes using different bases for their tank mile achievement reporting. As a result, Congress is being provided confusing information about what the 800-tank mile goal represents. 
Because the Army (1) has consistently not obligated all of its O&M unit training funds for the purposes for which it told Congress it needed them, (2) continues to conduct its required training events, and (3) reports that its heavy divisions remain trained and in a high state of readiness, questions arise as to the Army’s proposed use of funds within its O&M account. In addition, the different ways in which the Army and DOD report tank mile training result in Congress receiving conflicting information. Because Congress is not provided with clear and consistent information on Army tank training, the usefulness of the information is diminished. To better reflect Army funding needs and more fully portray all of its tank training, we recommend that the Secretary of the Army (1) reexamine the Army’s proposed use of funds in its annual O&M budget submission, particularly with regard to the funds identified for division training and for other activities such as base operations and real property maintenance, and (2) improve the information contained in the Army’s budget documentation by identifying more clearly the elements discussed in this report, such as (a) all funds associated with tank mile training; (b) the type of training conducted (home station, simulator, and National Training Center); (c) the training that could not be undertaken due to Balkan and any future deployments; (d) the budget subactivities within its O&M account that fund the training; and (e) the training conducted in, and paid for in part by, Kuwait. To provide Congress with a clearer understanding of tank training, we also recommend that the Secretary of Defense, in concert with the Secretary of the Army, develop consistent tank training performance goals and tank mile reporting for use in Army budget submissions and under the Results Act. DOD provided written comments on a draft of this report, which are reprinted in appendix III. 
DOD fully agreed with our two recommendations concerning improving the information provided to Congress and in part with our recommendation concerning reexamining its O&M funding request. DOD agreed that the Army should reexamine its funding request in all areas of its O&M budget submission. However, DOD objected to the implication that the Army was requesting too much funding for division training and noted that since we had not assessed the level of division training necessary to meet approved Army standards, any conclusion as to the adequacy of training funds is inappropriate. We did not directly examine whether the Army was training to its approved standards. We did examine whether the Army had delayed or canceled training due to the movement of funds. We found no evidence to demonstrate that scheduled training events had been delayed or canceled in recent years because training funds were moved. We also found that Army unit trainers plan their training events to focus on their mission essential tasks. These tasks form the basis of the Army’s training strategy. While we believe that our findings, including the Army’s movement of almost $1 billion—about 21 percent—of its division training funds to other O&M budget subactivities over the 4-year period fiscal years 1997 through 2000 suggest a need to reexamine the Army’s proposed use of funds within that subactivity, we did not conclude that the Army was requesting too much funding in some areas and not enough in others. As noted above, DOD concurs that the Army should make such a reexamination. We have, however, clarified our recommendation to focus on the need to reexamine the Army’s planned use of funds. We are sending copies of this report to the Secretary of Defense; the Under Secretary of Defense (Comptroller and Chief Financial Officer); the Secretary of the Army; and the Director, Office of Management and Budget. We will make copies available to others on request. 
If you or your staff have any questions concerning this report, please call me at (757) 552-8100. This report was prepared under the direction of Steve Sternlieb, Assistant Director. Major contributors to this report were Howard Deshong, Brenda Farrell, Madelon Savaides, Frank Smith, Leo Sullivan, and Laura Talbott. To identify whether the Army is continuing to move training funds planned for its divisions, we examined Army budget submissions, the Secretary of Defense’s high priority readiness reports to Congress, appropriations acts for the Department of Defense (DOD), and the conference reports on those acts. We focused our analysis on fiscal years 1997 through 2000. We began with fiscal year 1997 because the Army had revised its operation and maintenance (O&M) budget structure for operating forces beginning in that year. We extracted data from these documents to compare the amounts congressional conferees initially designated for the Army’s operation of its land forces, including its divisions, to those the Army reported as obligated. We also obtained Army data on tank miles achieved for the Army overall and by armor battalion. To understand how the Army trains its armor forces to be combat ready as well as to ascertain how Army units adjust to reduced funding and whether the Army had canceled or delayed any scheduled training due to the movement of training funds, we obtained briefings, reviewed training documents, and interviewed Army personnel at a variety of locations, including Army headquarters, the Army’s Forces Command and U.S. Army Europe, five of the six heavy divisions in both the United States and Europe, and the Army’s school for armor doctrine and training. We also analyzed tank mile data from the Army’s Cost and Economic Analysis Center. To assess the reported readiness of Army tank units, we examined monthly readiness reporting data from DOD’s Global Status of Resources and Training System for fiscal years 1997 through 2000. 
We examined both the reported overall readiness and the training component of the readiness reports. We reviewed this system’s readiness status ratings to determine (1) what level of readiness units were reporting, (2) whether unit readiness had declined, (3) whether training readiness was determined to be the primary cause for any decline in readiness, and (4) whether unit commanders had attributed training funding shortfalls as the cause for any decline in readiness levels. To assess whether DOD and the Army are providing Congress with complete and consistent information regarding armor training, we compared Army budget submissions with Army tank training data and DOD’s report on its performance required by the Government Performance and Results Act. We also discussed overall training versus home station training and the differences between Army and Results Act reports with Army officials. Our review was conducted from March 2000 through January 2001 in accordance with generally accepted government auditing standards.
Congress has expressed concern about the extent to which the Department of Defense has moved funds that directly affect military readiness, such as those that finance training, to pay for other subactivities within its operation and maintenance (O&M) account, such as real property maintenance and base operations. This report reviews the (1) Army's obligation of O&M division training funds and (2) readiness of the Army's divisions. GAO found that the Army continued to use division training funds for purposes other than training during fiscal year 2000. However, the reduced funding did not interfere with the Army's planned training events or exercises. The Army's tank units also reported that, despite the reduced funding and their failure to meet their tank mileage performance goal, their readiness remained high. Specifically, many tank units reported that they could be fully trained for their wartime mission within a short time period. Units that reported that they would need more time to become fully trained generally cited personnel issues rather than the lack of training funds as the reason. Even so, starting in fiscal year 2001, the Army has taken action to restrict moving training funds by exempting unit training funds from any Army headquarters' adjustments and requiring prior approval before Army commands move any training funds.
According to INS, the estimated population of undocumented aliens in the United States increased from 3.5 million in 1990 to about 7 million in 2000. Many states that had relatively few undocumented aliens in 1990 experienced rapid growth of this population during the decade. The estimated number of undocumented aliens residing in Georgia, for example, rose from 34,000 in 1990 to 228,000 in 2000. INS estimates indicate that the vast majority of undocumented aliens were concentrated in a few states and that nearly 70 percent were from Mexico. Undocumented aliens’ use of medical services has been a long-standing issue for hospitals, particularly among those located along the U.S.-Mexican border. As required by the Emergency Medical Treatment and Active Labor Act (EMTALA), hospitals participating in Medicare must medically screen all persons seeking emergency care and provide the treatment necessary to stabilize those determined to have an emergency condition, regardless of income or immigration status. Two recent studies have reported on hospitals’ provision of care to undocumented aliens, but they were limited in scope. National data sources on health insurance coverage do not report the extent to which undocumented aliens have health insurance or are otherwise able to pay for their medical care. Available data on the broader category of foreign-born noncitizens suggest that a large proportion may be unable to pay for their medical care. A U.S. Census Bureau report indicates that in 2002, more than 40 percent of foreign-born noncitizens residing in the United States, including undocumented and some lawful permanent resident aliens, lacked health insurance. Homeland Security’s Bureau of Customs and Border Protection is responsible for securing the nation’s borders. The bureau’s Border Patrol is responsible for detecting and apprehending persons who attempt to enter illegally between official ports of entry. The bureau’s Office of Field Operations oversees U.S. 
port-of-entry officials who inspect and determine the admissibility of all individuals seeking to enter the United States at official ports of entry. Both Border Patrol agents and U.S. port-of-entry officials may come into contact with persons needing emergency medical care. For example, Border Patrol agents may encounter persons suffering from severe dehydration or who have been injured in vehicle accidents, and U.S. port-of-entry officials may encounter persons with urgent medical needs, such as burn victims, seeking entry because the closest capable medical facility is in the United States. Border Patrol operations are divided into 21 sectors, but more than 95 percent of Border Patrol apprehensions in 2002 occurred in 9 sectors bordering Mexico. Since the mid-1990s, the Border Patrol has been implementing a strategy to strengthen security and disrupt traditional pathways of illegal immigration along the border with Mexico. As we reported in August 2001, however, one of the strategy’s major effects has been a shift in illegal alien traffic from traditional urban crossing points such as San Diego, California, to harsher, more remote areas of the border. Rather than being deterred from illegal entry, many aliens have instead risked injury and death trying to cross mountains, deserts, and rivers. To reduce the number of undocumented aliens who die or are injured trying to cross the border illegally, INS in 1998 created the Border Safety Initiative, whose focus includes searching for and rescuing those who may have become lost. One element of the initiative is tracking the number of aliens whom Border Patrol agents rescue, a subset of all Border Patrol encounters with sick or injured aliens. U.S. port-of-entry officials inspect and determine the admissibility of persons seeking entry at air, land, and sea ports of entry around the country. 
Along the U.S.-Mexican border, officials at the 24 land ports of entry, which cover 43 separate crossing points, conducted more than 250 million inspections in fiscal year 2003. The Secretary of Homeland Security may parole—that is, allow temporary access into the United States—an otherwise inadmissible alien for urgent humanitarian reasons, such as treatment for an emergency medical condition. The impact of undocumented aliens on hospitals’ uncompensated care costs remains uncertain. Determining the number of undocumented aliens treated at a hospital is challenging because hospitals generally do not collect information on patients’ immigration status and because undocumented aliens are reluctant to identify themselves. After speaking with experts and hospital administrators, we determined that one potentially feasible method for hospitals to estimate this population is to identify patients without a Social Security number, recognizing that this proxy can over- or underestimate undocumented aliens. We surveyed 503 hospitals in 10 states to collect information on patients without a Social Security number and their effect on hospitals’ uncompensated care levels—that is, uncompensated care costs as a percentage of total hospital expenses. We also included a question in the survey to determine what other methods, if any, hospitals were using to track undocumented aliens to help assess how well patients without a Social Security number served as a proxy for this population. Despite a concerted follow-up effort, we did not receive a sufficient survey response to assess the impact of undocumented aliens on hospitals’ uncompensated care levels or to evaluate the lack of a Social Security number as a proxy for undocumented aliens. (Details on our survey methods and analysis appear in app. I.) 
Although about 70 percent of hospitals responded to the survey, only 39 percent provided sufficient information to evaluate the relationship between uncompensated care levels and the proportion of care provided to patients without a Social Security number. Of all responding hospitals, fewer than 5 percent reported having a method other than the lack of a Social Security number alone to identify their undocumented alien patients, and the methods used by these hospitals varied. For example, one hospital identified undocumented aliens as those who were both Hispanic and lacked a Social Security number; other hospitals identified undocumented alien patients through foreign addresses or information from patient interviews. Furthermore, the estimates produced by these other methods were inconsistent with those produced by using lack of Social Security number alone. Because we did not receive a sufficient survey response rate and because we were unable to assess the accuracy of the proxy, we could not determine the effect of undocumented aliens on hospital uncompensated care levels. Until better information is available, assessing the relationship between this population and hospitals’ uncompensated care levels will continue to pose methodological challenges. Some federal funding has been available to assist with hospitals’ costs of treating undocumented aliens, but this funding has not covered care of all undocumented aliens or all hospital services, and not all hospitals receive it. Two funding sources are available through the Medicaid program. First, Medicaid provides some coverage for eligible undocumented aliens, such as low-income children and pregnant women. Not all undocumented aliens are eligible for or enrolled in Medicaid, however, and this coverage is limited to emergency medical services, including emergency labor and delivery. 
Second, Medicaid DSH adjustments are available to some hospitals treating relatively large numbers of low-income patients, including undocumented aliens. Finally, under the provisions of BBA, $25 million was available annually, from fiscal years 1998 through 2001, to assist certain states with their costs of providing emergency services to undocumented aliens regardless of Medicaid eligibility. According to state Medicaid officials in the states we reviewed, states used these funds to help recover the state share of Medicaid expenditures for undocumented aliens, and not to recover hospitals’ costs of care for undocumented aliens not eligible for Medicaid. Recent legislation appropriated additional federal funding—$250 million annually for fiscal years 2005 through 2008—for payments to hospitals and other eligible providers for emergency medical services delivered to undocumented and certain other aliens. Undocumented aliens may qualify for Medicaid coverage for treatment of an emergency condition if, except for their immigration status, they meet Medicaid eligibility requirements. Medicaid coverage is also limited to care and services necessary for treatment of emergency conditions for certain legal aliens—including lawful permanent resident aliens who have resided in the United States for less than 5 years and aliens admitted into the United States for a limited time, such as some temporary workers. We refer to Medicaid coverage for these groups of individuals—that is, those whose coverage is limited to treatment of emergency conditions—as emergency Medicaid. Because immigration status is a factor when states determine an individual’s Medicaid coverage, people applying for Medicaid are asked about their citizenship and immigration status as a part of the Medicaid eligibility determination process. State Medicaid officials in the 10 states that we reviewed reported spending more than $2 billion in fiscal year 2002 for emergency Medicaid expenditures (see table 1). 
Although states are not required to identify or report to CMS their Medicaid expenditures specific to undocumented aliens, several states provided data or otherwise suggested that most of their emergency Medicaid expenditures were for services provided to undocumented aliens. According to data provided by state Medicaid officials in 5 of the 10 states, at least half of emergency Medicaid expenditures in these states were for labor and delivery services for pregnant women. Emergency Medicaid expenditures in the 10 states have increased over the past several years but remain a small portion of each state’s total Medicaid expenditures. In 9 of the 10 states we reviewed, emergency Medicaid expenditures grew faster than the states’ total Medicaid expenditures from fiscal years 2000 to 2002. For example, while Georgia’s total Medicaid expenditures increased by 44 percent during this period, the state’s emergency Medicaid expenditures increased 349 percent—nearly eight times as fast. Nevertheless, emergency Medicaid expenditures in these states accounted for less than 3 percent of each state’s total Medicaid expenditures. Emergency Medicaid funding is limited in that not all undocumented aliens treated at hospitals are eligible for Medicaid, not all eligible undocumented aliens enroll in Medicaid, and not all hospital services provided to enrolled undocumented aliens are covered by Medicaid. Not all undocumented aliens are eligible for Medicaid. Undocumented aliens are eligible for emergency Medicaid coverage only if, except for immigration status, they meet Medicaid eligibility criteria applicable to citizens. Many state hospital association officials we interviewed commented that hospitals were concerned about undocumented aliens who do not qualify for Medicaid. 
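The Georgia growth comparison above can be checked with simple arithmetic, using the two growth rates the text reports for fiscal years 2000 through 2002:

```python
# Check of the Georgia comparison: emergency Medicaid expenditures grew
# 349 percent from fiscal year 2000 to 2002, versus 44 percent growth in
# total Medicaid expenditures over the same period.
emergency_growth = 3.49  # 349 percent growth, expressed as a fraction
total_growth = 0.44      # 44 percent growth, expressed as a fraction

ratio = emergency_growth / total_growth
print(round(ratio, 1))  # 7.9, i.e., "nearly eight times as fast"
```

The ratio of about 7.9 is consistent with the report's characterization of emergency Medicaid expenditures growing nearly eight times as fast as total Medicaid expenditures.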
To qualify, undocumented aliens must belong to a Medicaid-eligible category—such as children under 19 years of age, parents with children under 19, or pregnant women—and meet income and state residency requirements. Arizona hospital and Medicaid officials said that many undocumented aliens treated at their hospitals are only passing through the state and cannot meet Medicaid state residency requirements. However, comprehensive data are not available to determine the extent to which undocumented aliens receiving care in hospitals are not eligible for Medicaid coverage. Not all eligible undocumented aliens enroll in Medicaid. Factors besides eligibility may also influence the number of eligible undocumented aliens who actually enroll in Medicaid and receive coverage. According to officials in most state Medicaid offices and hospital associations we interviewed, fear of being discovered by immigration authorities is one factor that can deter undocumented aliens from enrolling. Enrollment in Medicaid involves filling out an application; providing personal information, such as income and place of residency; and, in some states, attending an interview. Also, because undocumented aliens are generally covered by Medicaid only for the duration of an emergency event, they may have to reenroll each time they receive emergency services. Not all hospital services provided to undocumented aliens enrolled in Medicaid are covered. Medicaid coverage for undocumented aliens is limited to treatment of an emergency medical condition. Hospital association officials in 7 of the 10 states we reviewed reported that hospitals are concerned about the cost of treatment for undocumented aliens that continues beyond emergency services and is not covered by Medicaid. Aside from anecdotal information, however, data are not available to determine the extent to which hospitals are treating undocumented aliens for nonemergency conditions. 
Further, within federal guidelines, the services covered under emergency Medicaid may vary from state to state. According to an eligibility expert in CMS’s Center for Medicaid and State Operations, the agency’s position is that each case needs to be evaluated on its own merits, and the determination of what constitutes an emergency medical service is left to the state Medicaid agency and its medical advisors. Medicaid DSH payments are another source of funding available to some hospitals that could help offset the costs of treating undocumented aliens. Under the Medicaid program, states make additional payments, called DSH adjustments, to qualified hospitals serving a disproportionate number of Medicaid beneficiaries and other low-income people, which can include undocumented aliens. As with other Medicaid expenditures, states receive federal matching funds for DSH payments to hospitals. Medicaid DSH allotments—the maximum federal contribution to DSH payments—totaled $5 billion in fiscal year 2002 in the 10 states we reviewed. Not all hospitals, however, receive these funds. In general, a hospital qualifies for DSH payments on the basis of the relative amount of Medicaid service or charity care it provides. Care provided to undocumented aliens could fall into one of these categories. The extent to which hospitals benefit from DSH payments depends on how states administer the DSH program. Medicaid officials in some states we reviewed said that some hospitals transfer money to the state to support the state’s share of the DSH program; such transfers reduce the net financial benefit of DSH payments to these hospitals. Federal funding provided under BBA was made available to help states recover their costs of emergency services furnished to undocumented aliens regardless of Medicaid eligibility; the states we reviewed opted to use this money to help recover the state share of emergency Medicaid expenditures. 
BBA made $25 million available for each of fiscal years 1998 through 2001 for distribution among the 12 states with the highest numbers of undocumented aliens. INS estimates of the undocumented alien population in 1996 were used to identify the 12 states. Seven of the 10 states we reviewed were eligible for a portion of these allotments; 6 of the 7 states claimed these funds. BBA allotments for these 6 states accounted for 91 percent of the $25 million available each year. States could use the funds to help recover (1) the state share of emergency Medicaid expenditures for undocumented aliens and/or (2) other state expenditures or those of political subdivisions of the state, for emergency services provided to those undocumented aliens not eligible for Medicaid. In each of the 6 states, Medicaid officials reported using the state’s entire BBA payment to recover a portion of what the state had already paid for undocumented aliens under emergency Medicaid. These funds were not used to cover hospitals’ costs for the care of undocumented aliens not eligible for Medicaid. In commenting on BBA funding, state hospital association officials in 5 of the 7 states we interviewed that were eligible for this funding said that the amount was too low. For example, in fiscal year 2001, BBA allotments for undocumented aliens for the two states with the largest ($11,335,298) and smallest ($651,780) allotments accounted for less than 2 percent of reported emergency Medicaid expenditures in those states. Officials from several state hospital associations, as well as from the American Hospital Association, reported that their members would like any additional federal funding for undocumented aliens to be distributed to hospitals more directly. 
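The "less than 2 percent" characterization above implies a rough floor on the emergency Medicaid expenditures in the two states cited. This is a derived implication, not a figure the report states, using only the fiscal year 2001 allotment amounts given in the text:

```python
# Implication check of the BBA figures above: if each state's fiscal year
# 2001 allotment was less than 2 percent of its reported emergency
# Medicaid expenditures, those expenditures must have exceeded
# allotment / 0.02 in each state.
largest_allotment = 11_335_298   # largest state allotment, FY 2001
smallest_allotment = 651_780     # smallest state allotment, FY 2001

floor_largest = largest_allotment / 0.02
floor_smallest = smallest_allotment / 0.02

print(round(floor_largest / 1e6))   # 567 (millions of dollars, minimum)
print(round(floor_smallest / 1e6))  # 33 (millions of dollars, minimum)
```

In other words, even the state with the largest allotment received a payment small relative to emergency Medicaid spending of at least roughly $567 million, which is consistent with hospital associations' view that the BBA amounts were too low.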
Some state hospital association and state Medicaid officials nevertheless acknowledged matters that would need to be addressed in order to distribute funds to hospitals for undocumented aliens not covered by emergency Medicaid, including how hospitals would identify, define, and document expenditures for emergency services provided to these undocumented aliens. As mentioned above, fewer than 5 percent of hospitals responding to our survey reported having a method for identifying undocumented alien patients other than tracking patients without a Social Security number. The recently enacted Medicare Prescription Drug, Improvement, and Modernization Act of 2003 appropriated additional funds, beginning in fiscal year 2005, for payments to hospitals and other providers for emergency medical services furnished to undocumented and certain other aliens. Section 1011 of the act appropriated $250 million for each of fiscal years 2005 through 2008 for this purpose. Two-thirds of the funds are to be distributed according to the estimated proportion of undocumented aliens residing in each state; the remaining one-third is designated for the six states with the highest number of apprehensions of undocumented aliens as reported by Homeland Security. These new funds are to be paid directly to eligible providers, such as hospitals, physicians, and ambulance services, for emergency medical services provided to undocumented and certain other aliens that are not otherwise reimbursed. Payment amounts will be the lesser of (1) the amount the provider demonstrates was incurred for provision of emergency services or (2) amounts determined under a methodology established by the Secretary of Health and Human Services. By September 1, 2004, the Secretary is required to establish a process for providers to request payments under the statute. Both Border Patrol agents and U.S. 
port-of-entry officials come into contact with people needing emergency medical assistance whom they refer or allow to enter for care, but in most situations, Homeland Security is not responsible for the resulting costs of emergency medical assistance. Homeland Security may cover medical expenses only of people taken into custody, but Border Patrol officials said that when they encounter people with serious injuries or medical conditions, they generally refer the individuals to local hospitals without first taking them into custody. The agency does not track the number of aliens it refers to hospitals in this fashion. Similarly, undocumented aliens arriving at U.S. ports of entry with emergency medical conditions may be granted humanitarian parole for urgent medical reasons, but they are not in custody, and Homeland Security is not responsible for their medical costs. Although the Border Patrol does not have an agencywide formal written policy regarding encounters with sick or injured persons, Border Patrol officials and documents we obtained indicate that the Border Patrol’s first priority in such encounters is to obtain medical assistance and, if necessary, arrange transportation to a medical facility. According to Border Patrol officials, agents generally do not take sick or injured persons into custody on the scene, and because the individuals are not in custody, Homeland Security is not responsible for their medical costs. Under federal law, the U.S. Public Health Service, within the Department of Health and Human Services, is authorized to pay the medical expenses of persons in the custody of immigration authorities. Under an interagency agreement, Homeland Security is responsible for reimbursing the Department of Health and Human Services for hospital care provided to such persons. 
The statute does not grant the Public Health Service the authority to cover the medical expenses of aliens not in custody, and therefore Homeland Security is not responsible for these medical costs. Border Patrol officials provided a number of different reasons for not first taking injured or sick persons they have encountered into custody. Several officials said, for example, that Border Patrol agents assume a humanitarian role when encountering persons needing emergency medical care, and their first concern is obtaining medical assistance. In addition, many officials said that an injured or sick person’s condition may affect his or her ability to reliably answer questions about immigration status. Some Border Patrol officials and documents indicated that taking all sick or injured persons into custody would not be consistent with the agency’s primary enforcement mission. They explained that the Border Patrol does not have the resources to pursue a prosecution of every possible violation of law, so agents exercise their prosecutorial discretion and concentrate resources on those violations that will produce maximum results in accomplishing their mission. Further, according to statute, an immigration officer may not arrest an alien without a warrant unless the officer has reason to believe that the person is in the United States in violation of immigration law and is likely to escape before a warrant can be obtained. Some officials maintained that when aliens encountered need medical attention and are considered unlikely to escape, they are generally not taken into custody. Border Patrol officials reported that in certain instances, agents may take particular persons into custody while they are in the hospital. For example, if agents encounter an individual who is of particular law enforcement interest—such as a suspected smuggler of drugs or aliens— they may take that individual into custody. Doing so may involve posting a guard at the hospital. 
In these circumstances, Homeland Security would assume responsibility for any costs of care once the individual is placed into custody. Border Patrol agents in the Miami sector encounter sick or injured aliens under conditions slightly different from those in the Southwest, but their practices in such encounters are generally consistent with those reported by the nine Southwest sectors and with Border Patrol’s general unwritten policy and practice. According to Miami sector officials, because the sector has fewer than 100 agents to cover more than 1,600 coastal miles in Florida, Georgia, South Carolina, and North Carolina, Miami sector agents typically come into contact with aliens in response to calls from other law enforcement agencies. If the other law enforcement agency called for local emergency medical services before Miami Border Patrol sector agents determined the person’s immigration status, Border Patrol agents would not take that person into custody and Homeland Security would not be responsible for his or her medical costs. According to Miami sector officials, Homeland Security is responsible for medical costs only for those people taken into custody after their immigration status has been determined, and agents follow up at the hospital only with these patients. If another law enforcement agency refers the person to the hospital, Border Patrol agents said they do not follow up unless called by the hospital upon the patient’s release, and then only if agents are available to respond. Undocumented aliens are also intercepted at sea by the U.S. Coast Guard. Coast Guard cutters have trained medical personnel on board, and according to officials in the agency’s Migrant Interdiction Division, when Coast Guard personnel encounter sick or injured undocumented aliens, their practice is to treat them at sea to the extent possible and return them to their home countries once they are stabilized. 
On occasion, persons encountered at sea with severe medical conditions may need to be transported to shore or directly to a hospital, but this situation rarely occurs. In fiscal year 2002, the Coast Guard brought 9 aliens to shore for medical care and in fiscal year 2003, brought in 14. According to Coast Guard officials, the agency has no responsibility to pay for care of those aliens brought to shore for medical treatment. It is unknown how often the Border Patrol refers sick or injured aliens not taken into custody to hospitals. Border Patrol officials said the agency does not track the total number of encounters with sick or injured persons. What is known is how much the Department of Health and Human Services pays for care, subject to reimbursement from Homeland Security, for those already in Border Patrol custody. In fiscal year 2003, the Department of Health and Human Services paid about $1.7 million in medical claims for people in Border Patrol custody, of which about $1.2 million was for hospital inpatient and outpatient expenses. Data are also available on Border Patrol encounters with aliens that the agency categorized as rescues—that is, incidents in which death or serious injury would have occurred had Border Patrol agents not responded—but these data do not include all encounters with aliens who were referred to hospitals without first having been taken into custody. Our analysis of Border Patrol rescue data for the nine sectors on the U.S.-Mexican border shows that in fiscal year 2002 about 360 suspected undocumented aliens were rescued and referred to hospitals for care. Rescued aliens were referred to hospitals for a variety of medical reasons, including heat exposure, possible heart attack, injuries, and complications from pregnancy. Nearly half the referrals occurred in the Tucson Border Patrol sector, which covers most of Arizona. Homeland Security is not authorized to pay the medical costs of aliens granted humanitarian parole at U.S. 
ports of entry for urgent medical reasons because these individuals are not in custody. Humanitarian paroles for urgent medical reasons are granted by port directors on a case-by-case basis and, according to most officials responsible for ports of entry whom we interviewed, only when the alien is in medical distress or a “life-or-death situation,” such as after a severe head trauma. Some port-of-entry officials cited instances when they turned aliens away because they believed that the medical conditions were not urgent and medical facilities in Mexico could provide treatment. When humanitarian paroles for urgent medical reasons are granted, a formal record of arrival is completed to document the aliens’ entry into the United States. Sometimes, port-of-entry officials know in advance that an injured alien will be arriving, and the form is completed beforehand. If medical urgency prevents completion of this form at the port of entry, an official will go to the hospital to obtain the necessary information. The length of time a paroled alien is allowed to remain in the United States is determined case by case but cannot exceed 1 year. Like all other aliens who enter for a temporary period, a paroled alien is expected to leave when his or her authorized stay ends. Office of Field Operations data show that from June 1 through October 31, 2003, officials at 7 of the 24 ports of entry along the U.S.-Mexican border granted a total of 54 humanitarian paroles for urgent medical reasons. Almost two-thirds (35) of these paroles were granted at the Columbus port of entry in New Mexico, and the paroled aliens were brought to one local hospital. A Columbus port-of-entry official stated that the limited capability of the nearby medical facility in Mexico contributes to the high number of humanitarian paroles granted for urgent medical reasons at the port. 
The hospital that treated most of the paroled patients reported receiving no payment for any of the 27 patients paroled from June through August 2003 and noted that 4 of these patients were later transferred to other hospitals for further care. The other 19 paroles occurred at three ports of entry in Arizona and three ports of entry in Texas, near small towns straddling the border. Most (17 of 24) of the Southwest border ports of entry reported granting no paroles for urgent medical reasons from June through October 2003. Three of the ports of entry we reviewed, all located near large cities in Mexico, granted no humanitarian paroles for urgent medical reasons during that time. Officials at one of these ports of entry told us that hospital care is available in the Mexican cities across the border, so that Mexican residents need not be treated at U.S. hospitals. Hospital officials in Arizona noted that several Arizona hospitals and the U.S. government have provided funds and equipment to help improve the capabilities of nearby Mexican medical facilities and that these measures helped reduce their burden of cases from Mexico. Finally, although aliens may be granted humanitarian parole for urgent medical reasons, several port-of-entry officials told us that the majority of persons seeking entry into the United States for emergency medical care have proper entry documents. For example, some aliens arriving at U.S. hospitals may be Mexican nationals with border crossing cards, which allow entry into the United States within 25 miles of the border for business or pleasure for up to 72 hours. Another port official reported that many U.S. citizens live in Mexico and sometimes arrive in ambulances to go to U.S. hospitals. 
According to some officials responsible for ports of entry, hospitals may not be fully aware of the immigration status of patients who have crossed the border to obtain emergency medical care; this uncertainty may create the impression that ports are granting more humanitarian paroles for urgent medical reasons than they are. Despite hospitals’ long-standing concern about the costs of treating undocumented aliens, the extent to which these patients affect hospitals’ uncompensated care costs remains unknown. The lack of reliable data on this patient population and lack of proven methods to estimate their numbers make it difficult to determine the extent to which hospitals treat undocumented aliens and the costs of their care. Likewise, with respect to undocumented aliens referred to hospitals but not first taken into custody by the Border Patrol, neither the Border Patrol nor hospitals track their numbers, making it difficult to estimate these patients’ financial impact on hospitals. Until reliable information is available on undocumented aliens and the costs of their care, accurate assessment of their financial effect on hospitals will remain elusive, as will the ability to assess the extent to which federal funding offsets their costs. The availability of new federal funding under the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 may offer an incentive for hospitals serving undocumented aliens to collect more reliable information on the numbers of these patients and the costs of their care. To help ensure that funds appropriated by the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 are not improperly spent, we recommend that the Secretary of Health and Human Services, in establishing a payment process, develop appropriate internal controls to ensure that payments are made to hospitals and other providers only for unreimbursed emergency services for undocumented or certain other aliens as designated in the statute. 
In doing so, the Secretary should develop reporting criteria for providers to use in claiming these funds and periodically test the validity of the data supporting the claims. We provided officials in CMS and Homeland Security an opportunity to comment on a draft of this report. In its comments, CMS concurred with our recommendation that the Secretary develop appropriate internal controls and stated that the agency expects to develop appropriate internal controls regarding funds appropriated by section 1011 of the Medicare Prescription Drug, Improvement, and Modernization Act. The agency said it is currently developing a process for providers to claim these funds and indicated that it would be helpful for GAO to provide insight into the specific internal controls that would be useful in ensuring that claims are paid only for unreimbursed emergency services for undocumented and certain other aliens. In response to CMS’s request, we amended our recommendation to be more specific. CMS also agreed that the new federal funding may offer an incentive for those hospitals incurring significant costs for undocumented aliens to collect more reliable information on the number of undocumented alien patients they treat and the costs of their care, but it also noted that other providers, especially those who do not regularly see undocumented aliens in emergency department settings, may choose to continue to provide uncompensated care to this population without ever trying to document the costs. CMS also provided technical comments, which we incorporated as appropriate. Homeland Security generally agreed with the report’s findings and provided some technical comments regarding parole and the numbers of ports of entry, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from its date. 
We will then make copies available to other interested parties upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions, please contact me at (202) 512-7119. Additional GAO contacts and the names of other staff members who made major contributions to this report are listed in appendix V. To collect information on the extent to which hospitals’ uncompensated care costs are related to treating undocumented aliens, we mailed a questionnaire to a sample of more than 500 hospitals in 10 states—Arizona, California, Florida, Georgia, Illinois, New Jersey, New Mexico, New York, North Carolina, and Texas. We selected the 4 Southwest states—Arizona, California, New Mexico, and Texas—because uncompensated care costs due to treating undocumented aliens have been a long-standing issue for hospitals located in communities near the U.S.-Mexican border. We selected the other 6 states because high estimated numbers of undocumented aliens resided there in 2000, according to the Immigration and Naturalization Service (INS). In all, the 10 states accounted for an estimated 78 percent of the population of undocumented aliens in the United States in 2000. (See table 2.) We sent our survey to a randomly selected stratified sample of 503 of 1,637 short-term, nonfederal, general medical and surgical care hospitals that—according to either the American Hospital Association’s annual survey database, fiscal year 2000, or the Centers for Medicare & Medicaid Services Provider of Service File as of the end of 2000—had an emergency department. Table 3 shows the characteristics of the universe from which the hospitals were sampled. From this universe of hospitals, we sampled 100 percent of the hospitals in Arizona and New Mexico. In the other 8 states, we stratified the sample by state, hospital ownership, and estimates of undocumented aliens by county. 
Our survey included questions about the hospital, such as (1) whether it had an emergency department in fiscal year 2002; (2) the number of staffed beds on the last day of fiscal year 2002; (3) financial information on bad debt and charity care charges, total expenses, gross patient revenue, and other operating revenue; (4) whether the hospital routinely collected Social Security numbers and, for fiscal year 2002, total inpatient days and the number of inpatient days for people without a Social Security number, our proxy for undocumented aliens; and (5) as a means of evaluating the accuracy of the proxy, whether the hospital used a method other than lack of a Social Security number alone to identify undocumented aliens. After speaking with hospital officials, we concluded that although lack of a Social Security number could potentially over- or underestimate the actual population of undocumented aliens treated by a hospital, it might be the least burdensome way for hospitals to provide us with information for our survey that would allow us to attempt to identify care given to undocumented aliens. We included a question on the survey asking hospitals to report the number of inpatient days for patients without a Social Security number. We used this information, along with total inpatient days reported, to calculate the proportion of inpatient days for patients without a Social Security number in order to approximate the proportion of inpatient care provided to undocumented aliens. Although undocumented aliens may first seek care through hospital emergency departments, we focused on inpatient care because hospital officials reported that patient data, including Social Security numbers, are generally more complete for persons admitted as inpatients; persons treated in the emergency department are often released before such information can be collected. 
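The proxy described above reduces to a simple ratio of reported inpatient days. As a minimal sketch of that calculation (the function name and all figures are illustrative, not taken from the survey):

```python
def proxy_share(days_without_ssn: int, total_inpatient_days: int) -> float:
    """Proportion of inpatient days attributable to patients without a
    Social Security number -- the report's proxy for the share of
    inpatient care provided to undocumented aliens."""
    if total_inpatient_days <= 0:
        raise ValueError("total inpatient days must be positive")
    return days_without_ssn / total_inpatient_days

# Illustrative figures only (hypothetical hospital, not survey data):
share = proxy_share(1_200, 20_000)
print(f"{share:.1%}")  # 6.0%
```

As the report notes, this ratio may over- or understate the true share, since lacking a Social Security number is an imperfect marker of immigration status.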
Further, although a large number of patients may be seen in emergency departments, hospital officials reported that the majority of uncompensated care costs are incurred in inpatient settings. We could not establish the accuracy of our proxy before carrying out the survey, so to assess our proxy, we included a survey question on hospitals’ methods for estimating undocumented aliens. We were, however, unable to determine our proxy’s accuracy. Fewer than 5 percent of hospitals responding to the survey reported that they had methods of estimating undocumented aliens other than lack of Social Security number alone. These methods varied among the hospitals and led to estimates inconsistent with those based on lack of a Social Security number. We also pretested our questionnaire in person with officials at six hospitals to determine if it was understandable and if the information was feasible to collect, and we refined the questionnaire as appropriate. We conducted follow-up mailings and telephone calls to nonrespondents. We obtained responses from 351 hospitals, for an overall response rate of about 70 percent. Of the hospitals that returned surveys, 300 provided financial information to calculate uncompensated care levels—defined as uncompensated care as a percentage of total expenses—but only 198 (39 percent of all hospitals surveyed) provided sufficient information to allow us to examine the relationship between hospitals’ uncompensated care levels and the percentage of inpatient days for patients without a Social Security number. We performed checks for obvious errors and inconsistent data but did not independently verify the information hospitals provided in the survey. Table 4 shows financial information for the 300 hospitals that provided sufficient information to calculate uncompensated care levels; this information is not generalizable to the overall population. 
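The uncompensated care level defined above (uncompensated care as a percentage of total expenses) can be sketched in the same way. Treating uncompensated care as the sum of bad debt and charity care charges is an assumption consistent with the survey items described earlier, and the figures are illustrative:

```python
def uncompensated_care_level(bad_debt: float, charity_care: float,
                             total_expenses: float) -> float:
    """Uncompensated care as a percentage of total expenses -- the
    measure the report calls a hospital's uncompensated care level.
    Summing bad debt and charity care is an assumption, not a
    definition taken from the report."""
    if total_expenses <= 0:
        raise ValueError("total expenses must be positive")
    return 100 * (bad_debt + charity_care) / total_expenses

# Illustrative figures only (hypothetical hospital, not survey data):
level = uncompensated_care_level(4_000_000, 2_000_000, 120_000_000)
print(f"{level:.1f}%")  # 5.0%
```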
For the 198 hospitals that provided sufficient information, we examined the variation in uncompensated care levels by percentage of inpatient days attributable to patients without a Social Security number after dividing the distribution of the latter into thirds. Table 5 shows this information for these 198 hospitals; this information is not generalizable to the overall population. Factors other than the percentage of inpatient days attributable to patients without a Social Security number, such as the extent to which hospitals treat uninsured patients (including uninsured patients with a Social Security number), could affect the variation in uncompensated care levels among hospitals. Since a high proportion of hospitals we surveyed did not provide us with information to calculate the percentage of inpatient days attributable to patients without a Social Security number, and we could not validate the accuracy of this proxy, we cannot evaluate either the relationship between the percentage of inpatient days attributable to patients without a Social Security number and hospitals’ uncompensated care levels or the extent to which hospitals’ uncompensated care costs are related to treating undocumented aliens. To determine the availability of federal funding sources to assist hospitals with the costs of treating undocumented aliens, we reviewed relevant literature and legal documents, spoke with officials at the Centers for Medicare & Medicaid Services (CMS), and interviewed state Medicaid and hospital association officials in the same 10 states in which we surveyed hospitals—Arizona, California, Florida, Georgia, Illinois, New Jersey, New Mexico, New York, North Carolina, and Texas. 
Specifically, to assess the availability of Medicaid to cover hospitals’ costs of treating undocumented aliens, we reviewed Medicaid eligibility and Medicaid disproportionate share hospital (DSH) laws and regulations and interviewed state Medicaid officials about Medicaid coverage, eligibility requirements, and DSH programs in their states. We collected data on total state Medicaid expenditures and DSH allotments from CMS and on emergency Medicaid expenditures from state Medicaid officials. We assessed the reliability of the above data by interviewing agency individuals knowledgeable about the data. After reviewing state expenditure and DSH allotment figures for logic and following up where necessary, we determined that these data sources were sufficiently reliable for the purposes of this report. We also reviewed published reports and spoke with state hospital association officials about impediments to obtaining Medicaid coverage for undocumented aliens treated at hospitals. To determine the availability of federal funds allotted to states through the Balanced Budget Act of 1997 (BBA) for emergency services furnished to undocumented aliens, we obtained information on BBA allotments to states and interviewed state Medicaid officials in the seven states in our review that were eligible to receive these funds about how they used the funds. We also reviewed CMS guidance relevant to BBA’s section on emergency medical services for undocumented aliens and interviewed hospital association officials. In addition, we reviewed the provisions in the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 pertaining to payments to providers for treating undocumented and other aliens, and we interviewed CMS officials about their plans to implement these provisions. 
To determine the responsibility of the Department of Homeland Security (Homeland Security) for covering the medical costs of sick or injured aliens encountered by Border Patrol agents, we reviewed relevant laws, regulations, and legal opinions and interviewed Border Patrol officials in headquarters, in the nine sectors along the U.S.-Mexican border, and in the Miami sector. We also interviewed Coast Guard officials about their encounters with sick or injured aliens at sea. We obtained data from the Department of Health and Human Services’ Division of Immigration Health Services on payments for medical claims for aliens in Border Patrol custody. We also obtained and analyzed data from the Border Patrol’s Border Safety Initiative database to determine how many of the suspected undocumented aliens counted as rescues by the Border Patrol were transported to local hospitals. We assessed the reliability of these data by interviewing agency officials knowledgeable about the data, reviewing the data for logic and internal consistency, and following up with officials where necessary. We determined that the data on payments for medical claims for aliens in Border Patrol custody and on suspected undocumented aliens rescued by the Border Patrol were sufficiently reliable for the purposes of this report. To determine the responsibility of Homeland Security for covering the medical costs of aliens seeking humanitarian parole for urgent medical reasons at ports of entry, we interviewed officials in the four Field Operations offices responsible for ports of entry along the U.S.-Mexican border and at five of the ports of entry: Brownsville, Texas; Columbus, New Mexico; Douglas, Arizona; El Paso, Texas; and San Ysidro, California. At the El Paso port of entry, we interviewed officials at the port’s busiest crossing point, Paso Del Norte. We selected these five ports of entry for geographic diversity or because they had granted a large number of paroles. 
We reviewed relevant laws, regulations, and procedures regarding parole authority. Because Homeland Security did not normally collect data on the number of paroles granted specifically for urgent medical treatment, we requested that the Office of Field Operations record the number of such paroles granted at ports of entry along the U.S.-Mexican border. In addition to those named above, Carla D. Brown, Ellen W. Chu, Jennifer Cohen, Michael P. Dino, Jennifer Major, Kevin Milne, Dae Park, Karlin Richardson, Sandra Sokol, Adrienne Spahr, Leslie Spangler, and Marie C. Stetser made key contributions to this report.
About 7 million undocumented aliens lived in the United States in 2000, according to Immigration and Naturalization Service estimates. Hospitals in states where many of them live report that treating them can be a financial burden. GAO was asked to examine the relationship between treating undocumented aliens and hospitals' costs not paid by patients or insurance. GAO was also asked to examine federal funding available to help hospitals offset costs of treating undocumented aliens and the responsibility of the Department of Homeland Security (Homeland Security) for covering medical expenses of sick or injured aliens encountered by Border Patrol and U.S. port-of-entry officials. To conduct this work, GAO surveyed 503 hospitals and interviewed Medicaid and hospital officials in 10 states. GAO also interviewed and obtained data from Homeland Security officials. Hospitals generally do not collect information on their patients' immigration status, and as a result, an accurate assessment of undocumented aliens' impact on hospitals' uncompensated care costs--those not paid by patients or by insurance--remains elusive. GAO attempted to examine the relationship between uncompensated care and undocumented aliens by surveying hospitals, but because of a low response rate to key survey questions and challenges in estimating the proportion of hospital care provided to undocumented aliens, GAO could not determine the effect of undocumented aliens on hospitals' uncompensated care costs. Federal funding has been available from several sources to help hospitals cover the costs of care for undocumented aliens. The sources include Medicaid coverage for emergency medical services for eligible undocumented aliens, supplemental Medicaid payments to hospitals treating a disproportionate share of low-income patients, and funds provided to 12 states by the Balanced Budget Act of 1997. 
In addition, the recently enacted Medicare Prescription Drug, Improvement, and Modernization Act of 2003 appropriated $1 billion over fiscal years 2005 through 2008 for payments to hospitals and other providers for emergency services provided to undocumented and certain other aliens. By September 1, 2004, the Secretary of Health and Human Services must establish a process for hospitals and other providers to request payments under the statute. Border Patrol and U.S. port-of-entry officials encounter aliens needing medical attention under different circumstances, but in most situations, Homeland Security is not responsible for aliens' hospital costs. The agency may cover medical expenses only for those people in its custody, but border officials reported that sick or injured people they encounter generally receive medical attention without being taken into custody.
DRO is responsible for the detention of aliens in its custody pending civil administrative immigration removal proceedings. According to ICE, as of the week ending December 31, 2006, there were 27,607 aliens detained in ICE custody at 330 adult detention facilities nationwide. The majority of ICE’s alien detainee population is housed with general population inmates in about 300 state and local jails that have intergovernmental service agreements with ICE. Alien detainees are also housed in 8 ICE-owned service processing centers and 6 contract detention facilities operated by private contractors specifically for ICE alien detainees. In addition to these adult detention facilities, ICE contracts for the operation of 19 juvenile and 3 family detention facilities. According to ICE officials, the agency maintains custody of one of the most highly transient and diverse populations of any correctional or detention system in the world. This diverse population includes individuals from many countries, with varying security risks (criminal and noncriminal) and varying medical conditions, and includes males, females, and families of every age group. As of January 2007, the countries with the largest numbers of alien detainees in ICE custody were Mexico, El Salvador, Guatemala, Honduras, the Dominican Republic, and Haiti. See appendix II for additional information on alien detention population statistics. ICE’s National Detention Standards are derived from the American Correctional Association Third Edition, Standards for Adult Local Detention Facilities. ICE officials said that they were in the process of revising the National Detention Standards based on the American Correctional Association Fourth Edition, Performance-Based Standards for Adult Local Detention Facilities. In most cases ICE standards mirror American Correctional Association (ACA) standards. 
However, in some cases ICE standards exceed ACA standards or provide more specificity to address the unique needs of alien detainees. As an example, ICE standards specify that a detainee should receive a tuberculosis test upon intake, while ACA standards do not. Further, ICE standards require that a detailed list of immigration-related legal reference materials be made available in law libraries, while ACA standards do not specify the type and nature of legal materials to be made available. Also exceeding ACA standards, ICE standards specify that, when possible, use of force on detainees be videotaped and that certain informational materials provided to detainees, such as fire evacuation instructions and detainee handbooks, be available in Spanish. The ICE National Detention Standards apply to ICE-owned detention facilities and to state and local facilities that house alien detainees. The standards are not codified in law and thus represent guidelines rather than binding regulations. According to ICE officials, ICE has never technically terminated an agreement for noncompliance with its detention standards. However, under ICE’s Detention Management Control Program policies and procedures, ICE may terminate its use of a detention facility and remove detainees, or withhold payment from a facility, for lack of compliance with the standards. Separate standards addressing the treatment of juvenile aliens are used for juvenile secure detention and shelter facilities. Both adult and juvenile detention facility standards are used at ICE’s family shelter facilities because of the unique classification of family shelters. There are 38 ICE National Detention Standards for adult detention facilities: 18 relate to detainee services, 4 to detainee health services, and 16 to security and control. More detailed information on each of these standards is provided in appendix IV. 
According to ICE officials, in addition to being required to comply with ICE’s National Detention Standards, some ICE detention facilities are accredited by ACA. For example, seven of eight ICE service processing centers and five of six contract detention facilities are accredited by ACA. In addition, according to ICE officials, some of the over 300 intergovernmental service agreement facilities are also accredited by ACA. Moreover, some facilities are also accredited by the Joint Commission on Accreditation of Health Care Organizations (JCAHO), the predominant standards-setting and accrediting body in health care, and the National Commission on Correctional Health Care (NCCHC), which offers a health services accreditation program to determine whether correctional institutions meet its standards in their provision of health services.

Our field observations at 23 alien detention facilities showed systemic telephone system problems at 16 of the 17 facilities that use the pro bono telephone system, but no pattern of noncompliance for the other standards we reviewed. Problems with the pro bono telephone system restrict detainees’ ability to reach their consulates, nongovernmental organizations, pro bono legal assistance providers, and the OIG complaint hotline. ICE and facility officials told us that they did not know the specific nature and extent of the problem prior to our review. ICE’s lack of awareness and insufficient internal controls appear to have perpetuated telephone system problems for several years. Similarly, the facilities lacked internal controls sufficient to ensure that posted phone numbers were kept up to date and accurate. For instance, officials at one facility told us that the only way to know if a posted phone list was out of date or inaccurate was if a detainee complained. In addition to telephone problems, we also observed a lack of compliance with one or more aspects of other individual detention standards at 9 of the 23 sites we visited. 
These instances of noncompliance varied across the facilities we visited and, unlike the telephone system problems, did not show a persistent pattern. Other examples of deficiencies included food service issues such as kitchen cleanliness and menu rotation, failure to follow medical care policy at intake, hold room policy violations such as lack of logbooks and overcrowding, and potential use of force violations involving dogs and/or Tasers, since some facilities either authorized Tasers in policy or had officials who stated that these methods were used. Finally, we also observed detainees being housed in numbers that exceeded the rated capacity at 4 of the 23 sites we visited. ICE alien detention standards specify that detainees be provided the ability to make telephone calls, at no charge to themselves or to the recipients, to their respective consulates, designated pro bono legal service providers, and the OIG complaint hotline, among others. The pro bono telephone system is to ensure that detainees have access to authorized legal representatives, that aliens who wish to retain counsel are not prevented from doing so, and that detainees can contact their home country consulates to seek assistance. The pro bono telephone system is also to ensure that detainees can voice complaints regarding their conditions of confinement to organizations with responsibility for investigating or monitoring detainee treatment. To meet the telephone access standard, ICE contracted with Public Communications Services (PCS) to provide a pro bono telephone system that enables detainees to contact the aforementioned parties at no charge. The term of the contract is January 22, 2004, to January 21, 2009, and consists of a 12-month base period and four 12-month options. Of the 23 detention facilities we visited, 17 utilize the PCS detainee telephone system. 
We performed telephone tests at all 17 of these facilities. Our tests consisted of dialing the OIG’s complaint line, consulates, nongovernmental organizations, and pro bono legal service providers from the numbers posted next to the telephones to determine if we could get a connection. All of the phones were in good working order, and we observed that detainees could successfully place personally funded phone calls using calling cards purchased from the facility. However, we often could not connect to the telephone numbers for the OIG, consulates, and pro bono legal providers. At 16 of the 17 detention facilities where we performed test calls through the pro bono telephone system, we encountered numerous failures, ranging from incomplete and inaccurate phone number postings to a variety of technical system failures that would not permit the caller to make the desired connection. For example, during our facility visits, we observed posters advertising the OIG complaint hotline 1-800 number. However, we found that the OIG number was blocked or otherwise restricted at 12 of the facilities that we tested. Typical problems that we encountered when dialing the OIG’s complaint line included voice prompts stating that “this number is restricted,” “this is an invalid number,” or “a call to this number has been blocked by the telephone service provider.” Also, at 14 of the facilities using the PCS detainee telephone system, we could not complete phone connections to some consulates; we received messages such as “all circuits are busy, call back at a later time” and “this number is restricted.” At the Pamunkey Regional Jail in Virginia, we asked the on-site Systems Administrator to call PCS to determine why its pro bono telephone system was not working properly. The PCS service department informed him that there was a problem with the PCS system and that it appeared to affect all facilities. 
Figure 1 shows a telephone with posted pro bono numbers at the Denver Contract Detention Facility, where we identified telephone system problems during our testing. Further, we found the PCS detainee telephone system to be cumbersome and complicated to use. For example, at Pamunkey Regional Jail, the automated system required eight different actions by the user to place a call. One of these actions added further confusion by instructing a detainee to select “collect call” in order to make a pro bono telephone system call. Similarly, at the Northwest Detention Center, detainees were offered only two voice prompt options when attempting to place a call using the pro bono telephone system: (1) to place a “collect call” and (2) to place a “credit card call.” In some cases, we found that the pro bono telephone system requires detainees to input their Alien Registration Number (commonly referred to as an “A” number) as part of the process for making a pro bono telephone call. In at least one facility, however, this step alone caused enough confusion among detainees to prevent them from making a successful call. At Pamunkey Regional Jail, we asked a group of about 40 detainees in one dorm if any of them could call any of the pro bono service numbers posted on the wall. We found that most of the detainees were not familiar with the “A” numbers that would be required. In one case, when we asked a detainee for his “A” number, he referred to an unrelated number labeled “PIN” on his jail inmate wrist band. This may have been because the detainee handbook at Pamunkey refers to an “A” number as a “PIN.” This problem was also recorded during a visit by UNHCR representatives to the same facility in 2005. Nevertheless, we obtained an actual “A” number from personal paperwork provided to us by one of the detainees. 
Using his legitimate “A” number, and working our way through numerous voice prompts, we could not make a connection using any of the pro bono legal service or consulate numbers posted at the Pamunkey Regional Jail. The facility compliance officer, the on-site phone technician, and the ICE phone system contract technical representative who accompanied us at Pamunkey could offer no explanation for the pro bono telephone system failure. The telephone technician told us that he frequently deals with problems with the detainee phones at Pamunkey. At 16 of the 17 detention facilities we visited with the ICE pro bono telephone system, we found insufficient internal controls to ensure that telephone number postings are kept up to date and that the pro bono telephone system is functioning properly. For example, the phone number listings for pro bono legal providers and consulates at the Elizabeth Detention Facility were out of date and inaccurate. When we visited this facility in September 2006, the list of consulate numbers was 6 years old (dated 2000). We called 30 of the consulate numbers on the posted listing and determined that 9 of the numbers were incorrect. When we asked the on-site ICE officer in charge why the consulate numbers were not up to date, he said he had no way of knowing if the phone numbers posted for the detainees were out of date unless someone complained. Additional examples include the Pamunkey and Hampton Roads Regional Jails, where consulate numbers were not listed for two countries despite the presence of detainees of those nationalities. Inaccurate or missing telephone numbers may preclude detainees from reaching consulates, pro bono legal providers, and the OIG complaint hotline, as required under ICE’s National Detention Standards. 
Officials we interviewed at the Department of Justice’s Executive Office for Immigration Review (EOIR) stated that their organization updates all local pro bono legal services phone numbers every 3 months and provides these updated phone numbers to all immigration courts. Some of these immigration courts are located within the detention facilities themselves. The EOIR officials stated that it is the responsibility of ICE staff to ensure that copies of updated pro bono phone lists are regularly picked up at the immigration courts and posted in the detainee dorms. Moreover, we did not have any problem obtaining current phone number lists for local pro bono legal services from the EOIR Web site. Current consulate phone numbers are also available on the Department of State’s Web site. Despite the availability of these numbers, ICE staff did not have procedures to ensure that the updated numbers were posted and provided to the phone system contractor to be programmed into the system on a regular basis. We found that most facilities that we visited were not aware that the pro bono telephone system was not operating properly because there were no internal control procedures for regularly testing the system. At two of the facilities we visited, the San Diego Correctional Facility and the Denver Contract Detention Facility, ICE’s recent compliance inspection reports cited facility officials for failing to properly monitor the pro bono phone system. When we tested the pro bono telephone system at the T. Don Hutto Family Shelter and found that we were unable to make most connections successfully, the facility officials established a new logbook and required the officer on duty in the detainee dorms to test the pro bono telephone system three times daily (8 a.m., 12 noon, and 4:30 p.m.) and record the results of these tests (satisfactory or unsatisfactory). We are not aware of any other immediate corrective action taken by other facilities that we visited. 
At the Broward Transitional Center, facility officials stated that if a detainee had difficulty connecting through the pro bono call system, they provided an alternative means for the detainee to direct-dial these calls on a facility phone outside of the housing unit. Given the poor performance of the pro bono telephone system, it is important for facilities to post instructions for such alternative means for detainees to complete calls in the event that the ICE pro bono telephone system is not functioning properly. We reviewed the monthly pro bono telephone system performance reports provided to ICE by the pro bono phone system contractor for the last 5 years. The overall data show that over the 5-year period, 41 percent of calls placed through the system were not successful. This was consistent with the problems we found during our site visits. These high failure rates indicated a systemic problem with the detainee pro bono telephone system. Figure 2 shows the monthly success rate for telephone calls placed through the pro bono telephone system from November 2005 to November 2006. Over this period, the rate of successful connections was never above 74 percent. Further, during the period between May and July 2006, on average, 60 percent of all attempted calls by detainees were not completed. The contractor-provided performance data also contained information on systemwide facility success rates for completed calls. When we reviewed these data, we found that individual facilities showed a similar trend of poor performance in completing calls when detainees attempted to use the pro bono call system. Our discussions with ICE contract administration officials, including contract officials in the Office of Acquisition and the Contracting Officer Technical Representative (COTR) in DRO, indicated that little oversight of the telephone contract was being performed. 
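The success-rate figures above reduce to simple arithmetic: completed calls divided by attempted calls, per month and in aggregate. The sketch below illustrates that computation; the monthly call counts are hypothetical, chosen only to reproduce the roughly 40 percent completion rate reported for May through July 2006, and do not reflect the contractor's actual report format or data.

```python
# Hypothetical monthly call counts in the shape of a contractor
# performance report (illustrative only; not actual PCS data).
monthly_reports = {
    "2006-05": {"attempted": 52_000, "completed": 20_800},
    "2006-06": {"attempted": 48_000, "completed": 19_200},
    "2006-07": {"attempted": 50_000, "completed": 20_000},
}

def success_rate(report):
    """Return the share of attempted calls that connected, as a percentage."""
    return 100.0 * report["completed"] / report["attempted"]

for month in sorted(monthly_reports):
    print(f"{month}: {success_rate(monthly_reports[month]):.1f}% of calls completed")

# Aggregate success rate across the whole period.
attempted = sum(r["attempted"] for r in monthly_reports.values())
completed = sum(r["completed"] for r in monthly_reports.values())
print(f"Overall: {100.0 * completed / attempted:.1f}% completed")
```

With these illustrative figures, each month and the aggregate come out to a 40 percent completion rate, i.e., 60 percent of attempted calls failing, matching the worst three-month stretch described above.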
The ICE contracting officer assigned responsibility for the PCS contract told us that the contract was operating essentially on “autopilot,” in that limited oversight of the contract was being performed. The contracting official also stated that he had not been informed of any performance-related contract issues by the ICE COTR, who he said was responsible for monitoring the technical performance of the contractor. When we interviewed the COTR, he told us that he was assigned contract-related duties on a collateral basis while serving as a full-time compliance reviewer in the Detention Standards Compliance unit of DRO. A senior official in ICE’s Office of Acquisition said that the office faced significant challenges relating to turnover, understaffing, and loss of institutional knowledge regarding contract oversight and management. After several discussions with ICE contract and detention compliance officials concerning our findings of systemic problems with the pro bono calling system, the officials acknowledged that ICE’s present system for monitoring detainee telephone system contractor performance lacks internal controls and that greater contractor oversight is required. According to ICE officials, their intent is to issue a new telephone contract solicitation in the coming months. This contract for delivering pro bono telephone services is projected to be awarded by December 31, 2007. We found that detainee complaints about the high cost of phone calls were common. Under ICE’s pro bono telephone contract, PCS gains exclusive rights to provide paid telephone access to detainees at Service Processing Centers (SPC) and Contract Detention Facilities (CDF) through the purchase of calling cards in denominations of $5, $10, or $20. The PCS rates range from $0.65 to $0.94 per minute for international calls and $0.06 to $0.17 per minute for domestic calls. 
Additionally, PCS contracts independently of ICE to provide paid telephone access at some intergovernmental service agreement (IGSA) facilities, which determine their own rates apart from ICE. For instance, at the Pamunkey Regional Jail, an IGSA facility, detainees are charged $3.95 to connect and $0.89 per minute for long-distance calls. Under ICE’s agreement with PCS, PCS provides the pro bono platform to any IGSA facility that chooses to adopt it to meet the telephone access standard. ICE officials stated that approximately 200 facilities currently use the PCS pro bono telephone system. They also stated that some detention facilities have revenue-sharing agreements with PCS for a portion of the revenue resulting from the sale of the calling cards. ICE officials provided some information at the end of our review indicating that telephone commissions at some detention facilities range from 2 percent to 10 percent of calling card sales. According to ICE, revenue sharing is not a part of its contract with PCS; however, PCS has independently negotiated agreements with detention facilities to distribute phone cards to detainees, collect payments, and perform related services. We were unable to examine the full extent of these contractual agreements across all ICE detention facilities. However, on June 1, 2007, the ICE Assistant Secretary requested a DHS Inspector General audit of the ICE pro bono telephone system and contract, to include activity relevant to the sale and use of calling cards. Our review of correctional facility literature indicated that commissions resulting from telephone card sales can be as high as 20 percent to 60 percent. In addition to reviewing compliance with the telephone access standard, we focused on seven other detention standards: medical care, hold rooms, use of force, food service, recreation, access to legal materials, and detainee grievance procedures. 
While we found deficiencies regarding these other standards, unlike the telephone system problems, these deficiencies did not show a systemic pattern of noncompliance from facility to facility. ICE standards state that detainees are to receive an initial medical screening immediately upon admission and a full medical assessment within 14 days. The policy also states that a health care specialist shall determine needed medical treatment. Medical service providers used by facilities include general medical, dental, and mental health care providers licensed by state and local authorities. Some medical services are provided by the U.S. Public Health Service (PHS), while other medical service providers work on a contractual basis. The facilities we visited ranged from small clinics with contract staff to facilities with on-site medical staff and diagnostic equipment such as X-ray machines. According to ICE, when outside medical care appears warranted, ICE makes the determination through a Managed Care Coordinator provided by PHS. Officials at some facilities told us that the special medical and mental health needs of detainees can be challenging. Some also cited difficulties in obtaining approval for outside medical and mental health care as a further problem in caring for detainees. We observed deficiencies in ICE’s Medical Care Standards at three facilities we visited: one adult detention facility, one family detention facility, and one juvenile detention facility. Specifically, at the San Diego Correctional Facility, ICE reviewers we accompanied cited PHS staff for failing to administer the mandatory 14-day physical exam to approximately 260 detainees. PHS staff said the problem was due to inadequate training on the medical records system and technical errors in the records system. 
At a family detention center, Casa de San Juan Family Shelter, the facility staff did not administer medical screenings immediately upon admission, as required in ICE medical care standards. Finally, at the Cowlitz County Juvenile Detention Center, no medical screening was performed at admission and first aid kits were not available, as required. Figures 3 and 4 show examples of medical facilities at the Denver Contract Detention Facility and Berks County Prison, which provide on-site medical care. Hold rooms are used for temporary detention of individuals awaiting removal, transfer, medical treatment, intrafacility movement, or other processing into or out of the facility. ICE standards specify that detainees not be held longer than 12 hours in a hold room, and that logs be maintained documenting who is being held in hold rooms, how long they have been held, and what food and services have been provided to them. Deficiencies were observed in compliance with hold room standards at three detention facilities we visited. As we accompanied ICE reviewers at the San Diego Correctional Facility, the reviewers cited the facility for placing detainees in holding cells for longer than the 12-hour limit. The San Diego facility was also cited for failing to maintain an accurate hold room log with custodial information about detainees arriving at and departing from the facility. During our visit to the Denver Contract Detention Facility, we observed that the number of detainees in the hold rooms exceeded rated capacity and that the logbook was not properly maintained for individuals housed in the hold rooms. As a result, officers on duty could not determine how many detainees were being kept in hold rooms and meals were not recorded. 
In a compliance review closeout meeting with Denver facility officials, ICE reviewers identified the following deficiencies in the facility’s compliance with ICE’s hold room standards: hold rooms were over capacity; the detainee hold room logbook was incomplete and unreadable; and unsupervised detainees had placed wads of paper in hold room air vents. We also observed the absence of a hold room log at the North Las Vegas Detention Center. ICE’s use of force standard specifies that facilities have policies on the use of force that require documentation of the criteria for using force, the filing of incident reports, and review and consultation with medical staff before and after the use of force. When possible, service processing centers and contract detention facilities are required to videotape situations where it is anticipated that force may be used. All facilities that we visited, with the exception of Casa de San Juan Family Shelter, had policies on the appropriate criteria for the use of force on inmates. ICE policy on the use of force also prohibits facilities from using Tasers at any time and from using dogs except when searching for contraband. However, we observed the potential for the use of Tasers and/or dogs at four of the adult detention facilities that we visited. At the Wakulla County Sheriff’s Office, we observed officers armed with Tasers. We interviewed one of these officers, who was not aware of ICE’s policy forbidding the use of Tasers on detainees. Therefore, if an incident occurred in which he felt use of his Taser was warranted, it is unlikely that he would distinguish between an alien detainee and a jail inmate. At the North Las Vegas Detention Facility, officers told us that they use Tasers, and the use of Tasers was noted in the facility’s use of force continuum. At Pine Prairie Correctional Center and York County Prison, the use of Tasers is authorized in policy. 
According to officials at the North Las Vegas detention facility, dogs may potentially be used in a “show of force” situation but would not actually be deployed as a “use of force” on detainees. ICE standards require that all facilities offer rotating 5-week menus and provide special medical and religious meals when approved by medical staff or a chaplain, and that menus be reviewed by a nutritionist to ensure adequate caloric and nutritional intake. Further, ICE food service standards require that facility food service employees be instructed in food safety and that the facility be inspected by local, state, or ICE authorities. We observed deficiencies regarding the ICE Food Service Standard at two adult detention facilities and one juvenile detention facility that we visited. At the Denver Contract Detention Facility, an adult facility, ICE reviewers cited the facility for lack of cleanliness in its food service preparation and for offering a 4-week rotating menu instead of the required 5-week menu. The Denver Contract Detention Facility received a deficient rating in sanitation from ICE reviewers in October 2006 because the kitchen area was not properly cleaned between meals. Figure 5 shows an unclean kitchen grill at the Denver Contract Detention Facility. At the San Diego Correctional Facility, an adult facility, PHS inspectors we accompanied reviewed the files of the detainee food service workers and found that two of the detainee workers had not been cleared to work in the kitchen. These workers were immediately removed from food service duty until they received the proper medical and security clearances. At the Cowlitz County Juvenile Detention Center, juveniles received only one hot meal per day rather than the two hot meals per day required by the ICE Juvenile Food Standard. 
Some detainees we spoke with expressed displeasure with the food, frequently citing that what they were served was not what they were accustomed to, that meals were served too early in the morning, or that they did not have sufficient time to eat their meals. ICE detention standards state that detainees are to be allowed at least 5 hours of recreation per week. At the facilities we visited, common outdoor recreational activities included basketball and volleyball, and common indoor recreational activities included board games and television. The physical facilities for recreational activities varied, particularly the outdoor recreation facilities. ICE detention standards do not specify that outdoor recreation take place physically outside the detention facility. If only indoor recreation is available, detainees are to have access for at least 1 hour each day and are to have access to natural light. Some facilities we visited provided recreation through indoor areas with natural sunlight. For example, figures 6 and 7 show indoor/outdoor recreational areas with natural sunlight and fresh air ventilation for detainees at the Hampton Roads Regional Jail and the Denver Contract Detention Facility. Each of the facilities we visited met the 5-hour-per-week recreation standard. ICE’s recreation standard states that if outdoor recreation is available at the facility, each detainee shall have access for at least 1 hour daily, at a reasonable time of day, 5 days a week, weather permitting. However, the Wakulla County Sheriff’s Office detainee handbook stated that detainees were allowed 3 hours of outdoor recreation per week. In commenting on our report, DHS stated that, notwithstanding the handbook, detainees at Wakulla receive outdoor recreation 5 days a week for a 1-hour period each day. 
According to the ICE legal access standard, which applies only to adult and family detention facilities, detainees shall be permitted access to a law library for at least 5 hours per week, be furnished legal materials, and be provided materials to facilitate their legal research and writing. Although not required, 18 of the 21 adult and family detention facilities we visited provided detainees with at least one computer and legal software as an option for conducting research on immigration law and cases. Detainees at most facilities that we visited generally had access to legal materials. However, we observed a law library deficiency at the North Las Vegas Detention Center, where certain detainees did not have direct access to the law library because of its location: low-risk detainees would have needed to pass through a dorm that housed high-risk detainees, which would violate ICE’s policies against commingling of risk levels. As a result, some detainees were required to submit a research plan to a designated detainee law clerk, who researches the information on their behalf. We discussed this policy with facility officials, and they said that the current system was adopted to prevent commingling among detainees of different risk levels and to reduce overcrowding within the law library. In June 2007, ICE officials stated that all detainees at this facility have access to computers loaded with a legal research database, which according to an ICE law librarian meets ICE’s requirements to provide legal research material. However, at the time of our visit to the North Las Vegas Detention Center, facility officials told us that they were still in the process of obtaining a mobile law library cart with a computer loaded with legal research software, and the cart/computer system was not yet available to the detainees. 
Figure 8 shows the law library at the San Diego Correctional Facility. ICE’s detainee grievance standard states that facilities shall establish and implement procedures for informal and formal resolution of detainee grievances. The standard advocates resolving detainee grievances informally before resorting to formal grievance processes. The formal grievance process permits detainees to file written grievances with the designated grievance officer, generally within 5 days of the event or of the unsuccessful resolution of an informal grievance. The standard also states that detainees must have the opportunity to file a complaint directly with the OIG, which we discuss later in the report, and requires facilities to maintain a grievance log and outline grievance procedures in the detainee handbook. Our review of available grievance data obtained from facilities and discussions with facility management showed that the grievances at the facilities we visited typically concerned the lack of timely response to requests for medical treatment, missing property, high commissary prices, poor food quality and/or insufficient food quantity, high telephone costs, problems with telephones, and detention case management issues. Four of the 23 facilities we visited did not comply with all aspects of ICE’s detainee grievance standards. Specifically, Casa de San Juan Family Shelter did not provide a handbook, the Cowlitz County Juvenile Detention Center did not include grievance procedures in its handbook, the Wakulla County Sheriff’s Office did not have a grievance log, and the Elizabeth Detention Center did not record in its facility files all of the grievances that we observed. At 4 of the 23 detention facilities we visited, we observed detainees sleeping in portable beds on the floor between standard beds and/or sleeping three persons to a two-person cell. 
These 4 detention facilities were the Krome Service Processing Center, the Denver Contract Detention Facility, the San Pedro Service Processing Center, and the San Diego Correctional Facility. At the time of our visit, the Krome Service Processing Center in Florida had a population of 750 detainees with a rated capacity of 572 detainees. Officials told us that the facility's population had been as high as 1,000 detainees just 1 week prior to our visit. An official at that facility expressed concern about the limited amount of unencumbered space at the facility. Figure 9 shows, on the left side of the picture, the portable beds used to accommodate excess population at the Krome Service Processing Center. At the San Diego Correctional Facility, we observed that detainees were "triple-bunked," that is, three detainees in a cell built for two. For example, we counted 110 women housed in a dorm designed to house only 68 detainees. ICE and facility officials stated that overcrowding was a potential security and safety issue, and this concern was noted in ICE's inspection report. The officials later informed us that they had developed a plan with recommendations to address overcrowding at the San Diego facility. According to these officials, they had submitted the plan to ICE headquarters officials for their approval. In January 2007, we contacted ICE officials for an update on the overcapacity issues at the San Diego Correctional Facility, and officials said that ICE had reduced the detainee population and was no longer triple-bunking detainees. We requested documentation in support of the San Diego facility's new policy on overcrowding, but ICE said that it could not respond to our request due to pending litigation involving the San Diego facility. During our October 2006 visit to the Denver Contract Detention Facility, we also observed that detainees were sleeping in portable beds placed in the aisles between standard beds.
The ICE Denver Field Office Director said that his field office has been requesting additional detention bed space within the region for some time and that his office considers overcrowding to be an issue of concern. He said that the portable beds that we observed are a measure to address overcapacity and that the Denver Contract Detention Facility needs to be expanded. Figure 10 shows the use of portable beds and overcapacity conditions at the Denver Contract Detention Facility. In addition to our observations, UNHCR officials, who monitor conditions of confinement at alien detention facilities, told us that they had observed overcrowded conditions at two facilities they visited in 2006: the South Texas Detention Complex in Pearsall, Texas, and the Aguadilla Service Processing Center in Puerto Rico. ICE annual inspections of detention facilities are generally conducted on time and, with the exception of the pro bono telephone system, have identified deficiencies for corrective action. Weaknesses that we found in ICE's compliance review process resulted in ICE's failure to identify telephone system problems at many facilities we visited. Specifically, ICE's Detention Inspection Worksheet used by reviewers does not require that a reviewer check that detainees are able to make successful connections through the pro bono telephone system. Further, there was variation in how ICE reviewers addressed and reported telephone system problems. Moreover, in at least one case, the Wakulla County Sheriff's Office, where phone system problems were identified by ICE reviewers, we observed the same problems nearly a year after the facility's telephone standard compliance had been rated as "at risk." We reviewed the most recently available ICE annual inspection reports for 20 of the 23 detention facilities that we visited. The 20 inspection reports showed that ICE reviewers had identified a total of 59 deficiencies.
Many of the types of deficiencies noted in the ICE inspection reports were similar to those that we observed. For example, deficiencies included issues concerning staff-detainee communication, detainee transfers, access to legal materials, admission and release, recreation, food service, medical care, telephone access, special management units, tool control, the detainee classification system, and the performance of security inspections. Additional information on these standards is included in appendix IV. The ICE inspection reports are to be forwarded to the cognizant ICE field office and the facility that was reviewed. For deficiencies that could take longer than 45 days to correct, facility management is to file a plan of action documenting how the deficiency will be addressed. According to ICE policy, all 330 adult detention facilities, as well as the 19 juvenile and 3 family detention facilities, are required to be inspected at 12-month intervals to determine whether they are in compliance with detention standards and to take corrective actions if necessary. As of November 30, 2006, according to ICE data, ICE had reviewed approximately 90 percent of detention facilities within the prescribed 12-month interval. To perform these compliance reviews, ICE headquarters has a Detention Inspection Unit consisting of a Unit Chief of Compliance, six staff officers, three support staff, and a private contractor consultant. In addition, 298 ICE field staff serve as detention compliance reviewers on a collateral basis. According to the Detention Management Control Program Policy, reviewers are provided written guidance for conducting compliance inspections. Subsequent to each annual inspection, a compliance rating report is to be prepared and sent to the Director of the Office of Detention and Removal or his representative within 14 days. The Director of the Office of Detention and Removal has 21 days to transmit the report to the field office directors and affected suboffices.
Facilities receive one of five final ratings in their compliance report: superior, good, acceptable, deficient, or at risk. ICE officials reported that as of June 1, 2007, 16 facilities were rated "superior," 60 facilities were rated "good," 190 facilities were rated "acceptable," 4 facilities were rated "deficient," and no facilities were rated "at risk." ICE officials stated that this information reflects completed reviews, and some reviews are currently in process and pending completion. Therefore, ICE could not provide information on the most current ratings for some facilities. The telephone access section of the review checklist guide that ICE teams use to conduct their annual detention facility compliance reviews only requires that the reviewer determine that detainees have access to operable phones to make calls. This is an example of an insufficient internal control because the checklist guide does not require the ICE review teams to check that detainees are able to make successful connections through the pro bono telephone system. During our visits to detention facilities, we found that the phones were fully functional for pay calls but connections could not always be completed using the pro bono call system. Because inspectors did not test the pro bono telephone system during the review process to determine whether a call could actually be completed using that system, facilities could be, and in some instances were, certified as being in compliance with ICE's telephone access standard when in reality the pro bono call system was not functioning and detainees were not able to make connections. For example, at one facility where we had determined that the pro bono telephone system would not allow us to make connections with consulates, the OIG complaint hotline, or local pro bono legal services, the senior ICE inspector on site considered that the phone access standard had been met because he had observed detainees talking on the phones.
In this instance, the ICE reviewer found that the facility was in compliance with the telephone access standard without actually verifying that a successful connection could be made through the pro bono telephone system. As a result, it is possible that many phone system problems had not been identified by ICE reviewers. As discussed above, we found telephone access compliance deficiencies at 16 of the 17 detention facilities we visited that utilize the pro bono telephone system. However, the most recently available ICE inspection reports for these same facilities disclosed phone problems at only 5 of the 16 facilities. Of these 5 facilities, 3 were given a rating of "deficient" or "at risk" for compliance with the telephone access standard, a rating that required the facility to submit a plan of action to resolve the deficiency. For the other two facilities, the problems were listed as a "concern" rather than a deficiency and, as such, did not require a plan of action. The examples below illustrate the variations in how the ICE reviewer in charge reported telephone compliance problems. Elizabeth Detention Facility: At this facility, our analysts accompanied the ICE team during its annual compliance inspection. Although we advised the ICE team, ICE facility staff, and contractor management that the phone listings were out of date and the pro bono telephone system was not operating properly, ICE's final compliance report only addressed problems with outdated phone numbers and not the larger concern of telephone system problems. San Diego Correctional Facility: At San Diego, we also accompanied ICE reviewers who noted pro bono telephone system problems, including inaccurate and outdated phone lists and detainees being unable to make successful connections through the pro bono telephone system.
In this case, the ICE reviewers detected the full range of telephone problems and reflected them in their final report, subsequently requiring the facility to file a plan of action. Wakulla County Sheriff's Office: We visited this facility in October 2006 and observed a full range of pro bono telephone system problems, including an inability to connect to consulates. Some of these problems were also cited during ICE's 2005 compliance review at this facility. In this review, ICE rated the Telephone Access Standard at this facility as "at risk," requiring that a plan of action be filed within 30 days and that noted deficiencies be addressed within 90 days. Despite these requirements, no corrective action was evident during our visit nearly a year later. According to ICE officials, the ICE Office of Professional Responsibility is creating a Detention Facilities Inspection Group (DFIG) within its Management Inspections Unit to independently validate detention inspections conducted by DRO. ICE officials stated that DFIG will perform quality assurance over the review process, ensure consistency in application of detention standards, and verify corrective action. According to these officials, experienced staff were assigned from other ICE components to this unit in February 2007. In addition to the detainee grievance procedures at the detention facilities, external complaints may be filed by detainees or their representatives with several governmental and nongovernmental organizations, as shown in figure 11. The primary mechanism for detainees to file external complaints is directly with the OIG, either in writing or by phone using the OIG complaint hotline. Detainees may also file complaints with the DHS Office for Civil Rights and Civil Liberties, which has statutory responsibility for investigating complaints alleging violations of civil rights and civil liberties.
In addition, detainees may file complaints through the Joint Intake Center (JIC), which is operated continuously by both ICE and U.S. Customs and Border Protection (CBP) personnel, and is responsible for receiving, classifying, and routing all misconduct allegations involving ICE and CBP employees, including those pertaining to detainee treatment. ICE officials told us that if the JIC were to receive an allegation from a detainee, it would be referred to the OIG. OIG may investigate the complaint or refer it to CRCL or DHS components such as the ICE Office of Professional Responsibility (OPR) for review and possible action. In turn, CRCL or OPR may retain the complaint or refer it to other DHS offices, including DRO, for possible action. Further, detainees may also file complaints with nongovernmental organizations such as ABA and UNHCR. These external organizations said they generally forward detainee complaints to DHS components for review and possible action. Of the approximately 1,700 detainee complaints in the OIG database that were filed in fiscal years 2003 through 2006, OIG investigated 173 and referred the others to other DHS components as displayed in figure 11. As discussed earlier, the OIG complaint hotline telephone number was blocked or otherwise restricted at 12 of the facilities that we visited. Therefore, while some detainees at these facilities may have filed written complaints with the OIG, the number of reported allegations may not reflect the universe of detainee complaints. OIG has a system to record the type of complaint and its status (e.g., open investigation, closed due to insufficient information, or referred). 
Our review of approximately 750 detainee complaints from fiscal years 2005 through 2006 showed that the complaints in the OIG database mostly involved issues relating to medical treatment; case management; mistreatment; protesting detention or deportation; civil rights, human rights, or discrimination; property issues; and employee misconduct at the facility. Other, less common complaints involved physical abuse, use of force, mismanagement, detainee-on-detainee violence, general abuse, food and commissary issues, general environmental concerns, and general harassment. One challenge faced by the OIG in investigating detainee complaints is that detainees generally do not stay in facilities for long periods of time, so a complainant may be relocated to another facility or returned to his or her country of origin before an investigation is initiated or completed. OPR investigates allegations of corruption and other official misconduct only after the OIG formally declines to investigate an allegation and refers it to OPR. OPR classifies allegations under four categories based on the severity of the allegation and may retain cases for investigation or refer complaints to DHS components, including DRO. OPR stated that in fiscal years 2003 through 2006, it had received 409 allegations concerning the treatment of detainees. Seven of these allegations were found to be substantiated, 26 were unfounded, and 65 were unsubstantiated. Three of the seven substantiated cases resulted in employee terminations, one resulted in an employee termination that is currently under appeal, and, according to an OPR official, three cases were still being adjudicated. Additionally, 200 of the allegations were classified by OPR either as information only to facility management, requiring no further action, or were referred to facility management for action, requiring a response. CRCL also receives complaints from the OIG, nongovernmental organizations, and members of the public.
It tracks this information in its complaint management system. Officials stated that from March 2003 to August 2006 they received 46 complaints related to the treatment of detainees. Of these 46 complaints, 14 were closed, 11 were referred to ICE OPR, 12 were retained for investigation, and 9 were pending a decision about disposition. CRCL monitors the review of all referred complaints until conclusion. We could not determine the number of cases referred to DRO or their disposition. On the basis of a limited review of DRO's complaints database and discussions with ICE officials knowledgeable about the database, we concluded that DRO's complaint database was not sufficiently reliable for audit purposes. According to ICE, DRO's complaints database is used as an internal managerial information system and is not designed to be a formal tracking mechanism. DRO is responsible for overseeing the management and operation of detention facilities. Therefore, it is important that DRO accurately document detainee complaints related to conditions of confinement to, among other things, inform its review teams and DRO management regarding the conditions at the facilities used to detain aliens. Moreover, our standards for internal control in the federal government call for clear documentation of transactions and events that is readily available for examination. Such documentation would allow for analysis that may reveal potential systemic problems throughout the detention system. In addition to our fieldwork and interviews with DHS and ICE officials regarding compliance efforts in place for alien detention facilities, we reviewed 37 detention monitoring reports compiled by UNHCR from the period 1993 to 2006. These reports were based on UNHCR's site visits, its discussions with ICE officials and facility staff, and its interviews with detainees, especially asylum seekers.
Some of the issues noted in UNHCR mission reports included inadequate access to legal materials, the lack of timely response to requests for medical service, questions about case management, the high cost of telephone calls, and problems connecting through the ICE pro bono telephone system. While ABA officials informed us that they do not keep statistics regarding complaints, on the basis of a review of their correspondence as of August 2006, they compiled a list of common detainee complaints received. Common complaints reported by ABA included the high cost of telephone calls and problems with the ICE detainee telephone system, delayed or nonarriving mail, insufficient or outdated law library materials and lack of access to law libraries, detainees housed with criminals and treated like criminals, lack of information about complaint and grievance procedures, medical and dental treatment complaints, unsanitary conditions, insufficient food, facility staff problems, and abuse by inmates or other detainees. Further, ABA data from January 2003 to February 2007 indicated that of the 1,032 correspondences it received, 710 involved legal issues, 226 involved conditions of confinement, 39 involved medical access, and 57 involved miscellaneous issues or were not categorized. While ICE annual inspection reviews of detention facilities noted various deficiencies in compliance with ICE's standards, insufficient internal controls and weaknesses in ICE's compliance review process resulted in ICE's failure to identify the telephone system problems that we found to be pervasive at most of the detention facilities we visited. In particular, because the telephone access section of the review checklist does not require reviewers to verify that pro bono calls can actually be connected, ICE reviewers failed to identify these telephone system problems.
Amendments to the checklist to include requirements to confirm that pro bono telephone call connections can be made successfully may provide for more consistent reporting of telephone problems. Also, insufficient internal controls at detention facilities for ensuring that posted pro bono telephone numbers were accurate resulted in some facilities having inaccurate or outdated number lists. Systemic problems with the pro bono telephone system may preclude detainees from reaching consulates, nongovernmental organizations, pro bono legal providers, and the OIG complaint hotline, as required in ICE’s National Detention Standards. Additionally, ICE’s limited monitoring of contractor performance data that indicated poor system performance is evidence of the need for improved internal controls and monitoring of the contract. ICE confirmed that the contractor did not comply with the terms and conditions of the contract and in June 2007 requested that the OIG review the extent of noncompliance with the terms and conditions of the contract. Given the problems with the current pro bono telephone system we found at 16 facilities we tested, it is also prudent to ensure detainees are aware of and have access to alternative means for completing calls to consulates, pro bono legal providers, and the OIG’s complaint line as required by ICE standards. Without sufficient internal control policies and procedures in place, ICE is unable to offer assurance that detainees can access legal services, file external grievances, and obtain assistance from their consulates. ICE’s lack of a formalized tracking process for documenting detainee complaints hinders its ability to (1) identify potential patterns of noncompliance that may be systemwide and (2) ensure that all detainee complaints are reviewed and acted upon if necessary. 
Because DRO is responsible for overseeing the management and operation of alien detention facilities, it is important that DRO accurately document detainee complaints related to conditions of confinement to, among other things, inform its review teams and DRO management regarding the conditions at the facilities and facilitate any required corrective action. Moreover, our standards for internal control in the federal government call for clear documentation of transactions and events that is readily available for examination. To ensure that detainees can make telephone calls to access legal services, report complaints, and obtain assistance from their respective consulates, as specified in ICE National Detention Standards, and that all detainee complaints are reviewed and acted upon as necessary, we recommend that the Secretary of Homeland Security direct the Assistant Secretary for U.S. Immigration and Customs Enforcement to take the following actions:

Amend the DRO compliance inspection process relating to the detainee telephone access standard to include measures to ensure that facility and/or ICE staff frequently test to confirm that the ICE pro bono telephone system is functioning properly, and revisions to ICE's compliance review worksheet to require ICE reviewers, while conducting annual reviews of the telephone access standard at detention facilities, to test the detainee pro bono telephone system by attempting to connect calls and to record any automated voice messages as to why a call is not being put through.

Require the posting in detention facilities of instructions for alternative means for detainees to complete calls in the event that the ICE pro bono telephone system is not functioning properly.

Direct ICE staff to establish procedures for identifying any changes to phone numbers available from EOIR, the Department of State, and the OIG and for promptly updating the pro bono telephone numbers posted in detention facilities.

Establish supervisory controls and procedures, including appropriate staffing, to ensure that DRO and Office of Acquisitions staff are properly monitoring contractor performance.

In regard to the contract with Public Communications Services, explore what recourse the government has available to it for contractor nonperformance.

In competing a new telephone contract, ensure that the new contract contains adequate protections and recourse for the government in the event of contractor nonperformance.

Develop a formal tracking system to ensure that all detainee complaints referred to DRO are reviewed and the disposition, including any corrective action, is recorded for later examination.

We provided a draft of this report to DHS for review and comment. DHS provided written comments on June 25, 2007, which are presented in appendix V. In commenting on the draft report, DHS stated that it agreed with our seven recommendations and identified corrective actions it has planned or under way to address the problems. With regard to several of our recommendations, DHS believed that its progress in implementing corrective actions merited our closing them. For example, by memo of the DRO Assistant Director for Management, effective immediately, ICE staff are to verify the serviceability of all telephones in detainee housing units by conducting random calls to pre-programmed numbers posted on the pro bono and consulate lists. ICE staff also are to interview a sampling of detainees and review written detainee complaints regarding detainee telephone access. The field office directors are to ensure that all phones in all applicable facilities are tested on a weekly basis. This appears to be a step in the right direction; however, proper implementation and oversight of this initiative will be needed to resolve the issues we identified.
While we are encouraged by DHS's plans and actions designed to address the problems we identified in our report, we have not reviewed these plans and actions to determine whether they could resolve or have resolved the problems and thus will keep the recommendations open until these actions can be evaluated for sufficiency. DHS's official comments also raised three issues that require some clarification of our findings. First, DHS stated that the deficiencies we identified generally do not illustrate a pattern of noncompliance with ICE National Detention Standards but rather are isolated incidents, the exception being telephone access. While it is true that the only pervasive problem we identified related to the telephone system, a problem later confirmed by ICE's testing, we cannot state that the other deficiencies we identified in our visits were isolated. Our findings are based on a nonprobability sample of 23 detention facilities and are not generalizable to all alien detention facilities. Moreover, we observed facility conditions at a point in time; conditions could have been different before and after our visits. Second, DHS commented that GAO personnel stated in discussions that they did not test or validate the availability of some other means for detainees to make telephone connections when the detainee phones are unable to do so. This is not the case. We checked whether the facilities we visited offered alternative means to make telephone connections when the pro bono system was not working. With the exception of the Broward Transitional Center, we were not able to satisfy ourselves through interviews with facility officials and detainees that routine assistance was available to detainees to make pro bono calls when they were unable to make these calls on the telephones provided for this purpose.
Third, DHS stated that it believes that figure 2 in our report, which shows low connection rates for the pro bono network, does not properly represent the number of calls that are not connected due to problems with the network or provider. DHS's comments included contractor data pointing out that a detainee may input a wrong number, hang up before completing the call process, or call a pro bono attorney after business hours. We acknowledge there could be a variety of reasons why some calls may not have been completed over the period we reported on. However, these additional data do not explain our own test results, in which we could not complete calls using the pro bono calling system at 16 of the 17 facilities we tested. We also note that we invited detainees, facility personnel, and on-site ICE officials to attempt to make the same calls, and they confirmed the calls could not be completed. Further, after we brought the telephone deficiencies to their attention, ICE officials concluded that the telephone service contractor had not been in compliance with the terms and conditions of the contract. DHS also provided us with technical comments, which we considered and incorporated in the report where appropriate. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are acknowledged in appendix VI. We used a combination of approaches and methodologies to meet our audit objectives to assess (1) the extent to which selected facilities comply with U.S.
Immigration and Customs Enforcement (ICE) National Detention Standards, (2) how reviews are conducted to ensure compliance with National Detention Standards, and (3) what pertinent complaints and reports have been filed with the Department of Homeland Security (DHS) and ICE and external organizations monitoring the treatment of alien detainees. To meet our audit objective on the extent to which selected facilities comply with National Detention Standards, we visited a nonprobability sample of 23 detention facilities from September to November 2006. These facilities were selected on the basis of geographic diversity, facility type, and a cross section of different types of facility ratings. Three of the facilities were undergoing ICE headquarters compliance inspection reviews during our visits: the Elizabeth Detention Facility, the San Diego Correctional Facility, and the Denver Contract Detention Facility. Site visits focused on several detention standards affecting basic treatment of detainees, including telephone access, medical care, legal access, food, grievance and complaint procedures, use of force policies, recreation, and hold room policy. On the basis of our interviews with officials from the United Nations High Commissioner for Refugees (UNHCR), the American Bar Association (ABA), and the DHS Office of Inspector General (OIG), we selected a nonprobability sample of these standards to reflect areas of concern and common complaints cited by these organizations. Our site visits ranged from 1 to 3 days. At each location, we collected information and made independent observations using a data collection instrument that we developed from the ICE reviewer checklist in consultation with ICE and nongovernmental organizations monitoring the treatment of alien detainees.
Our data collection instrument was not the ICE compliance reviewer checklist in its entirety, but instead included key components drawn from the checklist that we believed to be relevant to the standards we reviewed. We used this instrument to determine if there were deficiencies in compliance with one or more aspects of selected ICE standards. For our site visits, we first requested a tour of the key detention facility operations, including housing unit areas, medical units, kitchens, hold rooms, special management or disciplinary units, recreational areas, law libraries, visitation areas, and educational classrooms when applicable. During these tours, we interviewed facility staff and ICE officers assigned to the facility. In addition, when presented with an opportunity, we interviewed individual detainees concerning their treatment at detention facilities. Due to the limited amount of time we had available at each facility and our desire to visit as many facilities as possible for this review, we did not conduct structured file reviews. However, we did review policies and procedures pertaining to detainee conditions of treatment and interviewed facility and ICE staff responsible for compliance with the standards that we reviewed. For those detention facility locations undergoing ICE compliance inspection reviews at the time of our visit, we made use of our data collection instrument but supplemented it by observing the compliance review process. For juvenile detention facilities, in addition to our data collection instrument, we also referred to ICE's juvenile standards of treatment to guide our reviews of those facilities. For family shelters included in our review, our observations were based on both the juvenile standards and the National Detention Standards for adults because no specific standards for family detention facilities exist.
Once problems with detainee access to pro bono numbers were identified, we developed a structured telephone test instrument to provide uniformity with our observations across the sites visited. The detention sites visited are:

Berks County Prison
Berks County Youth Shelter
Berks Family Shelter
York County Prison
Elizabeth Detention Facility
Washington Field Office
Pamunkey Regional Jail
Hampton Roads Regional Jail
West Baton Rouge Parish Detention Center
Bureau of Prisons Oakdale Federal Detention Center
Pine Prairie Correctional Center
Krome Service Processing Center
Broward Transitional Center
Wakulla County Sheriff's Office
Los Angeles Field Office
San Pedro Service Processing Center
North Las Vegas Detention Center
El Centro Service Processing Center
San Diego Correctional Facility
Casa de San Juan Family Shelter
Denver Contract Detention Facility
Seattle Field Office
Northwest Detention Center
Cowlitz County Juvenile Detention Center
T. Don Hutto Family Facility
Laredo Contract Detention Facility

To describe how reviews are conducted to ensure compliance with National Detention Standards, we interviewed DHS and ICE officials and analyzed documentation on staffing levels for ensuring compliance with alien detention standards, training provided to staff to ensure compliance with the standards, and processes in place to ensure compliance with the standards. To assess what pertinent complaints and reports have been filed with DHS and ICE and external organizations monitoring the treatment of alien detainees, we analyzed data on detainee complaints from the DHS OIG complaint database for fiscal years 2005 through 2006 using content analysis. For our content analysis, we reviewed about 750 detainee complaints and categorized them by type to be able to characterize common types of detainee complaints received by the OIG.
We also obtained and reviewed information on the number and types of detainee complaints received from components within DHS and ICE for the period fiscal years 2003-2006 as well as UNHCR for the period 1993 through 2006 and the ABA for the period January 2003 through February 2007. We did not independently assess the merits of detainee complaints. Also, we did not determine if any corrective actions suggested for the detention facilities as a result of Office of Detention and Removal (DRO) reviews or detainee complaints were implemented. To assess the reliability of OIG detainee complaint data for the period fiscal years 2005 through 2006, we reviewed existing information about the data and the system that produced them and interviewed agency officials knowledgeable about the data. On the basis of our review of this information and these discussions, we determined the OIG data to be sufficiently reliable for our purposes. Regarding the DRO complaints database, we reviewed existing information about the data and the system that produced them and interviewed agency officials knowledgeable about the data. On the basis of our review of this information and these discussions, we determined that this information was not sufficiently reliable for audit purposes. In regard to other data sources that we reviewed, on the basis of our discussions with agency officials knowledgeable about the data, we also determined that these sources were sufficiently reliable for purposes of our review. In the case of contractor performance data, we could not independently verify the accuracy of the data, but did corroborate these data with other sources and determined they were reliable for our purposes. Our work was conducted from May 2006 through May 2007 in accordance with generally accepted government auditing standards. 
While we focused our review on DHS and ICE, we also contacted officials at the Departments of Justice, Health and Human Services, and State to discuss other issues related to alien detention, such as the custody of ICE alien detainees at Bureau of Prisons facilities, the care of juvenile aliens in Health and Human Services custody, and the treatment of refugees and asylum seekers in detention. According to ICE data, the average length of stay in ICE adult detention custody for fiscal year 2007, as of April 2007, was 37.6 days. As of April 30, 2007, ICE reported that 25 percent of all detained aliens are removed within 4 days, 50 percent within 18 days, 75 percent within 44 days, 90 percent within 85 days, 95 percent within 126 days, and 98 percent within 210 days. According to ICE officials, many variables affect length of stay at a detention facility, including travel document requirements, political conditions, and airline service to the country of origin. Figure 12 shows the breakdown between criminal and noncriminal aliens in detention. Figure 13 shows the breakdown of criminal charges among criminal aliens. In fiscal year 2006, ICE was funded at an authorized bed level of 20,800. For fiscal year 2007, ICE received funding for 27,500 bed spaces. ICE has requested appropriations to fund 28,450 detention bed spaces in fiscal year 2008. As noted in ICE’s ENDGAME: Office of Detention and Removal Strategic Plan, 2003-2012, the demand for detention has grown much faster than available federal bed space, causing an increased reliance on local jails to house detainees. ICE stated that this factor is critical because DRO has more stringent jail standards than other entities, limiting the number of jails that it can use. The Office of the National Juvenile Coordination Unit within U.S. 
Immigration and Customs Enforcement’s Office of Detention and Removal provides oversight and policy guidance to ICE/DRO field offices and field office juvenile coordinators nationwide on issues related to juveniles and families in detention. Within DRO, there are separate standards governing juvenile, family, and adult detention facilities. These standards were developed to ensure proper and safe housing of juveniles and families. Specifically, the family shelter standards are tailored to a unique population and encompass both the Juvenile Detention Standards and the ICE National Detention Standards for adult aliens. Furthermore, the Minimum Standards for ICE Secure and Shelter Juvenile Detention Facilities are based on American Correctional Association standards, were developed with input from ICE DRO and the American Bar Association, and include program requirements contained in the Flores court case settlement. The Juvenile Detention Standards reflect the needs of the juvenile population and include such areas as access to courts and legal counsel, educational and vocational training, and medical services and visitation. Currently, ICE has 19 juvenile facilities and 3 family shelters. According to ICE policy, all juvenile secure detention and shelter facilities and family shelter facilities are required to be inspected at 12-month intervals. As with adult detention facilities, the Detention Management Control Program policies and procedures govern the review of ICE juvenile and family facilities. Reviews are conducted through the use of structured review worksheets. Facilities receive one of five final ratings upon review: superior, good, acceptable, deficient, and at risk. According to the National Juvenile Coordinator, juvenile facilities rated deficient or at risk are immediately reviewed by DRO headquarters to determine the suitability of use for placement of ICE juveniles. 
Our observations of deficiencies in compliance with ICE standards at juvenile and family shelters are discussed in the body of the report. Figures 14, 15, and 16 show examples of juvenile and family shelter facilities. In addition to the contact listed above, William W. Crocker III, Assistant Director; Minty M. Abraham; Frances A. Cook; Katherine M. Davis; Dorian R. Dunbar; Cindy K. Gilbert; Lemuel N. Jackson; Robert D. Lowthian; Victoria E. Miller; and William T. Woods made key contributions to this report.
The total number of aliens detained per year by the Department of Homeland Security's (DHS) U.S. Immigration and Customs Enforcement (ICE) increased from about 95,000 in fiscal year 2001 to 283,000 in 2006. The care and treatment of these detained aliens is a significant challenge to ICE. GAO was asked to review ICE's implementation of its detention standards for aliens in its custody. GAO reviewed (1) detention facilities' compliance with ICE's detention standards, (2) ICE's compliance review process, and (3) how detainee complaints regarding conditions of confinement are handled. To conduct its work, GAO reviewed DHS documents, interviewed program officials, and visited 23 detention facilities of varying size, type, and geographic location. GAO's observations at 23 alien detention facilities showed systemic telephone system problems at 16 of 17 facilities that use the pro bono telephone system, but no pattern of noncompliance for the other standards GAO reviewed. At facilities that use the ICE detainee pro bono telephone system, GAO encountered significant problems in making connections to consulates, pro bono legal providers, or the DHS Office of the Inspector General (OIG) complaint hotline. Monthly performance data provided by the phone system contractor indicate that the rate of successful connections through the detainee pro bono telephone system was never above 74 percent. ICE officials stated there was little oversight of the telephone contract. In June 2007, ICE requested an OIG audit of the contract, stating that the contractor did not comply with the terms and conditions of the contract. Other instances of deficiencies GAO observed varied across the facilities visited but did not appear to show a pattern of noncompliance. These deficiencies involved medical care, use of hold rooms, use of force, food service, recreational opportunities, access to legal materials, facility grievance procedures, and overcrowding. 
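The connection-rate finding rests on straightforward arithmetic: a month's success rate is successful connections divided by call attempts. The sketch below illustrates the computation; the monthly figures are hypothetical, not the contractor's actual performance data.

```python
def connection_success_rate(attempts, successes):
    """Share of detainee pro bono telephone call attempts that connected."""
    if attempts <= 0:
        raise ValueError("no call attempts recorded for this month")
    return successes / attempts

# Hypothetical monthly (attempts, successful connections) pairs -- not
# the contractor's actual reported data.
monthly = {
    "2006-10": (1000, 700),
    "2006-11": (1200, 888),
    "2006-12": (900, 594),
}

rates = {month: connection_success_rate(a, s)
         for month, (a, s) in monthly.items()}
best_month_rate = max(rates.values())  # highest monthly success rate observed
```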
ICE annual compliance reviews of detention facilities identified deficiencies similar to those found by GAO. However, insufficient internal controls and weaknesses in ICE's compliance review process resulted in ICE's failure to identify telephone system problems at most facilities GAO visited. ICE's inspection worksheet used by its detention facility reviewers did not require that a reviewer confirm that detainees are able to make successful connections through the detainee pro bono telephone system. Detainee complaints may be filed with several governmental and nongovernmental organizations. Detainee complaints mostly involved legal access, conditions of confinement, property issues, human and civil rights, medical care, and employee misconduct at the facility. The primary way for detainees to file complaints is to contact the OIG. OIG investigates the most serious complaints and refers the remainder to other DHS components.
In March 2014 and April 2015, we reported that CBP had made progress in deploying programs under the Arizona Border Surveillance Technology Plan, but that CBP could take additional action to strengthen its management of the Plan and the Plan’s programs. As of May 2016, CBP has initiated or completed deployment of technology to Arizona for each of the programs under the Plan. Additionally, as discussed further below, CBP has reported taking steps to update program schedules and life-cycle cost estimates for the three highest-cost programs under the Plan. For example, in May 2016, CBP provided us with complete schedules for two of the programs, and we will be reviewing them to determine the extent to which they address our recommendation. In March 2014, we found that CBP had a schedule for deployment of each of the Plan’s seven programs, and that four of the programs would not meet their originally planned completion dates. We also found that some of the programs had experienced delays relative to their baseline schedules, as of March 2013. Further, in our March 2016 assessment of DHS’s major acquisition programs, we reported on the status of the Plan’s Integrated Fixed Tower (IFT) program, noting that from March 2012 to January 2016, the program’s initial and full operational capability dates had slipped. Specifically, we reported that the initial operational capability date had slipped from the end of September 2013 to the end of September 2015, and the full operational capability date to the end of September 2020. We also reported that this slippage in initial operational capability dates had contributed to slippage in the IFT’s full operational capability, primarily as a result of funding shortfalls, and that the IFT program continued to face significant funding shortfalls from fiscal year 2016 to fiscal year 2020. 
Despite these delays, as of May 2016 CBP reported that it has initiated or completed deployment of technology to Arizona for each of the three highest-cost programs under the plan—IFT, the Remote Video Surveillance System (RVSS), and the Mobile Surveillance Capability (MSC). Specifically, CBP officials stated that MSC deployments in Arizona are complete and that in April 2016, requirements to transition sustainment from the contractor to CBP had been finalized. CBP also reported that the RVSS system has been deployed, and testing on these systems is ongoing in four out of five stations. Further, CBP reported it had initiated deployment of the IFT systems and as of May 2016 has deployed 7 out of 53 IFTs in one area of responsibility. CBP conditionally accepted the system in March 2016 and is working to deploy the remaining IFT unit systems to other areas in the Tucson sector. With regard to schedules, we previously reported that CBP had at least partially met the four characteristics of reliable schedules for the IFT and RVSS schedules and partially or minimally met the four characteristics for the MSC schedule. Scheduling best practices are summarized into four characteristics of reliable schedules—comprehensive, well constructed, credible, and controlled (i.e., schedules are periodically updated and progress is monitored). We assessed CBP’s schedules as of March 2013 for the three highest-cost programs and reported in March 2014 that schedules for two of the programs at least partially met each characteristic (i.e., satisfied about half of the criterion), and the schedule for the other program at least minimally met each characteristic (i.e., satisfied a small portion of the criterion). 
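The parenthetical definitions above ("partially met" means about half the criterion satisfied, "minimally met" a small portion) suggest a rating scale keyed to the fraction of a scheduling criterion satisfied. The sketch below is our own illustration of such a mapping; the numeric thresholds are assumptions for the example, not GAO's published scale.

```python
def rate_characteristic(fraction_satisfied):
    """Map the fraction of a scheduling criterion satisfied to a rating label.

    Thresholds are illustrative assumptions loosely following the report's
    parentheticals, not GAO's published scale.
    """
    if not 0.0 <= fraction_satisfied <= 1.0:
        raise ValueError("fraction_satisfied must be between 0 and 1")
    if fraction_satisfied >= 0.9:
        return "met"
    if fraction_satisfied >= 0.67:
        return "substantially met"
    if fraction_satisfied >= 0.34:
        return "partially met"
    if fraction_satisfied > 0.0:
        return "minimally met"
    return "not met"
```

Under this kind of scale, a schedule that satisfied roughly half of each criterion would be rated "partially met" on all four characteristics, matching the shorthand used in the report.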
For example, the schedule for the IFT program partially met the characteristic of being credible in that CBP had performed a schedule risk analysis for the program, but the risk analysis did not include the risks most likely to delay the project or how much contingency reserve was needed. For the MSC program, the schedule minimally met the characteristic of being controlled in that it did not have valid baseline dates for activities or milestones by which CBP could track progress. We recommended that CBP ensure that scheduling best practices are applied to the IFT, RVSS, and MSC schedules. DHS concurred with the recommendation and stated that CBP planned to ensure that scheduling best practices would be applied, as outlined in our schedule assessment guide, when updating the three programs’ schedules. In May 2016, CBP provided us with complete schedules for the IFT and RVSS programs, and we will be reviewing them to determine the extent to which they address our recommendation. In March 2014, we also found that CBP had not developed an Integrated Master Schedule for the Plan in accordance with best practices. Rather, CBP had used separate schedules for each program to manage implementation of the Plan, as CBP officials stated that the Plan contains individual acquisition programs rather than integrated programs. However, collectively these programs are intended to provide CBP with a combination of surveillance capabilities to be used along the Arizona border with Mexico, and resources are shared among the programs. According to scheduling best practices, an Integrated Master Schedule is a critical management tool for complex systems that involve a number of different projects, such as the Plan, to allow managers to monitor all work activities, how long activities will take, and how the activities are related to one another. 
We concluded that developing and maintaining an Integrated Master Schedule for the Plan could help provide CBP a comprehensive view of the Plan and help CBP better understand how schedule changes in each individual program could affect implementation of the overall plan. We recommended that CBP develop an Integrated Master Schedule for the Plan. CBP did not concur with this recommendation and maintained that an Integrated Master Schedule for the Plan in one file undermines the DHS-approved implementation strategy for the individual programs making up the Plan, and that the implementation of this recommendation would essentially create a large, aggregated program, and effectively create an aggregated “system of systems.” DHS further stated that a key element of the Plan has been the disaggregation of technology procurements. However, as we noted in the 2014 report, collectively these programs are intended to provide CBP with a combination of surveillance capabilities to be used along the Arizona border with Mexico. Moreover, while the programs themselves may be independent of one another, the Plan’s resources are being shared among the programs. We continue to believe that developing an Integrated Master Schedule for the Plan is needed. Developing and maintaining an integrated master schedule for the Plan could allow CBP insight into current or programmed allocation of resources for all programs as opposed to attempting to resolve any resource constraints for each program individually. In addition, in March 2014, we reported that the life-cycle cost estimates for the Plan reflected some, but not all, best practices. Cost-estimating best practices are summarized into four characteristics—well documented, comprehensive, accurate, and credible. 
Our analysis of CBP’s estimate for the Plan and estimates completed at the time of our review for the two highest-cost programs—the IFT and RVSS programs— showed that these estimates at least partially met three of these characteristics: well documented, comprehensive, and accurate. In terms of being credible, these estimates had not been verified with independent cost estimates in accordance with best practices. We concluded that ensuring that scheduling best practices were applied to the programs’ schedules and verifying life-cycle cost estimates with independent estimates could help better ensure the reliability of the schedules and estimates, and we recommended that CBP verify the life-cycle cost estimates for the IFT and RVSS programs with independent cost estimates and reconcile any differences. DHS concurred with this recommendation, but stated then that it did not believe that there would be a benefit in expending funds to obtain independent cost estimates and that if the costs realized to date continued to hold, there may be no requirement or value added in conducting full-blown updates with independent cost estimates. We recognize the need to balance the cost and time to verify the life-cycle cost estimates with the benefits to be gained from verification with independent cost estimates. CBP officials stated that in fiscal year 2016, DHS’s Cost Analysis Division would begin piloting DHS’s independent cost estimate capability on the RVSS program. According to CBP officials, this pilot is an opportunity to assist DHS in developing its independent cost estimate capability and that CBP selected the RVSS program for the pilot because the program is at a point in its planning and execution process where it can benefit most from having an independent cost estimate performed as these technologies are being deployed along the southwest border, beyond Arizona. 
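Reconciling a program cost estimate with an independent cost estimate amounts to quantifying the gap between the two and flagging differences large enough to need explanation. The following is a minimal sketch of that arithmetic; the dollar figures and the 10 percent review threshold are hypothetical assumptions, not CBP's actual estimates or policy.

```python
def reconcile(program_estimate, independent_estimate, tolerance=0.10):
    """Return the relative difference between two cost estimates and whether
    it exceeds a review threshold (10 percent by default -- an assumption
    for this sketch, not an actual DHS or GAO threshold)."""
    if program_estimate <= 0:
        raise ValueError("program estimate must be positive")
    relative_difference = (independent_estimate - program_estimate) / program_estimate
    return relative_difference, abs(relative_difference) > tolerance

# Hypothetical life-cycle cost estimates, in millions of dollars.
relative, needs_explanation = reconcile(program_estimate=300.0,
                                        independent_estimate=345.0)
```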
CBP officials stated that details for an estimated independent cost estimate schedule and analysis plan for the RVSS program have not been finalized. CBP plans to provide an update on the schedule and analysis plan as additional details become available, and to provide information on the final reconciliation of the independent cost estimate and the RVSS program cost estimate once the pilot has been completed at the end of fiscal year 2017. Further, CBP officials have not detailed similar plans for the IFT. We continue to believe that independently verifying the life-cycle cost estimates for the IFT and RVSS programs and reconciling any differences, consistent with best practices, could help CBP better ensure the reliability of the estimates. We reported in March 2014 that CBP had identified mission benefits of its surveillance technologies to be deployed under the Plan, such as improved situational awareness and agent safety. However, the agency had not developed key attributes for performance metrics for all surveillance technologies to be deployed as part of the Plan, as we recommended in November 2011. Further, in March 2014, we found that CBP did not capture complete data on the contributions of these technologies, which, in combination with other relevant performance metrics or indicators, could be used to better determine the impact of CBP’s surveillance technologies on CBP’s border security efforts and inform resource allocation decisions. Although CBP had a field within its Enforcement Integrated Database for data on whether technological assets, such as SBInet surveillance towers, and nontechnological assets, such as canine teams, assisted or contributed to the apprehension of illegal entrants and seizure of drugs and other contraband, according to CBP officials, Border Patrol agents were not required to record these data. 
This limited CBP’s ability to collect, track, and analyze available data on asset assists to help monitor the contribution of surveillance technologies, including its SBInet system, to Border Patrol apprehensions and seizures and inform resource allocation decisions. We recommended that CBP require data on asset assists to be recorded and tracked within its database, and once these data were required to be recorded and tracked, that it analyze available data on apprehensions and technological assists— in combination with other relevant performance metrics or indicators, as appropriate— to determine the contribution of surveillance technologies to CBP’s border security efforts. CBP concurred with our recommendations and has implemented one of them. Specifically, in June 2014, CBP issued guidance informing Border Patrol agents that the asset assist data field within its database was now a mandatory data field. Agents are required to enter any assisting surveillance technology or other equipment before proceeding. Further, as of May 2015, CBP had identified a set of potential key attributes for performance metrics for all technologies to be deployed under the Plan. However, CBP officials stated that this set of performance metrics was under review as the agency continued to refine the key attributes for metrics to assess the contributions and impacts of surveillance technology on its border security mission. In our March 2016 update on the progress made by agencies to address our findings on duplication and cost savings across the federal government, we reported that CBP had modified its time frame for developing baselines for each performance measure and that additional time would be needed to implement and apply key attributes for metrics. 
According to CBP officials, CBP expected these performance measure baselines to be developed by the end of calendar year 2015, at which time the agency planned to begin using the data to evaluate the individual and collective contributions of specific technology assets deployed under the Plan. Moreover, CBP planned to use the baseline data to establish a tool that explains the qualitative and quantitative impacts of technology and tactical infrastructure on situational awareness in specific areas of the border environment by the end of fiscal year 2016. While CBP had expected to complete its development of baselines for each performance measure by the end of calendar year 2015, as of March 2016 the actual completion date was being adjusted pending test and evaluation results for recently deployed technologies on the southwest border. Until CBP completes its efforts to fully develop and apply key attributes for performance metrics for all technologies to be deployed under the Plan, it will not be well positioned to fully assess its progress in implementing the Plan and determine when mission benefits have been fully realized. Our ongoing work shows that as of May 2016, CBP operates nine Predator B aircraft from four AMO National Air Security Operations Centers (NASOC) located in Sierra Vista, Arizona; Grand Forks, North Dakota; Corpus Christi, Texas; and Jacksonville, Florida. Three Predator B aircraft are assigned to each of the NASOCs in Arizona, North Dakota, and Texas, while the NASOC in Florida remotely operates Predator B aircraft launched from the other NASOCs. AMO began operation of Predator B aircraft in fiscal year 2006, and all four NASOCs became operational in fiscal year 2011. See figure 1 for a photograph of a CBP Predator B aircraft. CBP’s Predator B aircraft may be equipped with video and radar sensors utilized primarily to support the operations of other CBP components and federal, state, and local law enforcement agencies. 
CBP’s Predator B operations in support of its components and other law enforcement agencies include patrol missions to detect the illegal entry of goods and people at and between U.S. POEs and investigative missions to provide aerial support for law enforcement activities and investigations. For example, CBP’s Predator B video and radar sensors support Border Patrol activities to identify and apprehend individuals entering the United States between POEs. CBP collects and tracks information on the number of assists Predator B aircraft provide for apprehensions of individuals and seizures of contraband, including narcotics, in support of law enforcement operations. In addition, CBP’s Predator B aircraft have been deployed to provide aerial support for monitoring natural disasters such as wildfires and floods. For example, CBP’s Predator B aircraft were deployed in 2010 and 2011 to support federal, state, and local government agencies in response to flooding in the Red River Valley area of North Dakota. CBP’s Predator B aircraft operate in the U.S. national airspace system in accordance with Federal Aviation Administration (FAA) requirements for authorizing all UAS operations in the National Airspace System. In accordance with FAA requirements, all Predator B flights must comply with a Certificate of Waiver or Authorization (COA). The COA-designated airspace establishes operational corridors for Predator B activity both along and within 100 miles of the northern border, and along and within 25 to 60 miles of the southern border, exclusive of urban areas. COAs issued by FAA to CBP also include airspace for training missions, which involve takeoffs and landings around a designated NASOC, and transit missions to move Predator B aircraft between NASOCs. As of May 2016, CBP has utilized the NASOC in North Dakota as a location to train new and existing CBP Predator B pilots. 
For our ongoing work, we analyzed CBP data on reported Predator B COA-designated flight hours from fiscal years 2011 to 2015 and found that 81 percent of flight hours were associated with COA-designated airspace along border and coastal areas. For more information on Predator B flight hours in COA-designated airspace, see figure 2. Based on our ongoing work, we found that airspace access and weather can affect CBP’s ability to utilize Predator B aircraft. According to CBP officials we spoke with in Arizona, Predator B flights may be excluded from restricted airspace managed by the Department of Defense along border areas, which can affect the ability of Predator B aircraft to support Border Patrol. CBP officials we spoke with in Arizona and Texas told us that Predator B missions are affected by hazardous weather conditions that can limit their ability to operate the aircraft. According to CBP officials we spoke with in Texas, CBP took steps to mitigate the impact of hazardous weather in January and February 2016 by deploying one Predator B aircraft from Corpus Christi, Texas, to San Angelo Regional Airport in San Angelo, Texas, which had favorable weather conditions. CBP’s deployment of a Predator B aircraft at San Angelo Regional Airport was in accordance with an FAA-issued COA to conduct its border security mission in Texas and lasted approximately 3 weeks. We plan to evaluate how these factors affect CBP’s utilization of Predator B aircraft as part of our ongoing work. Our ongoing work shows that as of May 2016, CBP has deployed six tactical aerostats along the U.S.-Mexico border in south Texas to support Border Patrol. Specifically, CBP deployed five tactical aerostats in Border Patrol’s Rio Grande Valley sector and one tactical aerostat in the Laredo sector. 
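The 81 percent figure for flight hours in border and coastal COA-designated airspace reflects a simple share-of-total aggregation over flight hours grouped by airspace type. The sketch below illustrates the computation; the hours and airspace categories are hypothetical, not CBP's reported data.

```python
def share_of_hours(hours_by_airspace, selected_categories):
    """Fraction of total flight hours flown in the selected airspace categories."""
    total = sum(hours_by_airspace.values())
    if total == 0:
        raise ValueError("no flight hours recorded")
    selected = sum(hours_by_airspace.get(c, 0) for c in selected_categories)
    return selected / total

# Hypothetical COA-designated flight hours by airspace type -- not CBP data.
hours = {"border": 6000, "coastal": 2100, "training": 1400, "transit": 500}
border_coastal_share = share_of_hours(hours, ["border", "coastal"])
```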
CBP utilizes three types of tactical aerostats equipped with cameras for capturing full-motion video: the Persistent Threat Detection System (PTDS), the Persistent Ground Surveillance System (PGSS), and the Rapid Aerostat Initial Deployment (RAID). Each type of tactical aerostat varies in size and altitude of operation. See figure 3 for a photograph of a RAID aerostat. CBP owns the RAID aerostats and leases the PTDS and PGSS aerostats through the Department of Defense. CBP operates its tactical aerostats in accordance with FAA regulations through the issuance of a COA. Tactical aerostats were first deployed and evaluated by CBP in August 2012 in south Texas. CBP’s Office of Technology Innovation and Acquisition manages aerostat technology and the operation of each site through contracts, while Border Patrol agents operate tactical aerostat cameras and provide security at each site. As of May 2016, Border Patrol has taken actions to track the contribution of tactical aerostats to its mission activities. Specifically, agents track and record the number of assists aerostats provide for apprehensions of individuals and seizures of contraband and narcotics. Based on our ongoing work, we found that airspace access, weather, and real estate can affect CBP’s ability to deploy and utilize tactical aerostats in south Texas.

Airspace access: aerostat site placement is subject to FAA approval to ensure the aerostat does not interfere with dedicated flight paths.

Weather: aerostat flight is subject to weather restrictions, such as hazardous weather involving high winds or storms.

Real estate: aerostat sites utilized by CBP involve access to private property and landowner acceptance, and right of entry is required prior to placement.

In addition, CBP must take into consideration any relevant environmental and wildlife impacts prior to deployment of a tactical aerostat, such as flood zones, endangered species, and migratory animals, among others. 
We plan to evaluate how these factors affect CBP’s utilization of tactical aerostats as part of our ongoing work. Chairwoman McSally, Ranking Member Vela, and members of the subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have. For further information about this testimony, please contact Rebecca Gambler at (202) 512-8777 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement included Kirk Kiester (Assistant Director), as well as Jeanette Espinola, Yvette Gutierrez, Amanda Miller, Jon Najmi, and Carl Potenzieri.

2016 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-16-375SP. Washington, D.C.: April 13, 2016.

Homeland Security Acquisitions: DHS Has Strengthened Management, but Execution and Affordability Concerns Endure. GAO-16-338SP. Washington, D.C.: March 31, 2016.

Southwest Border Security: Additional Actions Needed to Assess Resource Deployment and Progress. GAO-16-465T. Washington, D.C.: March 1, 2016.

GAO Schedule Assessment Guide: Best Practices for Project Schedules. GAO-16-89G. Washington, D.C.: December 2015.

Border Security: Progress and Challenges in DHS’s Efforts to Implement and Assess Infrastructure and Technology. GAO-15-595T. Washington, D.C.: May 13, 2015.

Homeland Security Acquisitions: Addressing Gaps in Oversight and Information is Key to Improving Program Outcomes. GAO-15-541T. Washington, D.C.: April 22, 2015.

Homeland Security Acquisitions: Major Program Assessments Reveal Actions Needed to Improve Accountability. GAO-15-171SP. Washington, D.C.: April 22, 2015.

2015 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-15-404SP. Washington, D.C.: April 14, 2015. 
Border Security: Additional Efforts Needed to Address Persistent Challenges in Achieving Radio Interoperability. GAO-15-201. Washington, D.C.: March 23, 2015.

Unmanned Aerial Systems: Department of Homeland Security’s Review of U.S. Customs and Border Protection’s Use and Compliance with Privacy and Civil Liberty Laws and Standards. GAO-14-849R. Washington, D.C.: September 30, 2014.

Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness. GAO-14-411T. Washington, D.C.: March 12, 2014.

Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness. GAO-14-368. Washington, D.C.: March 3, 2014.

Border Security: Progress and Challenges in DHS Implementation and Assessment Efforts. GAO-13-653T. Washington, D.C.: June 27, 2013.

Border Security: DHS’s Progress and Challenges in Securing U.S. Borders. GAO-13-414T. Washington, D.C.: March 14, 2013.

Border Security: Opportunities Exist to Ensure More Effective Use of DHS’s Air and Marine Assets. GAO-12-518. Washington, D.C.: March 30, 2012.

U.S. Customs and Border Protection’s Border Security Fencing, Infrastructure and Technology Fiscal Year 2011 Expenditure Plan. GAO-12-106R. Washington, D.C.: November 17, 2011.

Arizona Border Surveillance Technology: More Information on Plans and Costs Is Needed before Proceeding. GAO-12-22. Washington, D.C.: November 4, 2011.

GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. GAO-09-3SP. Washington, D.C.: March 2009.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
CBP employs surveillance technologies, UAS, and other assets to help secure the border. For example, in January 2011, CBP developed the Arizona Border Surveillance Technology Plan, which includes seven acquisition programs related to fixed and mobile surveillance systems, among other assets. CBP has also deployed UAS, including Predator B aircraft, as well as tactical aerostats to help secure the border. In recent years, GAO has reported on a variety of CBP border security programs and operations. This statement addresses (1) GAO findings on DHS's efforts to implement the Arizona Border Surveillance Technology Plan and (2) preliminary observations related to GAO's ongoing work on CBP's use of UAS and tactical aerostats for border security. This statement is based on GAO products issued from November 2011 through April 2016, along with selected updates conducted in May 2016. For ongoing work related to UAS, GAO reviewed CBP documents and analyzed Predator B flight hour data from fiscal years 2011 through 2015, the time period when all Predator B centers became operational. GAO also conducted site visits in Texas and Arizona to view operation of Predator B aircraft and tactical aerostats and interviewed CBP officials responsible for these operations. GAO reported in March 2014 and April 2015 that U.S. Customs and Border Protection (CBP), within the Department of Homeland Security (DHS), had made progress in deploying programs under the Arizona Border Surveillance Technology Plan (the Plan), but could take additional actions to strengthen its management of the Plan and its related programs. Specifically, in March 2014 GAO reported that CBP's schedules and life-cycle cost estimates for the Plan and its three highest-cost programs—which represented 97 percent of the Plan's total estimated cost—met some but not all best practices. 
GAO recommended that CBP ensure that its schedules and cost estimates more fully address best practices, such as validating cost estimates with independent estimates, and DHS concurred. As of May 2016, CBP has initiated or completed deployment of technology for each of the three highest-cost programs under the Plan, and reported updating some program schedules and cost estimates. For example, in May 2016, CBP provided GAO with complete schedules for two of the programs, and GAO will be reviewing them to determine the extent to which they address GAO's recommendation. GAO also reported in March 2014 that CBP had identified mission benefits of technologies under the Plan, such as improved situational awareness, but had not developed key attributes for performance metrics for all technologies, as GAO recommended in November 2011. As of May 2015, CBP had identified a set of potential key attributes for performance metrics for deployed technologies and expected to complete its development of baselines for measures by the end of 2015. In March 2016, GAO reported that CBP was adjusting the completion date to incorporate pending test and evaluation results for recently deployed technologies under the Plan. GAO's ongoing work on CBP's use of unmanned aerial systems (UAS) for border security shows that CBP operates nine Predator B aircraft in U.S. airspace in accordance with Federal Aviation Administration (FAA) requirements. Specifically, CBP's Air and Marine Operations operates the aircraft in accordance with FAA certificates of waiver or authorization for a variety of activities, such as training flights and patrol missions to support the U.S. Border Patrol's (Border Patrol) efforts to detect and apprehend individuals illegally crossing into the United States between ports of entry. Predator B aircraft are currently equipped with a combination of video and radar sensors that provide information on cross-border illegal activities to supported agencies. 
CBP data show that over 80 percent of Predator B flight hours were in airspace encompassing border and coastal areas from fiscal years 2011 through 2015. CBP officials stated that airspace access and hazardous weather can affect CBP's ability to utilize Predator B aircraft for border security activities. GAO's ongoing work shows that CBP has deployed six tactical aerostats—relocatable unmanned buoyant craft tethered to the ground and equipped with cameras for capturing full-motion video—along the U.S.-Mexico border in south Texas to support Border Patrol. CBP operates three types of tactical aerostats, which vary in size and altitude of operation. CBP officials reported that airspace access, hazardous weather, and real estate (e.g., access to private property) can affect CBP's ability to deploy and utilize tactical aerostats. Border Patrol has taken actions to track the contribution of tactical aerostats to its mission activities. GAO has previously made recommendations to DHS to improve its management of plans and programs for surveillance technologies and DHS generally agreed.
DHS approached the 9/11 Commission Act requirement for a quadrennial homeland security review in three phases. In the first phase, DHS defined the nation’s homeland security interests, identified the critical homeland security missions, and developed a strategic approach to those missions by laying out the principal goals, objectives, and strategic outcomes for the mission areas. DHS reported on the results of this effort in the February 2010 QHSR report in which the department identified 5 homeland security missions, 14 associated goals, and 43 objectives, as shown in figure 1. The QHSR report also identified a strategy for maturing and strengthening the homeland security enterprise, with 18 associated objectives. In the second phase—the BUR—DHS identified its component agencies’ activities, aligned those activities with the QHSR missions and goals, and made recommendations for improving the department’s organizational alignment and business processes. DHS reported on the results of this second phase in the July 2010 BUR report. In the third phase DHS developed its budget plan necessary to execute the QHSR missions. DHS presented this budget plan in the President’s fiscal year 2012 budget request, issued February 14, 2011, and the accompanying Fiscal Year 2012-2016 FYHSP, issued in May 2011. DHS officials stated that together, these three phases and their resulting reports and documents address the 9/11 Commission Act requirement for the quadrennial homeland security review. DHS initiated the QHSR in August 2007. Led by the DHS Office of Policy, in July 2009 the department issued its QHSR terms of reference, outlining the framework for conducting the quadrennial review and identifying threats and assumptions to be used in conducting the review. 
Through the terms of reference, DHS identified the initial four homeland security missions to be studied, which were refined during the QHSR process— Counterterrorism and Domestic Security Management; Securing Our Borders; Smart and Tough Enforcement of Immigration Laws; and Preparing For, Responding To, and Recovering from Disasters—as well as three other separate, nonmission study areas to be part of the review—DHS Strategic Management, Homeland Security National Risk Assessments, and Homeland Security Planning and Capabilities. The fifth QHSR mission on Safeguarding and Securing Cyberspace was added after DHS issued the terms of reference. A sixth category of DHS activities—Providing Essential Support to National and Economic Security—was added in the fiscal year 2012 budget request but was not included in the 2010 QHSR report. DHS established seven study groups for the QHSR, which were composed of officials from across DHS offices and components. The study groups were each led by a DHS official and facilitated by an independent subject matter expert from the Homeland Security Studies and Analysis Institute. These study groups conducted their analysis over a 5-month period and shared their work products, such as outlines of missions and assumptions, with other stakeholder groups in order to develop goals and objectives for each mission. At the end of the study group period, DHS senior leadership, including the Deputy Secretary of Homeland Security, the General Counsel, and office and component heads, met multiple times to review and discuss the study group recommendations. The DHS Office of Policy consolidated the study groups’ recommendations into a draft QHSR report and obtained and incorporated feedback on the draft report from other federal agencies and stakeholder groups, including the stakeholders listed in the 9/11 Commission Act, with which DHS was to consult in conducting the QHSR. 
Agreement on the QHSR report’s final content was reached between the Secretary of Homeland Security and senior White House officials. DHS issued the final QHSR report in February 2010. DHS initiated the BUR in November 2009. Each DHS directorate, component, and office created an inventory of its activities and categorized them according to the QHSR missions. For example, U.S. Immigration and Customs Enforcement (ICE) identified one of its activities as investigating human smuggling and trafficking, which it categorized under the Securing and Managing Our Borders QHSR mission. The BUR resulted in a catalog of about 1,300 DHS activities organized under each of the five QHSR missions or categorized as mission or business support activities. DHS identified over 300 potential initiatives for increasing mission performance and accountability and improving department management, derived 43 priority initiatives from this list, and highlighted them in the July 2010 BUR report. For example, under the Enforcing and Administering Our Immigration Laws mission, DHS identified as priority initiatives improving DHS’s immigration services processes and dismantling human-smuggling organizations (see fig. 2). In addition, DHS categorized its 43 BUR initiatives according to whether they require organizational, programmatic, policy, or legislative activities in order to be implemented. 
DHS defines these categories as (1) organizational, where implementation requires some type of departmental reorganization (e.g., create a cybersecurity and infrastructure resilience operational component within DHS); (2) programmatic, where implementation requires budgetary activity, such as a funding increase (e.g., increase efforts to detect and counter nuclear and biological weapons and dangerous materials); (3) policy, where implementation requires a policy decision but no additional funding (e.g., enhance the department’s risk management capability); and (4) legislative, where implementation requires a change in legislation or congressional approval because DHS does not have the legislative authority to implement the initiative (e.g., restore the Secretary’s reorganization authority for DHS headquarters). According to DHS officials, some BUR initiatives require one or more of these types of changes to be implemented, such as the initiative to strengthen internal DHS counterintelligence capabilities, which requires policy and programmatic changes. DHS’s fiscal year 2012 budget request highlighted funding requests to support projects and programs within each QHSR mission. For example, for QHSR mission 1, Preventing Terrorism and Enhancing Security, DHS’s fiscal year 2012 budget request includes requests for 18 projects and programs to support that mission. These requests include items such as $273 million for explosives detection systems at airports and $12.4 million for enhanced watchlist vetting of airline passengers. According to DHS officials, DHS intends to include longer-term project and program funding plans for QHSR missions through annual iterations of its FYHSP. For example, the Fiscal Year 2012-2016 FYHSP contains initiatives and planned performance information aligned with the missions of the department. According to the 9/11 Commission Act, DHS is to report on the results of its QHSR every 4 years with the next report due by December 31, 2013. 
DHS plans to issue its next QHSR report in accordance with the act. DHS solicited input from various stakeholder groups in conducting the first QHSR. The 9/11 Commission Act required DHS to consult with seven federal agencies in conducting the QHSR—the Departments of Agriculture, the Treasury, Justice, State, Defense, and Health and Human Services, and the Office of the Director of National Intelligence. DHS consulted with these agencies and also sought input from a range of other stakeholders, including its directorates, offices, and components; other federal agencies; and nonfederal governmental and nongovernmental entities and representatives, such as state and local governmental associations and individuals working in academia. In obtaining input from these stakeholders, DHS used a variety of mechanisms, such as multiagency working groups, solicitation of homeland security research papers, and a web-based forum, referred to as the National Dialogue, as shown in table 1. We obtained comments from 63 stakeholders whom DHS consulted through these mechanisms. The 63 stakeholders who responded to our request for comments on the QHSR process noted that DHS conducted outreach to them to solicit their views and provided opportunities for them to give input on the QHSR. DHS stakeholders, including its directorates, offices, and components, reported participating in the QHSR process by, for example, helping develop strategic outcomes and measurable end states for the QHSR missions, assigning representatives to the various QHSR study groups, and helping to draft QHSR report language. Stakeholders from 21 federal agencies other than DHS and its components that responded to our request for comments noted that they provided input during the QHSR process by, among other things, having representatives attend QHSR meetings, participating in sub-interagency policy committee meetings, and commenting on draft versions of the QHSR report. 
Additionally, 6 nonfederal stakeholders reported to us that DHS consulted with them by, for example, sending a representative to association meetings, participating in conference calls to discuss the QHSR, and holding stakeholder briefings to discuss QHSR strategic goals, outcomes, and responsibilities. DHS, QHSR stakeholders, and other entities, specifically the QRAC and NAPA, that reviewed aspects of the QHSR identified various benefits from DHS’s consultation efforts throughout the QHSR. For example, the Deputy Assistant Secretary for Policy (Strategic Plans) stated that stakeholder position paper submissions obtained at the beginning of the QHSR process were beneficial in that study groups had stakeholder input at the outset of the work. The Deputy Assistant Secretary also stated that the National Dialogue was beneficial in that it gave DHS the ability to gauge reactions to proposals for including information in the QHSR in real time, as the National Dialogue represented a virtual discussion among stakeholders. Further, 33 respondents to our request for comments on the QHSR reported that one positive aspect of DHS’s consultations during the QHSR was the range of stakeholders DHS contacted. Two DHS stakeholders reported, for example, that DHS made extensive efforts to involve a wide range of stakeholders and that involvement of federal non-DHS agencies was beneficial in helping DHS obtain views on the QHSR outside of the department. One DHS stakeholder noted that the benefit of involving state, local, and private industry in the QHSR study group discussions via the National Dialogue was that the study groups were able to systematically consider viewpoints of the public during the course of developing the QHSR mission goals and objectives. The public perspectives offered different views than those provided by DHS and other federal stakeholders. 
Similarly, 2 federal stakeholders responded that the interagency meetings and the National Dialogue were positive ways in which DHS involved stakeholders during the QHSR, and that DHS’s consultations provided a mechanism for interagency collaboration to discuss QHSR goal and objective areas. Additionally, one QRAC member noted that DHS involved and coordinated well with federal agencies; reached out reasonably well to state, local, and tribal organizations; included a large number of academics and other policy experts; and gave the American public an opportunity to comment through the National Dialogue. Moreover, in its report on the QHSR, the QRAC noted that while not privy to the details of all inputs received, the QHSR report represented a synthesis of stakeholder consultations that was designed to set forth a shared vision of homeland security in order to achieve unity of purpose across the homeland security enterprise. In addition, with regard to the National Dialogue, NAPA reported that by engaging stakeholders at all levels, DHS was able to incorporate ground-level expertise and specialized knowledge into the review. According to NAPA, by conducting a process accessible to all interested parties, the National Dialogue provided the opportunity to strengthen trust among stakeholders and create potential buy-in for later implementation of policies and priorities they helped to shape. DHS consulted with a range of stakeholders through various mechanisms, but DHS officials and stakeholders identified challenges that hindered DHS’s consultation efforts in conducting the QHSR. These challenges were (1) consultation time frames, (2) inclusion of nonfederal stakeholders, and (3) definition of stakeholders’ roles and responsibilities. According to DHS officials, the department consulted with stakeholders primarily over a 5-month period—from May through September 2009—during the QHSR process. 
In response to our request for comments on the QHSR process, 16 stakeholders noted concerns regarding the time frames they had for providing input into the QHSR or BUR. Nine DHS stakeholders, for example, responded that in their view, the limited time available for development of the QHSR did not allow DHS to have as broad and deep an engagement with stakeholders as DHS could have experienced if more time had been allotted to stakeholder consultations. DHS stakeholders also reported to us that DHS’s time frames for conducting the BUR were short and that the BUR process was hampered by an overly aggressive timeline for deliberation and decision making. Two of the study group facilitators who responded to our request for comments reported that in their view, stakeholders needed more time to review draft work products and hold more discussions. Three federal stakeholders suggested that the process be initiated earlier than it was for the first QHSR to provide more time for DHS to consider and resolve stakeholder comments, draft the report, and provide stakeholders with an opportunity to review the draft report. One of these federal stakeholders stated that more detail and other viewpoints would have been added to the QHSR if DHS had conducted outreach earlier in the QHSR process, while another noted that it was difficult to keep up with the changes in the QHSR draft report and therefore to fully participate in providing comments. There were multiple drafts and no dialogue on how the comments were incorporated, according to this stakeholder. This federal stakeholder stated that more lead time in the provision of QHSR materials would have allowed stakeholders to better consider the information and provide DHS with feedback. Two state and local associations responded that more lead time for the arrangement of meetings and a review of the complete QHSR report prior to its release would have been helpful. 
In addition, NAPA identified challenges associated with time frames for conducting aspects of the QHSR. Specifically, in its report on the National Dialogue, NAPA stated that the abbreviated turnaround time between phases of the National Dialogue—approximately 3 weeks on average—resulted in very constrained time periods for the study groups to fully review stakeholder feedback, incorporate it into the internal review process, and use it to develop content for subsequent phases. NAPA reported that for DHS to improve online stakeholder engagement, it should build sufficient time for internal review and deliberations into its timetable for public engagement on the QHSR, and provide the public an opportunity to see that it is being heard in each QHSR phase. Thus, related to the National Dialogue, NAPA recommended that DHS build a timetable that allows ample time for internal deliberations that feed directly into external transparency. According to the Deputy Assistant Secretary for Policy (Strategic Plans) at DHS, addressing NAPA’s recommendations, in general, is part of the QHSR project planning to begin during summer 2011 for the next QHSR. The official stated that DHS is considering NAPA’s recommendations and is looking for opportunities for additional stakeholder involvement during the next QHSR. DHS identified those stakeholders to be consulted and various consultation mechanisms to be used prior to initiation of stakeholder consultations, but planned the consultation time periods based on the limited time available between when the QHSR process began and when the report was due, contributing to the time frame concerns raised by the 16 QHSR stakeholders and NAPA. Our prior work on strategic studies has shown that when federal agencies are defining missions and outcomes, as DHS did in developing the QHSR report, involving stakeholders is a key practice. 
According to program management standards, stakeholder and program time management are recognized practices, among others, for operating programs successfully. Stakeholder management defines stakeholders as those whose interests may be affected by program outcomes and who play a critical role in the success of any program; it should ensure an active exchange of accurate, consistent, and timely information that reaches all relevant stakeholders. Time management is necessary for program components and entities to keep the overall program on track, within defined constraints, and produce a final product. According to the Deputy Assistant Secretary for Policy (Strategic Plans) at DHS, constrained time periods for stakeholder consultations are part of the challenge of executing a time-limited process with a broad stakeholder base, such as the QHSR. According to the Deputy Assistant Secretary, longer time periods for stakeholder consultations could be beneficial, but a tradeoff to consider is that the review as a whole would be more time-consuming. DHS officials determined time periods for consultation by planning backward from the QHSR issuance date and then building in stakeholder consultation periods for white paper solicitation and receipt, the National Dialogue, and executive committee meetings. Stakeholder consultation time frames were built into the QHSR project plan, with planned time periods such as 23 days between white paper solicitation notifications and the deadline for submissions from stakeholders, which was dictated by the December 31, 2009, issuance deadline. The National Security Staff set timelines for report review by other federal agencies, according to the Deputy Assistant Secretary. Moreover, this official said that setting target time frames for stakeholder consultations during the next QHSR is something that DHS plans to address during project planning. 
By considering ways to build more time for stakeholder consultations into the timeline or target time frames for the next QHSR, DHS could be better positioned to manage stakeholder consultations and feedback received throughout the process, including determining and communicating how much time stakeholders will be given for providing feedback and commenting on draft products. In addition, DHS could be better positioned to ensure that stakeholders have the time needed for reviewing QHSR documents and providing input. DHS consulted with a range of stakeholders, including federal and nonfederal entities, during the QHSR, and these consultations provided DHS with a variety of perspectives for consideration as part of the QHSR process. However, the department faced challenges in obtaining feedback from nonfederal stakeholders. Our prior work on key practices for performance management has shown that stakeholder involvement is important to help agencies ensure that their efforts and resources target the highest priorities. Involving stakeholders in strategic planning efforts can also help create a basic understanding among the stakeholders as to the agency’s programs and the results they are intended to achieve. Without this understanding, successful implementation can be difficult because nonfederal stakeholders help clarify DHS’s missions, reach agreement on DHS’s goals, and balance the needs of other nonfederal stakeholders who at times may have differing or even competing goals. As we have previously reported, nonfederal entities have significant roles in homeland security efforts. For example, state, local, and private sector entities own large portions of critical infrastructure in the United States and have responsibilities for responding to and recovering from homeland security incidents. Thus, we have previously reported that it is vital that the public and private sectors work together to protect these assets. 
Further, we have reported on the need for federal and nonfederal entities to more effectively communicate their emergency preparedness and response roles, responsibilities, and activities. For example, we have reported that effective public warning depends on the expertise, efforts, and cooperation of diverse stakeholders, such as state and local emergency managers and the telecommunications industry. In responding to our request for comments, 9 stakeholders commented that DHS consultations with nonfederal stakeholders, such as state, local, and private sector representatives, could be enhanced. For example, 1 stakeholder noted that state, local and private sector representatives, such as those with responsibility for securing critical infrastructure and key resources, the maritime sector, and overseas interests, should be further consulted during the next QHSR process. One federal stakeholder noted that state and local involvement is critical for homeland security and that a review of state and local readiness would be beneficial to determine the gaps that would need to be filled at the federal level. DHS could map out what state and local officials need in case of an emergency and include the various federal agencies in these discussions. Further, another stakeholder noted that DHS faced challenges in consulting specifically with the private sector during the 2010 QHSR. DHS consulted with private sector entities primarily through (1) the QRAC, whose membership was comprised of individuals from academia, nonprofit research organizations, private consultants, and nonprofit service providers and advocacy organizations; and (2) the National Dialogue. With regard to the QRAC, it met nine times during which it received information from DHS leadership regarding the QHSR design, analysis, and interim conclusions, and its members provided feedback and advice to DHS. 
However, one QRAC respondent noted that the council’s members were predominantly consultants and not representatives of industries affected by homeland security threats, such as critical infrastructure sectors, which resulted in views that were not representative of one of the most affected segments of the nonfederal homeland security community. This respondent stated that enhancing participation of private sector representatives is important for the next QHSR, as it would help DHS obtain views from entities that provide homeland security and emergency response services, such as one corporation providing water to victims after Hurricane Katrina. According to this stakeholder, private sector entities could help offer DHS views on, for example, best practices for how to prepare for and respond to homeland security events or technology enhancements for homeland security. With regard to the National Dialogue—one of the primary mechanisms used for soliciting input from nonfederal stakeholders—17 stakeholders who responded to our request for comments on the QHSR, as well as NAPA, identified challenges. As an example of comments we received from these 17 stakeholders, 1 federal stakeholder reported that the National Dialogue did not appear to have significant impact on the QHSR because in interagency meetings involving this stakeholder, information from the National Dialogue was not discussed. In an additional example, one QRAC member responded that the National Dialogue included a small number of comments from the private sector and did not reflect the significant number of stakeholders around the country with homeland security responsibilities. This respondent stated that the National Dialogue was an important exercise but was not an effective means for obtaining representative views specifically of the private sector. 
Further, as another example, one state and local association responded that in its view, DHS’s conclusions on QHSR strategy had been reached prior to initiation of the National Dialogue, making it appear to the association that although DHS was soliciting its input, the department did not view the association as playing a consultative role in the QHSR development. In addition, NAPA reported that engaging nonfederal associations, such as the National Association of Counties, did not necessarily equate to reaching out to individual nonfederal entities, such as cities and counties. Therefore, according to NAPA’s report, through the National Dialogue, DHS notified approximately 1,000 contact members of nonfederal associations in an effort to include a range of nonfederal homeland security practitioners. Based on this outreach effort, NAPA’s report recommended continuing efforts to gain significant buy-in from nonfederal associations to ensure that DHS obtains access to the nonfederal stakeholders it wishes to consult regarding the QHSR. DHS faced challenges in obtaining nonfederal input during the QHSR process for two reasons. First, convening state and local government officials for consultation, especially from individual nonfederal stakeholders, on the QHSR was a significant logistical challenge, according to DHS officials. Because of this challenge, DHS opted to consult with national associations that could represent the perspectives of state, local, and tribal homeland security stakeholders. Second, according to the Deputy Assistant Secretary for Policy (Strategic Plans) at DHS, the Federal Advisory Committee Act (FACA), which establishes standards and uniform procedures for the establishment, operation, administration, and duration of advisory committees, affected how DHS was able to consult with private sector stakeholders when developing the QHSR report. 
Specifically, the Deputy Assistant Secretary noted that the department was limited in its ability to consult with private sector groups on an ongoing basis without forming additional FACA committees specifically for conducting consultations on the QHSR. DHS was also limited in its ability to seek feedback from established FACA committees that had been convened for other purposes. The meeting schedules of those committees did not align well with the QHSR study period, and there were significant logistical challenges to scheduling additional meetings of those groups to address the QHSR. In addition, the Deputy Assistant Secretary for Policy (Strategic Plans) stated that under FACA, DHS could not invite members of established FACA committees convened for other purposes to join meetings of the QRAC for the purpose of providing advice and feedback. One study group facilitator commented that the FACA consideration significantly reduced the role that nonfederal stakeholders played in the QHSR. According to this respondent, addressing the FACA requirements and including appropriate FACA-compliant groups with a broader range of academics and others could have affected the outcome of the study group’s deliberations. However, according to the Deputy Assistant Secretary, establishing new FACA committees in addition to the QRAC, which DHS established as a FACA-compliant committee specifically for QHSR consultations, was prohibitively time-consuming within the time frames DHS had for conducting the 2010 QHSR. Four respondents to our request for comments made suggestions for alternative approaches for obtaining viewpoints of nonfederal stakeholders in future QHSRs. For example, one study group facilitator stated that state and local associations could put together a group of their members to engage in the QHSR process and be part of the study groups. 
In addition, the facilitator suggested that the National Dialogue could have posed more focused questions on specific issues, such as housing disaster resiliency, to a broad group of state and local experts. This approach could have allowed more state, local, private sector, academic, and nongovernmental organizations to participate in the QHSR process, according to the facilitator. DHS officials noted, though, that the National Dialogue was not intended to address individual initiatives, as the QHSR was intended to focus on broader homeland security issues. Further, one local government association suggested that this association could put together a crosscutting group of local officials who could discuss specific issues, such as national preparedness. In addition to alternative approaches for obtaining viewpoints of nonfederal stakeholders provided by respondents outside of DHS, one DHS stakeholder responded that it held in-person or teleconferencing meetings with numerous interest groups and associations, while another DHS stakeholder responded that the component sent emails to its stakeholder groups soliciting the groups’ views on the QHSR. Additionally, in our prior work on the Federal Emergency Management Agency’s (FEMA) process for updating the National Response Framework, we identified examples of ways in which FEMA involved nonfederal stakeholders in the process. For example, FEMA posted a spreadsheet that included the comments made by nonfederal stakeholders and the final disposition DHS assigned to each of those comments to allow stakeholders to see how DHS did or did not incorporate their comments. Further, FEMA had agency leaders appoint advisory council members who represented a geographic and substantive cross section of officials from the nonfederal community. 
Given the significant role that state and local governments and the private sector play in homeland security efforts, which is acknowledged by DHS in the QHSR report, examining mechanisms, such as those proposed by QHSR stakeholders or used by components, could help DHS include a broader segment of these representatives during the QHSR process and better position DHS to consider and incorporate, as appropriate, nonfederal concerns and capabilities related to homeland security in the next QHSR. DHS identified stakeholders’ roles and responsibilities in the QHSR report primarily by referencing other homeland security-related documents, such as the National Response Framework and National Infrastructure Protection Plan, that describe homeland security roles and responsibilities. With regard to federal agencies, the QHSR report described homeland security roles and responsibilities with brief summaries of federal agencies’ leadership roles for coordinating homeland security–related efforts. For example, the QHSR report listed the Attorney General’s responsibilities as conducting criminal investigations of terrorist acts or threats by individuals or groups, collecting intelligence on terrorist activity within the United States, and leading the Federal Bureau of Investigation, the Drug Enforcement Administration, and the Bureau of Alcohol, Tobacco, Firearms and Explosives in their respective areas of homeland security responsibilities. With regard to nonfederal stakeholders’ roles and responsibilities, the QHSR report provided summaries of roles and responsibilities, based on these and other homeland security–related documents, such as identifying that critical infrastructure owners and operators are responsible for developing protective programs and measures to ensure that systems and assets are secure from and resilient to threats. 
Our prior work has shown that agencies that work together to define and agree on their respective roles and responsibilities when implementing federal strategies that cross agency boundaries can enhance the effectiveness of interagency collaboration. In doing so, agencies clarify who will do what, organize their joint and individual efforts, and facilitate decision making. Further, our work on key characteristics for effective national strategies identified, among others, one desirable characteristic as defining the roles and responsibilities of the specific federal departments, agencies, or offices involved and, where appropriate, the different sectors, such as state, local, private, or international sectors. Inclusion of stakeholders’ roles and responsibilities in a strategy that crosses agency boundaries is useful to agencies and other stakeholders in clarifying specific roles, particularly where there is overlap, and thus enhancing both implementation and accountability. In addition, we have reported that DHS needs to form effective and sustained partnerships with a range of other entities, including other federal agencies, state and local governments, and the private and nonprofit sectors. Successful partnering involves collaborating and consulting with stakeholders to develop and agree on goals, strategies, and roles to achieve a common purpose. In responding to our request for comments on the QHSR, 10 federal stakeholders noted that the roles and responsibilities listed in the QHSR report, as derived from other documents, such as the National Response Framework and National Infrastructure Protection Plan, reflected their homeland security missions and activities. 
For example, 1 federal stakeholder responded that the roles and responsibilities listed in the QHSR report were established in previous documents and were accurate, and another federal stakeholder noted that the roles and responsibilities listed in the QHSR report were derived from previously published material for which the stakeholder had provided input. However, DHS and 10 other respondents to our request for comments noted that the department could strengthen its definition of homeland security roles and responsibilities in the next QHSR by better reflecting the range of the stakeholders’ roles and responsibilities. Specifically, in the QHSR report DHS identified the need to better assess stakeholders’ homeland security roles and responsibilities, noting that although the report was not intended to describe stakeholders’ roles and responsibilities, the division of operational roles and responsibilities among federal departments and agencies for various homeland security goals and objectives emerged as a major area requiring further study following the QHSR report. DHS reported that an analysis of roles and responsibilities across the homeland security missions would help resolve gaps or unnecessary redundancies between departments and agencies going forward. Further, 10 stakeholders commented to us that the definitions of roles and responsibilities in the QHSR report could be enhanced to better reflect the range of homeland security stakeholders’ responsibilities. For example, 3 federal stakeholders reported that roles and responsibilities definitions in the QHSR could be enhanced by, for example, recognizing the variety of agency or administration-level responsibilities of the cabinet departments. 
In particular, one of these federal stakeholders suggested that the next QHSR may want to include a more detailed delineation of the roles and responsibilities of departments to support the homeland security enterprise by (1) reflecting the broad nature of responsibilities across a broad spectrum of threats and (2) identifying readiness and resource requirements to address the stated roles and responsibilities. The brief narrative on roles and responsibilities in the 2010 QHSR report presented a shortened version of the roles and responsibilities that the federal stakeholder has in supporting the homeland security enterprise. Another federal stakeholder noted that formalizing the process to elicit values and judgments from individual agencies would help ensure adequate representation of each agency’s role in the next version of the QHSR report. The formalized process, according to the federal stakeholder, would convene agency officials and facilitate a discussion, resulting in a common understanding of how agency roles and responsibilities are defined for executing the QHSR strategy. In our December 2010 report on the extent to which the QHSR report addressed reporting elements that the 9/11 Commission Act specified for the report, we noted that DHS partially addressed two reporting elements for the QHSR report related to roles and responsibilities for homeland security stakeholders. These elements were for the QHSR report to include a discussion of the status of (1) cooperation among federal agencies in the effort to promote national security and (2) cooperation between the federal government and state, local, and tribal governments in preventing terrorist attacks and preparing for emergency response to threats to national homeland security. 
With regard to the first element, we reported that although the QHSR and BUR reports discussed homeland security roles and responsibilities for federal agencies, they did not discuss cooperation on homeland security efforts among federal agencies other than DHS. We reported that while the QHSR discussion of roles and responsibilities as found in other documents was helpful for understanding which federal agencies lead particular homeland security efforts, the QHSR report did not provide a description of how federal agencies cooperate with one another in addressing homeland security efforts. With regard to the second element, we reported that although the QHSR and BUR reports provided descriptions of cooperation between DHS and state, local, and tribal governments, they did not discuss the status of cooperation between other federal agencies that have homeland security responsibilities and state, local, and tribal governments. DHS officials stated that DHS solicited comments from other federal departments and state, local, and tribal governments on the role and responsibility descriptions for each of these entities listed in the QHSR report. According to the Deputy Assistant Secretary for Policy (Strategic Plans) at DHS, during the QHSR process the department did not attempt to discuss the status of cooperation among other federal departments and between other federal departments and state, local, and tribal governments. DHS officials stated that the department viewed such a discussion as outside its authority to conduct and that those discussions were conducted in other venues, such as the National Infrastructure Protection Plan and the National Response Framework. Because the National Response Framework and the National Infrastructure Protection Plan were completed during DHS’s launch of the QHSR, in 2008, use of those definitions in the QHSR was appropriate, according to the official. 
DHS did not obtain comments from all stakeholders on the definitions listed in the QHSR report, but looked at stakeholder comments on roles and responsibilities received during the National Infrastructure Protection Plan and National Response Framework drafting processes. The definitions listed in the QHSR report were also shared with DHS’s Office of Intergovernmental Affairs, which solicited comments from stakeholders, as necessary, based on any roles that may have changed since the National Infrastructure Protection Plan and National Response Framework were published, according to the official. In its May 2010 report on the QHSR, the QRAC noted that the QHSR report included a summary of roles and responsibilities of key stakeholders that was derived from existing statutes, among other documents. However, according to the QRAC report, the QHSR report did not provide a mapping of these roles and responsibilities to the QHSR missions and further work was required to deconflict and potentially supplement existing homeland security stakeholder role and responsibility policies and directives. According to the QRAC report, the QHSR report was designed to create a shared vision of homeland security in order to achieve unity of purpose; a vital next step was the delineation of key roles and responsibilities for individual QHSR goals and objectives to generate unity of effort. A comprehensive mapping of stakeholder roles and responsibilities to QHSR missions, goals, and objectives was needed to (1) enable assessment of the current state of cooperation and coordination between all public and private sector stakeholder communities; (2) identify potential gaps, conflicts, or both in current policies and directives from an enterprise perspective; and (3) underpin follow-on planning efforts. The QRAC recommended that DHS map goals to objectives for each core QHSR mission and key stakeholder communities to delineate the stakeholders’ respective roles and responsibilities. 
In response to this recommendation, DHS plans to map the existing QHSR mission goals and objectives to stakeholder roles and responsibilities during the pre-execution year for the next QHSR, if possible. This preparatory mapping would then allow DHS, at the end of the QHSR process, to map roles and responsibilities to the final QHSR goals and objectives developed during the next review. Consistent with the QRAC’s recommendation and DHS’s planned actions, by seeking to further define homeland security stakeholders’ roles and responsibilities in the next QHSR, DHS could be better positioned to identify, understand, and address any potential gaps in roles and responsibilities or areas for additional or enhanced cooperation and coordination. Through the QHSR, DHS identified various threats confronting homeland security but did not conduct a risk assessment for the QHSR. In the 2010 QHSR report, DHS identified six threats confronting homeland security, such as high-consequence weapons of mass destruction and illicit trafficking and related transnational crime, as well as five global challenges, including economic and financial instability and sophisticated and broadly available technology. According to the QHSR report, these threats and challenges were the backdrop against which DHS planned to pursue its homeland security efforts. The threats and global challenges listed in the QHSR report were developed through discussions with federal national security officials and through reviews of intelligence community materials, according to DHS officials. Multiple DHS guidance documents emphasize the importance of considering risk assessment information when engaging in strategic decisions. For example, DHS’s Integrated Risk Management Framework (IRMF), published in January 2009, calls for DHS to use risk assessments to inform DHS-wide decision-making processes. 
Risk assessments, which include assessing and analyzing risk in terms of threats, vulnerabilities, and consequences of a potential homeland security incident, are the foundation for developing alternative strategies for managing risk, according to the IRMF. Similarly, the QHSR report includes an objective for DHS to establish an approach for national-level homeland security risk assessments, specifically calling for development and implementation of a methodology to conduct national-level homeland security risk assessments. Our prior work on federal strategic studies has also found that establishing an analytic framework to assess risks is a key aspect of developing a strategy to address national problems, such as homeland security. Consistent with the IRMF, we define risk assessment as a qualitative determination, a quantitative determination, or both of the likelihood of an adverse event occurring and the severity, or impact, of its consequences. DHS has called for the use of national risk assessments for homeland security but did not conduct such an assessment as part of the 2010 QHSR. DHS officials stated that at the time DHS conducted the QHSR, DHS did not have a well-developed methodology or the analytical resources to complete a national risk assessment that would include likelihood and consequence assessments. The QHSR terms of reference, which established the QHSR process, also stated that at the time the QHSR was launched, the homeland security enterprise lacked a process and a methodology for consistently and defensibly assessing risk at a national level and using the results of such an assessment to drive strategic prioritization and resource decisions. In recognition of a need to develop a national risk assessment methodology, the QHSR National Risk Assessment Study Group was created as part of the QHSR process. 
In establishing the study group, the QHSR Terms of Reference stated that assessing national risk was a fundamental and critical element of an overall risk management process, with the ultimate goal of improving the ability of decision makers to make rational judgments about tradeoffs between courses of action to manage homeland security risks. The QHSR National Risk Assessment Study Group consulted with subject matter experts from the federal government, academia, and the private sector and, in October 2009, produced the Homeland Security National Risk Assessment (HSNRA) methodology, which established a process for conducting a national risk assessment in the future. According to DHS officials, because the HSNRA methodology was developed as part of the QHSR process and finalized as the QHSR report was being completed in late 2009, it was not intended to be implemented during the 2010 QHSR. The HSNRA is to provide a methodology for assessing risk across a range of hazards that DHS can use in its decisions on strategy and policy development, planning priorities, resource allocation, and capability requirements development. The HSNRA includes definitions or descriptions of the scope of incidents it applies to, its risk formula, and its likelihood and consequence terms (see table 2). Outputs from the HSNRA calculations could be expressed in a number of ways, such as plotting scenarios on a two-dimensional graph with scenario frequency estimates on the x axis and scenario consequence estimates on the y axis, as shown in figure 3. In accordance with the QHSR goal of implementing a national risk assessment and with issuance of Presidential Policy Directive 8, which calls for risk analysis across a range of homeland security threats, DHS is planning to conduct a national risk assessment as part of its next QHSR. 
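To make this kind of output concrete, the sketch below scores and ranks hypothetical scenarios using a simple expected-loss formula (frequency times consequence). The scenario names, frequency estimates, consequence figures, and the formula itself are illustrative assumptions, not elements of the actual HSNRA methodology.

```python
# Illustrative sketch only: the report describes the HSNRA at a high level,
# so the risk formula below (frequency x consequence) and all scenario data
# are assumptions made for demonstration, not the actual HSNRA methodology.

def risk_score(frequency, consequence):
    """Expected-loss style risk: estimated annual frequency times consequence."""
    return frequency * consequence

# Hypothetical scenarios: (name, annual frequency estimate, consequence in $M)
scenarios = [
    ("Scenario A", 0.25, 500.0),
    ("Scenario B", 0.125, 8000.0),
    ("Scenario C", 0.5, 40.0),
]

# Rank scenarios by risk, mirroring how points on a frequency-versus-consequence
# plot could be compared to inform prioritization.
ranked = sorted(scenarios, key=lambda s: risk_score(s[1], s[2]), reverse=True)
for name, freq, cons in ranked:
    print(f"{name}: frequency={freq}, consequence={cons}, risk={risk_score(freq, cons)}")
```

A single-number score of this kind collapses the distinction the two-dimensional plot preserves: a rare, high-consequence scenario and a frequent, low-consequence scenario occupy opposite corners of the graph even when their products are similar, which is one reason the plotted form can be more informative for decision makers.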
In determining how to conduct a national risk assessment, DHS is considering various factors, such as how to incorporate and use an assessment’s results, the time frames and costs for conducting an assessment, and what alternatives exist to conducting a national assessment.

Use of national risk assessment results. DHS officials stated that one consideration in determining how to conduct a national risk assessment is the manner in which the department would use the results of such an assessment to inform the QHSR. Specifically, DHS officials told us that a national risk assessment, such as the HSNRA, should be one of multiple inputs considered in conducting the QHSR, with other inputs including such factors as privacy and civil liberties concerns, economic interests, and administration priorities. DHS’s risk assessment guidance makes a similar point, stating that risk information is usually one of multiple factors decision makers consider and is not necessarily the sole factor influencing a decision. There may be times when the strategy selected and implemented does not optimally reduce risk, and decision makers should consider all factors when selecting and implementing strategies. The National Research Council of the National Academies also reported that risk analysis is one input to decision making, designed to inform decisions but not dictate outcomes. DHS officials noted that it would be important to communicate this to its stakeholders, including Congress, the public, and others, to manage any expectations that QHSR decisions would be solely based on risk assessment results.

National risk assessment time frames and costs. According to DHS officials, the HSNRA would require 12 months to complete and would need to be completed before launching the next QHSR, since the assessment would help frame how the QHSR missions are defined. If the next QHSR is conducted during fiscal year 2013 and reported by December 2013, as anticipated by DHS, the HSNRA would need to be completed during fiscal year 2012 to help inform the QHSR, according to DHS officials. With regard to financial costs, DHS officials estimated that conducting the HSNRA over a 12-month period would cost from $3 million to $6 million. DHS’s Deputy Assistant Secretary for Policy (Strategic Plans) stated that the HSNRA is a sound methodology that should be used as part of the next QHSR, and officials within DHS’s unit responsible for developing the HSNRA, the Office of Risk Management and Analysis, stated that the benefits of having risk information available for input into developing the QHSR are worth the costs.

National risk assessment alternatives. In order to identify risks and inform mission areas for the next QHSR, DHS could consider alternatives to conducting a national risk assessment, according to DHS officials. These officials stated that one alternative approach would involve using segments of the HSNRA process to help provide risk information to department decision makers, such as eliciting expert judgments and surveying nonfederal experts about perspectives on the risks DHS should address. The officials stated that this approach would not be as useful as a complete HSNRA because a full HSNRA provides likelihood and consequence estimates for various homeland security incident scenarios, which offers a more complete picture of the risks DHS must address. Another approach, according to the officials, would be to identify risks through existing DHS analyses, such as the Homeland Security Threat Assessment or the National Planning Scenarios. The officials stated that identification of risks through these tools would also be limited and would not be as effective as completing the HSNRA. 
For example, the HSNRA includes likelihood estimates for scenarios, which these other tools do not include, and therefore provides a more complete picture of risk by addressing threats, likelihoods, and consequences. Consistent with DHS’s plans, a national risk assessment conducted in advance of the next QHSR could assist DHS in developing QHSR missions that target homeland security risks and could allow DHS to demonstrate how it is reducing risk across multiple hazards. DHS considered various factors in identifying high-priority BUR initiatives for implementation in fiscal year 2012 but did not include risk information as one of these factors. Through the BUR, DHS identified 43 initiatives aligned with the QHSR mission areas to help strengthen DHS’s activities and serve as mechanisms for implementing those mission areas. According to DHS officials, the department could not implement all of these initiatives in fiscal year 2012 because of, among other things, resource constraints and organizational or legislative changes that would need to be made to implement some of the initiatives. In identifying which BUR initiatives to prioritize for implementation in fiscal year 2012, DHS leadership considered (1) “importance,” that is, how soon the initiative needed to be implemented; (2) “maturity,” that is, how soon the initiative could be implemented; and (3) “priority,” that is, whether the initiative enhanced secretarial or presidential priorities. Component leadership officials, as subject matter experts, completed a survey instrument indicating their assessment of each BUR initiative based on these criteria. The results were then aggregated and presented to DHS’s Program Review Board—which is the body that oversees DHS program reviews and the budgeting process. With the Deputy Secretary’s leadership, the Program Review Board evaluated the results of the survey and refined the prioritization. 
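As a rough illustration of the survey-aggregation step, the sketch below averages hypothetical component ratings on the three criteria DHS used (importance, maturity, and priority). The 1-to-5 scale, the initiative names, the sample ratings, and the averaging rule are all assumptions made for illustration; the report does not describe the actual scoring scheme.

```python
# Hypothetical sketch: components rate each initiative on the three criteria
# DHS used (importance, maturity, priority). The 1-5 scale, the sample
# ratings, and the simple averaging rule are assumptions for illustration.
from statistics import mean

# ratings[initiative] = one (importance, maturity, priority) tuple per
# responding component, on an assumed 1 (low) to 5 (high) scale
ratings = {
    "Initiative X": [(5, 4, 5), (4, 4, 5)],
    "Initiative Y": [(4, 3, 4), (5, 3, 4)],
}

def aggregate(component_scores):
    """Average each criterion across components, then average the criteria."""
    per_criterion = [mean(criterion) for criterion in zip(*component_scores)]
    return mean(per_criterion)

# Present initiatives in descending aggregate order, as input for leadership
# review rather than as a final ranking.
ranked = sorted(ratings, key=lambda name: aggregate(ratings[name]), reverse=True)
```

In DHS’s process, such aggregated results were a starting point rather than a mechanical ranking: the Program Review Board evaluated the survey results and refined the prioritization.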
The BUR initiative prioritization process resulted in the Secretary and Deputy Secretary of Homeland Security ranking and selecting 14 high-priority BUR initiatives to be implemented in fiscal year 2012, as shown in table 3. We and DHS have called for the use of risk information in making prioritization, resource, and investment decisions. For example, DHS’s IRMF states that DHS is to use risk information to inform strategies, processes, and decisions to enhance security and to work in a unified manner to manage risks to the nation’s homeland security. The IRMF states that one of its objectives is to use an integrated risk management process to inform resource allocations on a departmentwide basis, which is critical to balance resources across the set of DHS strategic objectives. Likewise, our prior work has shown the importance of using risk information to inform resource prioritization decisions. For example, our risk management approach advises using risk information to inform resource allocation decisions so that management can consider which risks should be managed immediately and which risks can be deferred and addressed at a later time. According to DHS officials, using risk information as an input into DHS’s prioritization of the initiatives was difficult for several reasons. For example, the BUR initiatives were highly differentiated, making comparisons based on risks the initiatives address impossible, according to DHS officials. Some of the BUR initiatives focus on organizational changes at DHS; others are extremely broad, addressing multiple and overlapping risks; and others focus on specific risks. For example, comprehensive immigration reform is a broad BUR initiative, addressing broad illegal immigration risks, while promoting safeguards for access to secure areas in critical facilities targets more specific risks. 
According to the officials, the variance in how the initiatives were defined allowed DHS to align the initiatives with the QHSR strategy; considering such variance, in addition to the risks addressed by QHSR implementation mechanisms such as BUR initiatives, would be important in defining implementation mechanisms and initiatives for future QHSRs. However, DHS could not apply its existing risk assessment tools to evaluating and prioritizing BUR initiatives for the 2010 QHSR. For future QHSRs, DHS officials described several characteristics of mechanisms for implementing QHSR missions that would enable risk information to be used among prioritization criteria. First, the implementation mechanisms or initiatives to be prioritized based on risk information should be comparable in terms of the nature of the risks addressed. For example, comparing mechanisms to address DHS organizational changes that do not directly reduce homeland security risks with mechanisms that are designed to directly prevent terrorism risks would be an inappropriate comparison. Second, expected outcomes of the mechanisms or initiatives should be defined so that the risks reduced by the mechanisms can be estimated. For example, the BUR initiatives do not indicate the degree to which investments will change DHS’s security capabilities. Knowing the increase (or decrease) in security capabilities associated with an implementation mechanism would allow estimates of risks reduced, which could be compared in prioritization efforts. Third, an implementation mechanism or initiative should have a “line of sight” directly between the DHS activities associated with the mechanism and the risk reduced by those activities. In other words, according to the officials, DHS operations need to be closely aligned with identified risk reductions in order for risk reduction calculations to be made accurately. For example, U.S. 
Border Patrol efforts to stop illegal border crossings are closely aligned with reducing risks of illegal immigration. DHS officials stated that although existing DHS risk assessment tools could not be used to systematically prioritize the BUR initiatives for the 2010 QHSR, there is utility in thinking qualitatively about risks addressed by the initiatives when making future prioritization decisions. Risk information should not be the sole input but should be considered along with other criteria, according to Office of Risk Management and Analysis officials and the Deputy Assistant Secretary for Policy (Strategic Plans). DHS has various tools that could, with some limitations, provide risk information for consideration when prioritizing implementation of QHSR mission objectives, as shown in table 4. Two tools, the Risk Analysis Process for Informed Decision-making (RAPID) and the methodology for conducting the HSNRA, were created to provide risk information for decision making across DHS mission areas. Risk analyses conducted within DHS components could also provide risk information useful for prioritizing QHSR implementation mechanisms, according to DHS officials. The officials stated that at least five current risk assessments used by DHS components could be useful for prioritizing QHSR implementation efforts within the mission areas relevant to the risk assessment tools. DHS officials stated that there are benefits to considering risk information in resource allocation decisions; however, DHS has not yet examined the extent to which risk information could be used when implementing subsequent QHSRs. Consideration of risk information could help strengthen DHS’s prioritization of mechanisms for implementing the QHSR, including determining which initiatives or programs should be implemented in the short or longer term and the resources required for implementation. 
Such information could also help the department to more effectively make decisions about implementing initiatives and allocating resources across initiatives that address different levels and types of risks. DHS is managing and monitoring its implementation of BUR initiatives primarily through its budget development and execution process, called the Planning, Programming, Budgeting, and Execution (PPBE) process. The objective of the PPBE process is to articulate DHS’s goals, objectives, and priorities; align DHS programs with those goals; guide the development of the department’s budget request; and set guidelines for implementing the current budget. To manage implementation of the BUR initiatives, beginning with the fiscal year 2012 budget request, DHS officials told us that DHS developed implementation plans for each of the 43 BUR initiatives during the planning phase of the PPBE process (see fig. 4). DHS assigned a directorate, component, or office to lead departmentwide implementation efforts for each initiative, including developing the implementation plans. According to DHS, each implementation plan included what needs to be done to accomplish the BUR initiative, what is currently being done to address identified implementation problems, a description of stakeholders involved in the implementation effort, and a discussion of next steps. DHS initiative leads submitted BUR implementation plans to the department for review and discussion during the fiscal year 2012 budget development process and the fiscal years 2012-2016 budget review process, and these plans served as a basis for components to develop their Resource Allocation Plans (RAP)—components’ descriptions of funding needs for fiscal year 2012. 
Figure 4 also outlines the key activities and outputs of the subsequent PPBE phases. In the programming phase, DHS is to translate planning priorities into a 5-year resource and performance plan (the Future Years Homeland Security Program) and allocate limited resources to best meet the prioritized needs; the outputs are the Secretary’s Resource Allocation Decisions and the DHS Future Years Homeland Security Program. In the budgeting phase, DHS is to develop detailed budget estimates of approved resource plans for budget year justification and presentation and work with the Office of Management and Budget and Congress to get a budget enacted; the outputs are the budget request sent to the Office of Management and Budget and Congress, and briefing and information sent to Congress. In the execution phase, the DHS Chief Financial Officer monitors accountability and execution of budget authority, reports results, and makes recommendations on realigning resources, and funds are apportioned to the directorates and components in accordance with apportionment guidelines; the outputs are Monthly Budget Execution Reports, the Midyear Review, congressionally directed reports, the Annual Financial Report, and the Annual Performance Report.

DHS plans to implement the BUR initiatives primarily through components’ existing programs and activities. For example, DHS plans to implement the strengthen aviation security BUR initiative through its existing aviation security programs, such as checking airline passengers against watchlists and screening passengers at airports. Through the PPBE process for fiscal year 2012, DHS requested additional funding for select BUR initiatives, above base funding for programs and activities that support those initiatives. For example, under the Improve Detention and Removal Process BUR initiative, DHS requested about $222 million in increased funding to support its existing Secure Communities program and effort to rightsize detention bed space. 
In addition to the increased funding requested for select BUR initiatives, according to DHS, the department planned to fund existing programs and activities that support the other BUR initiatives through its base funding. For example, the Domestic Nuclear Detection Office stated that it plans to fund its programs that support the BUR initiative to increase efforts to detect and counter nuclear and biological weapons and dangerous materials through its base funding. To monitor implementation of the BUR initiatives, DHS established scorecards as part of its Integrated Planning Guidance—which DHS developed during the planning phase of the fiscal year 2013 PPBE process to provide guidance to DHS components for the programming and budgeting phases. The scorecards depict the status of implementing BUR initiatives, including, among other things, whether DHS requested funding for BUR initiatives in fiscal year 2012 or plans to request funding in future years. The scorecards also allow DHS to periodically assess progress made on implementing individual BUR initiatives and the status of BUR implementation as a whole. For those BUR initiatives for which the department did not identify specific funding needs in future years, DHS officials told us that they meet with DHS components and directorates during midyear budget reviews to discuss progress made toward implementing the initiatives. In addition, DHS officials told us that because the BUR initiatives reflect existing DHS priorities, the initiatives are monitored through the Secretary of Homeland Security's discussions with component and directorate leadership, such as discussions on progress being made on a particular BUR initiative like strengthening aviation security. DHS has taken action to develop and strengthen its performance measures, including linking them to QHSR missions and goals and ensuring limited overlap among measures. 
While DHS has not developed performance measures for all QHSR missions, goals, and objectives, it has efforts under way to develop measures that address them. Our prior work on key practices for performance measurement has shown that measuring performance allows organizations to track the progress they are making toward their goals and gives managers critical information on which to base decisions for improving their performance. We also have previously reported on attributes of successful performance measures, including ensuring that measures are linked to agencies' missions and goals. Since issuance of the QHSR report, DHS has undertaken efforts to develop new performance measures and link its existing measures to the QHSR missions and goals. These efforts included DHS providing guidance to components that outlines how to assess QHSR missions, goals, and objectives and achievement of QHSR outcomes. DHS also provided components with performance measure development training and formed working groups to discuss performance measurement best practices. To support these efforts, in 2010, we provided technical assistance to DHS and its components as they developed and revised their performance measures to align with the strategic missions and goals of the QHSR. Our feedback ranged from pointing out components' limited use of outcome-oriented performance measures to assess the results or effectiveness of programs, to raising questions about the steps taken by DHS or its components to ensure the reliability and verification of performance data. While we offered advice on best practices for performance measurement and developing outcome-oriented measures, we did not suggest specific performance measures or targets or recommend methodologies for collecting, analyzing, and reporting performance measure data. 
Accordingly, there was no expectation that we and DHS would reach agreement on the performance measures; decisions related to performance measures were fundamentally an executive branch management responsibility. In response to this feedback and its internal review efforts, DHS took action to develop and revise its performance goals and measures to strengthen its ability to assess its outcomes and progress in key mission areas. In DHS's fiscal years 2010-2012 Annual Performance Report, DHS identified 57 new performance measures for fiscal year 2011 and retained 28 measures from the fiscal year 2010 measure set, and it is refining the methodologies for additional measures that the department plans to implement in fiscal year 2012. DHS's actions to strengthen its performance measures have helped the department link its measures to QHSR missions, goals, and objectives. DHS has not yet developed performance measures for all of the QHSR goals and objectives but has plans to do so. Specifically, DHS has established new performance measures, or linked existing measures, for 13 of 14 QHSR goals and for 3 of the 4 goals in the sixth category of DHS activities—Providing Essential Support to National and Economic Security. DHS reported these measures in its fiscal years 2010-2012 Annual Performance Report. At the time that report was issued, DHS had not yet developed performance measures for QHSR Goal 2.3, Disrupt and Dismantle Transnational Criminal Organizations, or for one of the goals in its sixth category of activities—Provide Specialized National Defense Capabilities. Since then, however, DHS officials told us that the department has developed performance measures for these goals and plans to publish them in its budget justification to Congress upon approval of the measures by DHS leadership and the Office of Management and Budget. 
Further, within QHSR Goal 4.2, Promote Cybersecurity Knowledge and Innovation, DHS has not yet developed measures for two of the three objectives—foster a dynamic workforce and invest in innovative technologies, techniques, and procedures. DHS officials told us that the department is collaborating with the Office of Personnel Management on a multiyear effort to identify competencies and more accurately gauge workforce needs for cybersecurity professionals and is working to develop a measure related to innovative technologies that have been developed and deployed. Homeland security includes a vast range of mission areas—from preventing terrorism to securing U.S. borders, safeguarding cyberspace, and ensuring resilience to disasters. It also involves a wide variety of stakeholders and partners, including federal departments and agencies; state, local, and tribal governments; and nongovernmental entities, including the private sector. Given the scope and magnitude of the homeland security enterprise, it is important for the federal government to set clear goals, objectives, and priorities for securing the United States and making resource allocation decisions. DHS’s 2010 QHSR—the department’s first quadrennial review—was a massive undertaking to review the nation’s homeland security strategy and identify homeland security missions and organizational objectives. It involved the input of numerous stakeholders with homeland security roles and responsibilities, including other federal agencies, state and local government entities, and academics. DHS plans to initiate its next QHSR in fiscal year 2013 and to report on that review’s results in fiscal year 2014. In conducting this next review, DHS could leverage lessons learned from the 2010 QHSR to strengthen its planning and risk management efforts. 
Specifically, given the array of federal and nonfederal stakeholders involved in implementing homeland security missions, building more time for obtaining stakeholders' feedback and input and examining additional mechanisms to obtain nonfederal stakeholders' input could strengthen DHS's planning and management of stakeholder consultations and better position it to obtain, review, and incorporate, as appropriate, stakeholders' feedback. Risk assessment in the homeland security realm is an evolving field, although DHS has developed methodologies, human capital, and departmental policies for integrating risk information into DHS decision-making processes. Such information can help decision makers identify and assess homeland security threats and vulnerabilities facing the nation and evaluate strategies for mitigating or addressing those threats and vulnerabilities. Using existing risk assessment tools could assist DHS in prioritizing QHSR implementation mechanisms. Specifically, examining the extent to which risk information could be used to help prioritize implementation mechanisms for the next QHSR could help DHS determine how to incorporate and use such information to strengthen prioritization and resource allocation decisions. To strengthen DHS's planning, management, and execution of the next QHSR, we recommend that the DHS Assistant Secretary for Policy take the following three actions:
- Provide more time for consulting with stakeholders during the QHSR process to help ensure that stakeholders are provided the time needed to review QHSR documents and provide input into the review, and build this time into the department's project planning for the next QHSR. 
- Examine additional mechanisms for obtaining input from nonfederal stakeholders during the QHSR process, such as whether panels of state, local, and tribal government officials or components' existing advisory or other groups could be useful, and use them for obtaining nonfederal stakeholders' input, as appropriate, during the next QHSR.
- Examine the extent to which risk information could be used as one input to prioritize QHSR implementing mechanisms, including reviewing the extent to which the mechanisms could include characteristics, such as defined outcomes, to allow for comparisons of the risks addressed by each mechanism. To the extent that DHS determines that risk information could be used, consider such information as one input into the decision-making process for prioritizing the QHSR implementation mechanisms.
We requested comments on a draft of this report from DHS. On September 12, 2011, DHS provided written comments, which are reprinted in appendix III. DHS concurred with our three recommendations and described actions planned to address them. With regard to our first recommendation that DHS provide more time for consulting with stakeholders during the QHSR process and build this time into the department's project planning for the next QHSR, DHS stated that it would endeavor to incorporate increased opportunities and time for stakeholder engagement during the next QHSR. Regarding our second recommendation that DHS examine additional mechanisms for obtaining input from nonfederal stakeholders during the QHSR process and use them for obtaining nonfederal stakeholders' input, DHS stated that it will examine using panels of state, local, and tribal government officials and existing advisory groups to obtain input. 
With regard to our third recommendation that DHS examine the extent to which risk information could be used as one input into prioritizing QHSR implementing mechanisms and consider such information, if appropriate, when prioritizing QHSR implementation, DHS stated that it intends to conduct risk analysis specific to the QHSR in advance of the next review. DHS stated that it plans to consider the results of such analysis, along with other factors, as an input into decision making related to QHSR implementation. DHS also provided technical comments, which we incorporated as appropriate. We also requested comments on a draft of this report from the Departments of Agriculture, Defense, Health and Human Services, State, the Treasury, and Justice and the Office of the Director of National Intelligence. The Department of Defense provided technical comments, which we incorporated as appropriate. In e-mails received from departmental liaisons on September 7, 2011, the Departments of Agriculture, State, and Justice indicated that they had no comments on the report. In e-mails received on September 7, 2011, from the Department of the Treasury's Director for Emergency Programs and the Department of Health and Human Services' Office of the Assistant Secretary for Legislation, both departments indicated that they had no comments on the report. In an e-mail received on September 9, 2011, from a departmental liaison, the Office of the Director of National Intelligence indicated that it had no comments on the report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 20 days from the report date. At that time, we will send copies to the Secretaries of Agriculture, Defense, Health and Human Services, Homeland Security, State, and the Treasury; the Attorney General; the Director of National Intelligence; and selected congressional committees. 
The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. The Department of Homeland Security's (DHS) strategic documents, such as component strategic plans and budget requests, align with the Quadrennial Homeland Security Review (QHSR) missions. The May 2010 National Security Strategy (NSS) also identifies strategic elements related to homeland security that are identified in the QHSR report, such as similar listings of homeland security threats. Each of the DHS strategic documents we reviewed includes language explicitly aligning at least some aspects of the strategy with the QHSR report, as shown in table 5. According to DHS officials, DHS does not have an explicit policy that strategic documents, such as component strategic plans, be consistent with the missions, goals, and objectives listed in the QHSR report. However, such consistency is expected by DHS senior management, according to the officials. We also identified 17 references to homeland security within the NSS that relate to DHS responsibilities. While not explicitly linked to the QHSR report in the NSS document, each of the 17 statements links to aspects of the QHSR report, such as homeland security threats identified or specific QHSR goals or objectives. The objectives for this report were to evaluate the extent to which the Department of Homeland Security (DHS) (1) consulted with stakeholders in developing the Quadrennial Homeland Security Review (QHSR) strategy; (2) conducted a national risk assessment to develop the QHSR; and (3) developed priorities, plans, monitoring mechanisms, and performance measures for implementing the QHSR and Bottom-Up Review (BUR) initiatives. 
To address our objectives, we analyzed DHS documents related to the QHSR, BUR, and budget development processes, including the QHSR report, BUR report, fiscal year 2012 budget request, and Fiscal Years 2012-2016 Future Years Homeland Security Program. We identified criteria for evaluating these processes by analyzing our prior reports on key characteristics of effective national strategies, key practices for effective interagency collaboration, strategic planning, performance measurement, and standards for internal control, among others. For a listing of these prior reports, see the related products listed at the end of this report. Based on these reports, we identified those key practices and characteristics applicable to quadrennial reviews, like the QHSR. The key practices we identified were involving stakeholders in defining QHSR missions and outcomes; defining homeland security problems and assessing risks; including homeland security strategy goals, subordinate objectives, activities, and performance measures; including resources, investments, and risk management; including organizational roles, responsibilities, and coordination across the homeland security enterprise; and establishing a DHS process for managing implementation of BUR initiatives. We vetted the key practices with our subject matter experts—staff with legal and methodological expertise and experience analyzing the Quadrennial Defense Review—and provided them to DHS officials for review and incorporated their comments as appropriate. As we developed our report, we grouped these key practices into three areas—stakeholder involvement, risk assessment, and implementation processes for the QHSR and BUR initiatives. Because respondents volunteered information about their views on the QHSR, we do not know the extent to which other officials within the same organizations shared these views. 
However, respondents' comments provided insights into stakeholder perspectives on how QHSR stakeholder consultations were conducted and how they could be improved. Further, we reviewed reports on the QHSR by the National Academy of Public Administration (NAPA) and the QRAC, both of which were based upon each organization's collaboration experiences with DHS in developing the QHSR report. During the QHSR, NAPA partnered with DHS to conduct three National Dialogues, which allowed any member of the public to review draft QHSR material and provide online suggestions for the QHSR. According to the QRAC's report, the QRAC served as a forum in which committee members, who were nonfederal representatives, shared independent advice with DHS on the QHSR process. We compared DHS's stakeholder consultation efforts to our prior work on effective practices for collaboration and consultation. For example, based on a key practice in federal agency collaboration, we analyzed the extent to which DHS worked with stakeholders to establish agency roles and responsibilities when developing the QHSR. To determine the extent to which DHS conducted a national risk assessment to develop the QHSR, we analyzed risk analysis–related documents produced as part of the QHSR process, such as DHS risk assessment tools, and interviewed DHS officials responsible for developing risk analyses for use at DHS. We compared DHS's risk assessment process in the QHSR to our prior work on key characteristics for risk assessment as well as DHS risk analysis guidance documents. For example, we reviewed our previous reports on key practices in risk management, including risk assessment approaches, and compared them to DHS's effort to develop a national risk assessment methodology. In addition, we reviewed DHS guidance for use of risk assessment information and compared the guidance with DHS's QHSR risk assessment process. 
To determine the extent to which DHS developed priorities, implementation plans, monitoring mechanisms, and performance measures, we analyzed DHS’s BUR implementation priorities and plans, such as DHS’s fiscal year 2012 budget request; monitoring mechanisms, such as BUR initiative scorecards; and DHS’s performance measures. We also interviewed DHS officials responsible for managing and monitoring the implementation of the BUR initiatives. We compared DHS’s processes for prioritizing, monitoring, and measuring implementation efforts to our prior work on key practices for risk management and implementation and monitoring of strategic initiatives. For example, we identified practices in our past reports and DHS guidance for using risk information in resource prioritization decisions and compared DHS’s efforts to prioritize and implement the QHSR strategy with those practices. We also compared DHS’s strategic-level performance measures for fiscal year 2011 to our criteria on key attributes of successful performance measures. Because DHS focused on aligning its performance measures with QHSR missions, we selected three key attributes of successful performance measures that were most relevant—linkage, core program activity, and limited overlap. In applying the attributes, we analyzed documentation, such as the QHSR report and DHS’s fiscal years 2010-2012 Annual Performance Report. We also interviewed DHS officials who are involved in overseeing the development and reporting of DHS performance measures. We conducted this performance audit from January 2011 through September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contact named above, Rebecca Gambler, Assistant Director, and Ben Atwater, Analyst-in-Charge, managed this assignment. Jean Orland and Janay Sam made significant contributions to this work. Michele Fejfar assisted with design and methodology, and Tracey King provided legal support. Labony Chakraborty, Jessica Orr, and Robert Robinson assisted with report preparation.

Defense Transportation: Additional Information Is Needed for DOD's Mobility Capabilities and Requirements Study 2016 to Fully Address All of Its Study Objectives. GAO-11-82R. Washington, D.C.: December 8, 2010.
Department of Homeland Security: Actions Taken Toward Management Integration, but a Comprehensive Strategy Is Still Needed. GAO-10-131. Washington, D.C.: November 20, 2009.
Transportation Security: Comprehensive Risk Assessments and Stronger Internal Controls Needed to Help Inform TSA Resource Allocation. GAO-09-492. Washington, D.C.: March 27, 2009.
Quadrennial Defense Review: Future Reviews Could Benefit from Improved Department of Defense Analyses and Changes to Legislative Requirements. GAO-07-709. Washington, D.C.: September 14, 2007.
Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005.
Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005.
Results-Oriented Government: Improvements to DHS's Planning Process Would Enhance Usefulness and Accountability. GAO-05-300. Washington, D.C.: March 31, 2005.
Combating Terrorism: Evaluation of Selected Characteristics in National Strategies Related to Terrorism. GAO-04-408T. Washington, D.C.: February 3, 2004.
Tax Administration: IRS Needs to Further Refine Its Tax Filing Season Performance Measures. GAO-03-143. Washington, D.C.: November 22, 2002.
Homeland Security: Proposal for Cabinet Agency Has Merit, But Implementation Will Be Pivotal to Success. GAO-02-886T. Washington, D.C.: June 25, 2002.
Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999.
Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1996.
The United States continues to face a range of evolving threats, such as the 2010 attempted attack on the nation's air cargo system, that underscore why homeland security planning efforts are crucial to the security of the nation. The Implementing Recommendations of the 9/11 Commission Act of 2007 required the Department of Homeland Security (DHS) to provide a comprehensive examination of the U.S. homeland security strategy every 4 years. In response, DHS issued its first Quadrennial Homeland Security Review (QHSR) report in February 2010 and a Bottom-Up Review (BUR) report in July 2010, to identify initiatives to implement the QHSR. As requested, this report addresses the extent to which DHS (1) consulted with stakeholders in developing the QHSR, (2) conducted a national risk assessment, and (3) developed priorities, plans, monitoring mechanisms, and performance measures for implementing the QHSR and BUR initiatives. GAO analyzed relevant statutes and DHS documents on the QHSR and BUR processes and, in response to a request for comments on the processes, received comments from 63 of the 85 federal and nonfederal stakeholders it contacted. Their responses are not generalizable, but provided perspectives on the processes. DHS solicited input from various stakeholder groups in conducting the first QHSR, but DHS officials, stakeholders GAO contacted, and other reviewers of the QHSR noted concerns with time frames provided for stakeholder consultations and outreach to nonfederal stakeholders. DHS consulted with stakeholders--federal agencies; department and component officials; state, local, and tribal governments; the private sector; academics; and policy experts-- through various mechanisms, such as the solicitation of papers to help frame the QHSR and a web-based discussion forum. DHS and these stakeholders identified benefits from these consultations, such as DHS receiving varied perspectives. 
However, stakeholders also identified challenges in the consultation process. Sixteen of 63 stakeholders who provided comments to GAO noted concerns about the time frames for providing input into the QHSR or BUR. Nine DHS stakeholders, for example, responded that the limited time available for development of the QHSR did not allow DHS to engage with stakeholders as deeply as it otherwise could have. Further, 9 other stakeholders commented that DHS consultations with nonfederal stakeholders, such as state, local, and private sector entities, could be enhanced by including more of these stakeholders in QHSR consultations. In addition, reports on the QHSR by the National Academy of Public Administration, which administered DHS's web-based discussion forum, and a DHS advisory committee composed of nonfederal representatives noted that DHS could provide more time and strengthen nonfederal outreach during stakeholder consultations. By providing more time for obtaining feedback and examining mechanisms to obtain nonfederal stakeholders' input, DHS could strengthen its management of stakeholder consultations and be better positioned to review and incorporate, as appropriate, stakeholders' input during future reviews. DHS identified threats confronting homeland security in the 2010 QHSR report, such as high-consequence weapons of mass destruction and illicit trafficking, but did not conduct a national risk assessment for the QHSR. DHS officials stated that at the time DHS conducted the QHSR, DHS did not have a well-developed methodology or the analytical resources to complete a national risk assessment that would include likelihood and consequence assessments--key elements of a national risk assessment. To develop an approach to national risk assessments, DHS created a study group as part of the QHSR process that developed a national risk assessment methodology. 
DHS officials plan to implement a national risk assessment in advance of the next QHSR, which DHS anticipates conducting in fiscal year 2013. DHS developed priorities, plans, monitoring mechanisms, and performance measures, but did not consider risk information in its prioritization efforts. DHS considered various factors in identifying high-priority BUR initiatives for implementation in fiscal year 2012 but did not include risk information as one of these factors, as called for in GAO's prior work and DHS's risk management guidance, because of differences among the initiatives that made it difficult to compare risks across them, among other things. Consideration of risk information during future implementation efforts could help strengthen DHS's prioritization of mechanisms for implementing the QHSR, including assisting in determinations of which initiatives should be implemented in the short or longer term. GAO recommends that for future reviews, DHS provide the time needed for stakeholder consultations, explore options for consulting with nonfederal stakeholders, and examine how risk information could be considered in prioritizing QHSR initiatives. DHS concurred with GAO's recommendations.
As noted earlier, before a rule can become effective, it must be filed in accordance with the statute. GAO conducted a review to determine whether all final rules covered by CRA and published in the Federal Register were filed with the Congress and GAO. We performed this review both to verify the accuracy of our database and to ascertain the degree of agency compliance with CRA. We were concerned that regulated entities may have been led to believe that rules published in the Federal Register were effective when, in fact, they were not unless filed in accordance with CRA. Our review covered the 10-month period from October 1, 1996, to July 31, 1997. In November 1997, we submitted to OIRA a computer listing of the rules that we found published in the Federal Register but not filed with our Office. This initial list included 498 rules from 50 agencies. OIRA distributed this list to the affected agencies and departments and instructed them to contact GAO if they had any questions regarding the list. Beginning in mid-February, because 321 rules remained unfiled, we followed up with each agency that still had unaccounted-for rules. Our Office has experienced varying degrees of response from the agencies. Several agencies, notably the Environmental Protection Agency and the Department of Transportation, took immediate and extensive corrective action to submit rules that they had failed to submit and to establish fail-safe procedures for future rule promulgation. Other agencies responded by submitting some or all of the rules that they had previously failed to file. Several agencies are still working with us to assure 100 percent compliance with CRA. Some told us they were unaware of CRA or of the CRA filing requirement. 
Overall, our review disclosed the following:
- 279 rules should have been filed with us; 264 of these have subsequently been filed;
- 182 rules were found not to be covered by CRA as rules of particular applicability or agency management and thus were not required to be filed;
- 37 rules had been submitted timely, and our database was corrected; and
- 15 rules from six agencies have thus far not been filed.
We do not know if OIRA ever followed up with the agencies to ensure compliance with the filing requirement; we do know that OIRA never contacted GAO to determine if all rules were submitted as required. As a result of GAO's compliance audit, however, 264 rules have now been filed with GAO and the Congress and are thus now effective under CRA. In our view, OIRA should have played a more proactive role in ensuring that agencies were both aware of the CRA filing requirements and were complying with them. One area of consistent difficulty in implementing CRA has been the failure of some agencies to delay the effective date of major rules for 60 days as required by section 801(a)(3)(A) of the act. Eight major rules did not provide for the required 60-day delay, including the Immigration and Naturalization Service's major rule regarding the expedited removal of aliens. This appears to be a continuing problem, since one of the eight rules was issued in January 1998. We find that agencies are not budgeting enough time into their regulatory timetables to allow for the delay and are misinterpreting the "good cause" exception to the 60-day delay period found in section 808(2). Section 808(2) states that, notwithstanding section 801, "any rule which an agency for good cause finds (and incorporates the finding and a brief statement of reasons therefor in the rule issued) that notice and public procedure thereon are impracticable, unnecessary, or contrary to the public interest" shall take effect at such time as the federal agency promulgating the rule determines. 
This language mirrors the exception in the Administrative Procedure Act (APA) to the requirement for notice and comment in rulemaking. 5 U.S.C. § 553(b)(3)(B). In our opinion, the “good cause” exception is only available if a notice of proposed rulemaking was not published and public comments were not received. Many agencies, following a notice of proposed rulemaking, have stated in the preamble to the final major rule that “good cause” existed for not providing the 60-day delay. Examples of reasons cited for the “good cause” exception include (1) that Congress was not in session and thus could not act on the rule, (2) that a delay would result in a loss of savings that the rule would produce, or (3) that there was a statutorily mandated effective date. The former administrator of OIRA disagreed with our interpretation of the “good cause” exception. She believed that our interpretation of the “good cause” exception would result in less public participation in rulemaking because agencies would forgo issuing a notice of proposed rulemaking and receipt of public comments to be able to invoke the CRA “good cause” exception. OIRA contends that the proper interpretation of “good cause” should be the standard employed for invoking section 553(d)(3) of the APA, “as otherwise provided by the agency for good cause found and published with the rule,” for avoiding the 30-day delay in a rule’s effective date required under the APA. Since CRA’s section 808(2) mirrors the language in section 553(b)(B), not section 553(d)(3), it is clear that the drafters intended the “good cause” exception to be invoked only when there has not been a notice of proposed rulemaking and comments received. One early question about implementation of CRA was whether Executive agencies or OIRA would attempt to avoid designating rules as major and thereby avoid GAO’s review and the 60-day delay in the effective date. 
While we are unaware of any rule that OIRA misclassified to avoid the major rule designation, the failure of agencies to identify some issuances as “rules” at all has meant that some major rules have not been identified. CRA contains a broad definition of “rule,” including more than the usual “notice and comment” rulemakings under the Administrative Procedure Act which are published in the Federal Register. “Rule” means the whole or part of an agency statement of general applicability and future effect designed to implement, interpret, or prescribe law or policy. “All too often, agencies have attempted to circumvent the notice and comment requirements of the Administrative Procedure Act by trying to give legal effect to general policy statements, guidelines, and agency policy and procedure manuals. Although agency interpretative rules, general statements of policy, guideline documents, and agency policy and procedure manuals may not be subject to the notice and comment provisions of section 553(c) of title 5, United States Code, these types of documents are covered under the congressional review provisions of the new chapter 8 of title 5.” On occasion, our Office has been asked whether certain agency action, issuance, or policy constitutes a “rule” under CRA such that it would not take effect unless submitted to our Office and the Congress in accordance with CRA. For example, in response to a request from the Chairman of the Subcommittee on Forests and Public Land Management, Senate Committee on Energy and Natural Resources, we found that a memorandum issued by the Secretary of Agriculture in connection with the Emergency Salvage Timber Sale Program constituted a “rule” under CRA and should have been submitted to the Houses of Congress and GAO before it could become effective. 
Likewise, we found that the Tongass National Forest Land and Resource Management Plan issued by the United States Forest Service was a “rule” under CRA and should have been submitted for congressional review. OIRA stated that, if the plan was a rule, it would be a major rule. The Forest Service has in excess of 100 such plans, promulgated or revised, that are not treated as rules under CRA. Many of these may actually be major rules that should be subject to CRA filing and, if major rules, subject to the 60-day delay for congressional review. In testimony before the Senate Committee on Energy and Natural Resources and the House Committee on Resources regarding the Tongass Plan, the Administrator of OIRA stated that, as was the practice under the APA, each agency made its own determination of what constituted a rule under CRA and, by implication, OIRA was not involved in these determinations. We believe that for CRA to achieve what the Congress intended, OIRA must assume a more active role in guiding or overseeing these types of agency decisions. Other than an initial memorandum following the enactment of CRA, we are unaware of any further OIRA guidance. Because each agency or commission issues many manuals, documents, and directives that could be considered “rules,” and these items are not collected in a single document or repository such as the Federal Register, as informal rulemakings are, it is difficult for our Office to ascertain whether agencies are fully complying with the intent of CRA. Having another set of eyes reviewing agency actions, especially one which has desk officers who work on a daily basis with certain agencies, would be most helpful. We have attempted to work with Executive agencies to get more substantive information about the rules and to get such information supplied in a manner that would enable quick assimilation into our database. 
An expansion of our database could make it more useful not only to GAO for its use in supporting congressional oversight work, but directly to the Congress and to the public. Attached to this testimony is a copy of a questionnaire designed to obtain basic information about each rule covered by CRA. This questionnaire asks the agencies to report on such items as (1) whether the agency provided an opportunity for public participation, (2) whether the agency prepared a cost-benefit analysis or a risk assessment, (3) whether the rule was reviewed under Executive orders for federalism or takings implications, and (4) whether the rule was economically significant. Such a questionnaire would be prepared in a manner that facilitates incorporation into our database by electronic filing or by scanning. In developing and attempting to implement the use of the questionnaire, we consulted with Executive branch officials to ensure that the requested information would not be unnecessarily burdensome. We circulated the questionnaire for comment to 20 agency officials with substantial involvement in the regulatory process, including officials from OIRA. The Administrator of OIRA submitted a response in her capacity as Chair of the Regulatory Working Group, consolidating comments from all the agencies represented in that group. It is the position of the group that the completion of this questionnaire for each of the 4,000 to 5,000 rules filed each year is too burdensome for the agencies concerned. The group points out that the majority of rules submitted each year are routine or administrative or are very narrowly focused regional, site-specific, or highly technical rules. We continue to believe that it would further the purpose of CRA for a database of all rules submitted to GAO to be available for review by Members of Congress and the public and to contain as much information as possible concerning the content and issuance of the rules. 
We believe that further talks with the Executive branch, led by OIRA, can be productive and that there may be alternative approaches, such as submitting one questionnaire for repetitive or routine rules. If a routine rule does not fit the information on the submitted questionnaire, a new questionnaire could be submitted for only that rule. For example, the Department of Transportation could submit one questionnaire covering the numerous airworthiness directives it issues yearly. We note that almost all agencies have devised their own forms for the submission of rules, some of which are as long as, or almost as extensive as, the form we recommend. Additionally, some agencies prepare rather comprehensive narrative reports on nonmajor rules. We are unable to easily capture data contained in such narrative reports with the resources we have staffing this function now. The reports are systematically filed and the information contained in them essentially is lost. Our staff could, however, incorporate an electronic submission or scan a standardized report into our database and enable the data contained therein to be used in a meaningful manner. CRA gives the Congress an important tool to use in monitoring the regulatory process, and we believe that the effectiveness of that tool can be enhanced. Executive Order 12866 requires that OIRA, among other things, provide meaningful guidance and oversight so that each agency’s regulatory actions are consistent with applicable law. After almost 2 years’ experience in carrying out our responsibilities under the act, we can suggest four areas in which OIRA should exercise more leadership within the Executive branch regulatory community, consistent with the intent of the Executive Order, to enhance CRA’s effectiveness and its value to the Congress and the public. 
We believe that OIRA should: require standardized reporting in a GAO-prescribed format that can readily be incorporated into GAO’s database; establish a system to monitor compliance with the filing requirement on an ongoing basis; provide clarification on the “good cause” exception to the 60-day delay provision and oversee agency compliance during its Executive Order 12866 review; and provide clarifying guidance as to what is a rule that is subject to CRA and oversee the process of identifying such rules. Thank you, Mr. Chairman. This concludes my prepared remarks. I would be happy to answer any questions you may have.
GAO discussed its experience in fulfilling its responsibilities under the Congressional Review Act (CRA). GAO noted that: (1) its primary role under the CRA is to provide Congress with a report on each major rule concerning GAO's assessment of the promulgating federal agency's compliance with the procedural steps required by various acts and Executive orders governing the regulatory process; (2) these include preparation of a cost-benefit analysis, when required, and compliance with the Regulatory Flexibility Act, the Unfunded Mandates Reform Act of 1995, the Administrative Procedure Act, the Paperwork Reduction Act, and Executive Order 12866; (3) GAO's report must be sent to the congressional committees of jurisdiction within 15 calendar days; (4) although the law is silent as to GAO's role relating to the nonmajor rules, GAO believes that basic information about the rules should be collected in a manner that can be of use to Congress and the public; (5) to do this, GAO has established a database that gathers basic information about the 15-20 rules GAO receives on the average each day; (6) GAO's database captures the title, agency, the Regulation Identification Number, the type of rule, the proposed effective date, the date published in the Federal Register, the congressional review trigger date, and any joint resolutions of disapproval that may be enacted; (7) GAO has recently made this database available, with limited research capabilities, on the Internet; (8) GAO conducted a review to determine whether all final rules covered by CRA and published in the Federal Register were filed with Congress and GAO; (9) as a result of GAO's compliance audit, 264 rules have been filed with GAO and Congress and are now effective under CRA; (10) one area of consistent difficulty in implementing CRA had been the failure of some agencies to delay the effective date of major rules for 60 days as required by the act; (11) one early question about implementation of CRA was 
whether executive agencies or the Office of Information and Regulatory Affairs (OIRA) would attempt to avoid designating rules as major and thereby avoid GAO's review and the 60-day delay in the effective date; and (12) while GAO is unaware of any rule that OIRA misclassified to avoid the major rule designation, the failure of agencies to identify some issuances as rules at all has meant that some major rules have not been identified.
As the United States has become more dependent on foreign sources for crude oil, our energy security has become increasingly intertwined with that of other countries. Crude oil is a global commodity and, as such, any world event that increases instability in crude oil prices reduces energy security for all oil-buying countries in similar ways. Numerous empirical studies have shown a correlation between oil price shocks and economic downturns. When crude oil prices rise, this pushes up prices of petroleum products. Consumers spend more of their income on energy and less on other goods, which can cause an economic slowdown. In addition, since much of the oil is imported, there is a greater flow of funds overseas rather than increased domestic spending. World oil prices have more than doubled since 2003 and are currently higher, when adjusted for inflation, than at any time since the early 1980s. World demand for oil is projected to increase by about 43 percent over the next 25 years—from about 82 million barrels per day in 2004 to about 118 million barrels per day in 2030—with much of the increased demand coming from China and other countries. Some experts believe oil prices will remain high for the foreseeable future as suppliers struggle to increase production to keep up with demand. In this tight demand and supply environment, even small supply disruptions can create large increases in prices. In this way, our energy security is tied to events in all oil-producing countries. Oil was first produced commercially in Venezuela in the early 1900s, and by the late 1920s Venezuela was the world’s second largest producer, after the United States. Today, Venezuela’s 78 billion barrels of proven reserves—crude oil in the ground that geological and engineering data have demonstrated with reasonable certainty is able to be produced using existing technology—are the seventh or eighth largest in the world. 
Outside of the Persian Gulf, only Canada’s proven reserves are considered greater than Venezuela’s. In 2005, Venezuela was the world’s eighth largest exporter of crude oil. Most of Venezuela’s crude oil that is not consumed domestically is exported to the United States because of its close proximity; additionally, Venezuela owns significant refining assets in the United States and the U.S. Virgin Islands that can refine its heavy sour oil. In the 1980s and 1990s, PDVSA bought CITGO, Inc. and acquired interests in several other U.S. refineries that had the ability to refine such crude oil or could be reconfigured to do so. Today, the refining capacity of PDVSA’s share of the nine U.S. refineries in which it has an interest is about 1.3 million barrels per day. For example, CITGO’s five wholly-owned refineries have a refining capacity of about 750,000 barrels per day and market their refined petroleum products in the United States through about 14,000 independently owned service stations using the CITGO name. In addition, PDVSA partners directly, or through CITGO, with ExxonMobil, Lyondell, and ConocoPhillips, and with Amerada Hess in the U.S. Virgin Islands. These nine refineries buy most of the crude oil and refined petroleum products exported by Venezuela. While the United States is unique in its capacity to refine large volumes of the heavy crude oil that constitutes a majority of Venezuela’s oil exports, China and other countries, such as Brazil, have plans to build refineries that can process heavy crude oil, which, if built, may create other attractive markets for Venezuela’s oil. 
In addition, the Venezuelan government has launched several regional initiatives to increase its export base, including (1) PetroCaribe, through which Venezuela offers oil and some refined petroleum products to 14 Caribbean countries with favorable financing, and (2) PetroAndina and PetroSur, which offer oil under similar terms to, respectively, the Andean countries of Colombia, Ecuador, and Bolivia and the South American countries of Brazil, Uruguay, and Argentina. The oil sector in Venezuela consists of a network of oil fields and wells that produce crude oil, refineries to process the crude oil, and an infrastructure to transport the crude and refined products. The bulk of Venezuela’s production comes from the Lake Maracaibo area in the country’s western region and from the Faja area in the Orinoco Belt in the country’s eastern region. The crude oil is processed by PDVSA’s six refineries in Venezuela or is exported to the United States or other countries. Crude oil is shipped by way of 39 oil terminals from Venezuela’s major oil ports, located in the western and eastern regions of the country. Foreign oil companies began producing crude oil in Venezuela in the early 1900s. In 1976, Venezuela nationalized its hydrocarbon industry, bringing oil—which is the main source of the country’s wealth—under the control of the national oil company. However, beginning in 1992, the Venezuelan government reopened its petroleum industry to foreign and private Venezuelan oil companies in what was known as the “Apertura.” Between 1992 and 1997, Venezuela signed 32 operating service agreements to allow 22 private Venezuelan, U.S., and other foreign companies to produce oil in fields that were considered, at the time, economically marginal or high risk. 
The purpose of these 32 operating service agreements was to allow foreign companies to assist PDVSA in producing oil, and the contracts were structured so foreign-company operators did not have any rights over the volumes, reserves, or prices of crude oil but were reimbursed for their costs plus a service fee for production. The Venezuelan government granted the foreign company operators an indefinite “royalty holiday” whereby the companies paid no more than 1 percent royalty on the extracted crude, instead of the maximum of 16-2/3 percent at the time. Also during this period, PDVSA entered into four joint ventures with foreign companies, including ExxonMobil, ConocoPhillips, and ChevronTexaco from the United States, to produce crude oil in the Faja. These joint ventures, whose majority shares were owned by the foreign oil companies, were considered high risk at the time, in part due to the challenges of producing “extra-heavy” sour oil from the Faja, which is among the lowest quality oil commercially produced anywhere in the world. Venezuela’s extra-heavy Faja oil has higher density (is “heavier”) and has a higher sulfur content than most commercially produced crude oil. Commercial production of extra-heavy oil is relatively expensive— pumping it from the ground requires the use of techniques to improve its flow characteristics and readying it for market requires “upgrading” to prepare it for final refining. During upgrading, the extra-heavy crude oil is processed to make it lighter and remove much of its sulfur content. In 1997, foreign companies began to produce extra-heavy sour crude oil in Venezuela’s Faja region, and, by 2005, the four joint ventures were producing about 600,000 barrels per day of Faja crude. The projects in the Faja also paid only 1 percent royalty instead of 16-2/3 percent. Extra-heavy crude from the Faja region is also used to produce Orimulsion, a boiler fuel that is a mixture of bitumen and water. 
Orimulsion is marketed internationally, especially to China. In January 2002, a new law governing Venezuela’s hydrocarbon industry went into effect. The new law increased maximum royalties from 16-2/3 percent to 30 percent, and increased the percentage of ownership by PDVSA in all operating arrangements with foreign and domestic companies to at least 51 percent. In 2005, the Venezuelan government took steps to make foreign and domestic companies migrate from the terms of the existing 32 operating service agreements to the terms of the new law. Essentially, beginning in 2006, the companies that had been paying no more than 1 percent in royalty fees under the operating service agreements had to pay as much as 30 percent. Also, instead of paying 34 percent in income taxes as service providers, the foreign companies had to pay 50 percent as part owners in the joint ventures. If the foreign companies did not comply with the new rules, the Venezuelan government took control of the operations. While the new rules had not been applied to the four joint ventures in the Faja, in March 2005 the Faja projects began paying 16-2/3 percent royalties. Also, in May 2006, the Venezuelan government established a new extraction tax in addition to the 50 percent income tax. According to a Venezuelan spokesperson, the extraction tax is 33.33 percent applied to well production, but royalty fees are deducted from this tax. The Venezuelan tax authority also issued bills for millions of dollars in back taxes to foreign companies conducting production activities under the 32 operating service agreements after the effective date of the law. The oil industry is capital-intensive and heavily dependent on continuous investment to maintain existing wells, establish new wells for crude oil production, and develop and maintain the infrastructure supporting the production network. 
According to the EIA, PDVSA is Venezuela’s largest employer and accounts for about one-third of the country’s GDP, about 50 percent of the government’s revenue, and 80 percent of Venezuela’s export earnings. PDVSA stated in 2005 that it plans to invest $26 billion to expand its oil production to 5.8 million barrels per day by 2012. After Hugo Chavez was elected president of Venezuela in 1998, responsibility for the oil industry changed. Managerial authority for the petroleum industry was shifted from PDVSA to the Venezuelan Ministry of Energy and Petroleum; the way Venezuela does business with foreign companies also changed, as discussed previously. Domestic resistance to the Chavez administration and the changes in hydrocarbon sector oversight resulted in a 63-day strike by nearly half of PDVSA workers in the winter of 2002–2003. Oil production almost completely stopped, as oil wells stopped pumping, refineries closed, oil tankers stopped running, and storage facilities reached full capacity. The strike caused a temporary decrease in world oil supplies of about 2.3 million barrels per day, an amount equivalent to about 3.0 percent of total world daily oil supply. Venezuela is a founding member of OPEC, which controls about 40 percent of the world’s estimated 84 million barrels per day of production. Venezuela is the third largest producer within OPEC, according to EIA data. OPEC can wield great power in the international oil market, particularly by setting production quotas for its member countries to raise and lower the supply of oil, thereby influencing world oil prices. During the mid-1990s, Venezuela was suspected of weakening oil prices by producing above the country’s quota. Since Hugo Chavez became President of Venezuela, the Venezuelan government has favored stricter adherence to OPEC quotas, and currently Venezuela is considered a price hawk in the ranks of OPEC, generally favoring production restraint to keep oil prices relatively high. 
Energy security is a national priority for the United States, and the United States has long had programs and activities designed to foster energy security. The United States government also strives to enhance cooperation with energy consuming and producing governments to mitigate the impact of supply disruptions and to support U.S. and world economic growth. The United States is a member of the International Energy Agency, an organization composed of Organization for Economic Cooperation and Development countries that was established to cope with oil supply disruptions and coordinate an international response in case of a disruption to the global oil supply market. International Energy Agency member countries hold about 4.1 billion barrels of oil stocks, and for a limited period can release an amount equivalent to 10 percent of global demand each day in case of a disruption. Venezuelan oil production has fallen since 2001, largely as a result of actions by the Venezuelan government. Since that time the production of Venezuelan crude oil decreased in oil fields operated by PDVSA and increased in fields operated by foreign companies, but, as of 2005, increased production by foreign companies was not enough to bring total Venezuelan oil production back to the prestrike level. Despite production declines, exports of crude oil and refined petroleum products to the United States since shortly after the strike have remained close to prestrike levels of about 1.5 million barrels per day. The Venezuelan government announced plans in 2005 to expand its oil production and exports significantly by 2012, but most experts with whom we spoke doubted Venezuela’s ability to implement the expansion plan in the near term. Data from EIA, the International Energy Agency, OPEC, and the Venezuelan government all indicate that Venezuelan crude oil production decreased between 2001 and 2005. 
For example, EIA data show that production decreased from 3.1 million barrels per day to 2.6 million barrels per day, reflecting a decrease of about .5 million barrels per day, or 16 percent. OPEC, International Energy Agency, and Venezuelan government data all indicate varying but higher levels of production in 2005. While Venezuelan production figures should be the most accurate because the Venezuelan government has access to all the production data, many oil industry officials and experts told us that Venezuelan government figures have been overstated. Figure 1 shows production levels for 2001 through 2005 from four sources and illustrates the drop in production as a result of the strike and the recovery following the strike. While there are differences of opinion and uncertainty about the accuracy of available production data, other data also support a significant decline in production. For example, international financial data show that foreign investment in Venezuela declined between 2001 and 2004. Specifically, net foreign direct investment in Venezuela was about $3.5 billion in 2001, declined to almost zero in 2002, and recovered to about $1.9 billion in 2004, the last year for which investment data are available. Because we were unable to obtain reliable, independent data on specific investment in Venezuela’s oil and gas sector, we analyzed total foreign investment in Venezuela as a proxy for the condition of the oil sector. Our analysis indicates a high correlation between Venezuelan oil production and net foreign investments in Venezuela. In addition, experts told us that there is a high correlation between the number of active oil drilling rigs and oil production. The number of active rigs fell sharply during and after the strike and, as of 2005, had not returned to its 2001 level. 
Specifically, there was an average of 66 active drilling rigs in Venezuela in 2001; the number of rigs fell to as low as 12 during the height of the strike in January 2003; and the average increased to 60 in 2005. This provides further evidence that Venezuela’s oil production has decreased. The Venezuelan government’s firing of thousands of PDVSA employees following the strike contributed to the decline in production. The government dismissed about 40 percent of PDVSA’s approximately 40,000 employees, including many management and technical staff. Experts told us that the loss of managerial and technical expertise caused a rapid decline in the company’s oil production from existing fields. In fact, some said that the loss of expertise was so critical that after the strike, PDVSA was unable to issue invoices for contractor services. Venezuelan officials told us that strikers did deliberate damage to the company and that this sabotage accounts for some of their difficulties since the strike. PDVSA employees with whom we spoke, some of whom were fired and others of whom resigned, disputed the claims of sabotage and said that strikers had originally planned only a two- or three-day strike, but that the government shut them out before they could return to work. Venezuelan officials acknowledged that the loss of expertise initially hampered operations and said that they have been replacing and training lost workers as quickly as possible. However, many industry experts told us that a black list of former PDVSA managerial and technical staff that the Venezuelan government will not rehire is limiting Venezuela’s ability to acquire the necessary staff to meet its production goals. In addition, officials from foreign oil companies with operations in Venezuela told us that since the strike, PDVSA has become highly politicized and that PDVSA officials are often slow to make key decisions, which has complicated foreign companies’ decisions to invest in the Venezuelan oil sector. 
Many oil industry officials told us that PDVSA’s lack of managerial and technical expertise remains one of the biggest challenges in continuing operations in Venezuela with PDVSA as a partner. In addition, experts told us that Venezuela had underinvested in oil field maintenance since the early 1990s, and that this had contributed to PDVSA’s declining production. Data from EIA, the International Energy Agency, OPEC, and the Venezuelan government indicate that, from 2001 through 2005, Venezuelan crude oil production controlled by PDVSA decreased, while production controlled by foreign companies increased. For example, using EIA data as the base for total Venezuelan crude oil production, of 3.1 million barrels of crude oil produced per day in 2001, PDVSA produced about 2.4 million barrels per day (or 77 percent), and foreign companies produced about .7 million barrels per day (or 23 percent). By 2005, these data indicated that of 2.6 million barrels produced per day, PDVSA produced about 1.5 million barrels per day (or about 58 percent), and foreign companies produced about 1.1 million barrels per day (or 42 percent). International Energy Agency, OPEC, and Venezuelan government data show similar trends, but the relative proportion of PDVSA’s production differs because each of these data sources reflects a different total volume of Venezuelan crude oil production. All of the data sources indicate that increases in production by foreign companies were not enough to totally offset decreases in PDVSA’s production, resulting in a net crude oil production loss. Figure 2 shows the increase in foreign companies’ production and decrease in PDVSA’s production for 2001–2005 using EIA’s figures as the base for total production. Since shortly after the Venezuelan strike ended, Venezuela’s exports of crude oil and refined petroleum products to the United States have remained close to the prestrike levels. 
EIA data show that Venezuelan exports of crude oil and refined petroleum products to the United States (excluding the Virgin Islands) have fluctuated month-to-month, but prior to the strike had averaged about 1.5 million barrels per day. These exports reached a low of about .4 million barrels per day during the strike, but by April 2003 had returned to approximately the average prestrike level. Specifically, EIA data show that such Venezuelan exports averaged between 1.5 million and 1.6 million barrels per day between April 2003 and August 2005, as shown in figure 3. The EIA data also show that Venezuela exports most of its crude oil to the United States. For example, the data show that exports to the United States accounted for about 66 percent of Venezuela’s total exports of crude oil in 2004. Most of Venezuela’s exported crude oil goes to refineries on the U.S. Gulf Coast that are owned wholly or partially by the Venezuelan government. Venezuelan government data show that, like exports to the United States, Venezuelan domestic consumption has remained close to the prestrike level—about .5 million barrels per day. Given that Venezuelan crude oil production has decreased and Venezuelan domestic consumption and exports to the United States have remained relatively constant since shortly after the strike, most of the loss of Venezuelan crude oil must have been absorbed by decreased Venezuelan exports to countries other than the United States. Some oil company officials also told us that in recent years there have been smaller amounts of Venezuelan oil available for purchase on world spot markets, which would also indicate that less Venezuelan oil is going to non-U.S. markets. Venezuelan officials gave us data that showed exports to non-U.S. markets were greater than EIA’s numbers, but we were unable to verify the Venezuelan data. 
The Venezuelan government announced plans in 2005 to expand its oil production to 5.8 million barrels per day by 2012, which is more than double the figure reported by EIA for 2005. Some industry experts told us that the expansion plan is technically feasible and that Venezuela’s oil revenue in recent years has been sufficient to fund the plan. However, many oil industry officials and experts expressed doubt about the government’s ability to implement the expansion plan in the near term for several reasons. According to Venezuelan officials, as of late 2005, no agreements had been signed or investments made to start implementing the major oil production expansions detailed in the plan; experts told us that, without agreements, the plan will face significant delays, at best. The absence of such deals increases the likelihood that Venezuelan oil production will continue to fall because, given that PDVSA’s own production is in decline, Venezuela needs willing foreign oil company partners to maintain its current level of oil production. PDVSA has not been able to maintain its own level of oil production in recent years. U.S. and international oil industry officials and experts, as well as Venezuelan government officials, told us that PDVSA faces significant challenges in overcoming the 20 to 25 percent per year rate of production decline in its mature oil fields. Venezuelan officials and other experts told us that Venezuela faces a challenge in overcoming the normal decline in productivity of its older fields, especially in the Maracaibo area where oil production dates back to the 1920s. Future foreign investment is uncertain given the Venezuelan government’s recent decision to unilaterally change its business dealings with foreign companies. Beginning in 2005, the Venezuelan administration took steps to make private Venezuelan and foreign companies producing crude oil under the 32 operating service agreements renegotiate those agreements. 
Essentially, the new agreements increase the maximum royalty from 16-2/3 percent to 30 percent, increase income taxes from 34 percent to 50 percent, and give PDVSA at least a 51 percent share of the operations covered by the agreement. Oil industry officials and experts have generally reacted negatively to the changes in the agreements. Most company officials we contacted told us that Venezuela’s move to unilaterally impose new agreements increased their risk and eroded the investment climate in Venezuela, likely leading to future production declines. Many oil industry officials and experts told us that the changes in the foreign company participation structure, such as mandating a majority share of the operation for PDVSA, pose investment risks and uncertainty for foreign companies because the Venezuelan government has ultimate control in decisionmaking. When France’s Total and Italy’s Eni oil companies failed to sign new agreements, the Venezuelan government seized control of their operations in April 2006; five other fields were turned over to PDVSA after negotiations, according to the Venezuelan spokesperson. Also, ExxonMobil and Norway’s Statoil chose to sell their minority stakes in smaller fields rather than accept Venezuela’s required changes. Furthermore, in May 2006, the Venezuelan Congress approved a new oil extraction tax. According to the Venezuelan spokesperson, the extraction tax is 33.33 percent applied to well production, with royalty fees deducted from this tax. Venezuela’s decision to spend a significant part of its oil revenues on social programs such as education and health care, instead of reinvesting it in the oil industry, could slow further development of the country’s oil sector. Venezuela’s new hydrocarbon law imposes significant social commitments on PDVSA. Venezuelan government officials told us that they directly spent about $3.7 billion of oil revenues on social programs in 2004 and about $5 billion on social programs in 2005. 
This spending was in addition to money companies paid to the Venezuelan government as royalties and income taxes, and therefore reduced the amount of funds available for investing in oil production. Future production could be impaired by the Venezuelan government’s preference to use national oil companies from developing countries (such as China) and other geopolitically strategic countries (such as Brazil) as partners to explore and develop new fields in Venezuela, instead of relying on experienced international oil companies. Several oil industry officials and experts told us that national oil companies generally do not have the expertise of the international oil companies to develop heavy oil fields. The potential impacts of a disruption of production and exports of Venezuelan crude oil and petroleum products on world oil prices and on the U.S. economy would depend on the characteristics of the disruption. The greatest impacts would occur if all or most Venezuelan oil were suddenly removed from the world market due to a Venezuelan oil industry shutdown. A Venezuelan oil embargo against the United States would have smaller impacts that would primarily affect the United States. Similarly, if Venezuela shut down its U.S. refineries, the impacts would be felt primarily in the United States. Venezuela would suffer severe economic losses from all three types of disruption, especially a shutdown of its oil production. Given the current tight global supply and demand conditions, a sudden loss of all or most Venezuelan oil from the world market, for example due to a strike, would, all else remaining equal, result in a marked spike in world oil prices and a decrease in the growth rate of the U.S. economy as measured by GDP. Because Venezuela’s economy is so dependent on its oil sector, Venezuela would likely try to restore oil production as quickly as possible following a strike or similar disruption to avoid large losses of export revenues. 
A model developed for DOE by a contractor, using a hypothetical oil disruption scenario that we developed to resemble the disruption caused by the Venezuelan strike during the winter of 2002–2003, predicted that, by the second month of a disruption, worldwide crude oil prices would temporarily increase by about $11 per barrel—from an assumed pre-disruption price of $55 per barrel to almost $66 per barrel. The increase in world crude oil prices would, in turn, drive up prices of refined petroleum products. Later, as the lost oil was replaced with oil from other sources or production resumed, the price of crude oil would return to the previous level. The model further predicted that the temporary increase in world oil prices caused by a disruption would lower the U.S. GDP by about $23 billion relative to what it would have been otherwise—about $13 trillion. A loss of this magnitude for a given year is likely to cause a small decline in the growth rate of the U.S. economy, but is unlikely to result in a recession. In this analysis, the rate of GDP growth would be about 0.18 percent less than what it would have otherwise been for the year. Our hypothetical disruption scenario lasts only a few months because Venezuela, like any other country that is heavily dependent on oil revenue, is likely to exert a great effort to end any severe disruption of crude oil production. The country’s economy in general, and government revenues in particular, depend heavily on the revenues that the country obtains from petroleum production and exports. For example, oil revenues accounted for between 45 and 50 percent of Venezuelan government revenues in recent years. A severe drop in oil revenues for more than a few months would cripple the economy, resulting in lower economic growth and lost jobs; Venezuelan authorities would consider a prolonged oil industry shutdown as a very grave threat to the government and to the country as a whole. 
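The 0.18 percent figure above is simply the predicted GDP loss expressed as a share of baseline GDP; a minimal sketch, using the two dollar figures quoted in the text, shows the calculation:

```python
# GDP impact of the modeled Venezuelan-style disruption, using the
# figures cited above: a ~$23 billion loss against a ~$13 trillion
# baseline U.S. GDP.
gdp_loss_billion = 23
baseline_gdp_billion = 13_000  # $13 trillion expressed in billions

growth_reduction_pct = gdp_loss_billion / baseline_gdp_billion * 100
# ~0.18 percentage points, matching the figure reported in the text
print(f"GDP growth reduction: {growth_reduction_pct:.2f} percent")
```

The exact quotient is about 0.177 percent, which the report rounds to 0.18.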
Indeed, PDVSA officials told us that they restored most of their lost production during the first few months after the strike. It should be noted that the model somewhat understates the impacts on the United States of a sudden and severe loss of oil from Venezuela because it treats any disruption of oil supplies as equal, regardless of the location or the characteristics of the lost oil. In other words, the model does not differentiate between heavy sour crude oil (such as that produced in Venezuela) and any other type of crude oil—for example, “Arab Medium” (which is Saudi Arabia’s medium-quality crude oil). Thus, the model does not consider the economic cost of replacing, for example, 100,000 barrels of heavy sour oil with the same amount of lighter, sweeter oil. In fact, Arab Medium may cost more than some Venezuelan crude oils because of its higher quality, and because the transportation cost of a barrel of oil from Saudi Arabia is higher than that of a barrel of oil from Venezuela. In addition, there may be an economic penalty associated with some U.S. refineries’ switching from their normal significant reliance on Venezuelan oil to replacement oil from alternative sources. For example, one U.S. oil company that refines Venezuelan crude oil ran its refinery optimization model for us to illustrate the impact of switching crude oil types on its refining costs. Its model showed that replacing a large quantity of the Venezuelan oil that it uses on a regular basis with oil from Mexico and the Middle East would cause a 7 percent drop in the capacity utilization of one of its refineries. This would reduce supplies of petroleum products, putting upward pressure on consumer prices. The DOE contractor who developed the model acknowledged that the model does not account for the effects of higher transportation costs or changes in refinery capacity utilization caused by switching from one type of crude oil to another. 
He said that higher transportation costs and switching crude oil types could result in larger impacts than the model predicts, but that the price impact of switching crude oil types is not understood well enough to be accurately modeled and is likely to be small. We also did an analysis of the impact of the same hypothetical Venezuelan disruption scenario on world oil price and on U.S. GDP using parameters developed by EIA to evaluate oil price disruptions. EIA has also done similar analyses, including (1) a slightly larger oil supply disruption and (2) an analysis of the impacts of the actual Venezuelan strike. The impacts on the price of oil are quite close in all the analyses. However, the impacts on U.S. GDP vary significantly as a result of differing assumptions about how sensitive the economy is to increases in oil prices. DOE officials told us that the impact of such a disruption on the U.S. economy would likely fall somewhere between the estimates derived in the model and our analysis. The results of the analyses and studies are shown in table 1. An EIA analysis shows (and several industry experts told us) that a Venezuelan oil embargo against the United States would have a smaller impact on oil prices than a sudden and severe drop in production. The impact of an embargo would be smaller because the Venezuelan oil would go to other destinations instead of being taken off of the world market. However, since most replacement supplies are farther away than Venezuela, U.S. oil refiners would experience higher costs and delays in getting oil supplies; such an embargo would therefore increase U.S. consumer prices for gasoline and other petroleum products in the short term. Also, as discussed previously, some U.S. refineries that are designed to handle large amounts of Venezuelan heavy sour crude oil would operate less efficiently if they had to switch to different types of crude oil. 
EIA’s March 2005 analysis estimated that a Venezuelan oil embargo against the United States would cause the price of West Texas Intermediate crude oil (a commonly used benchmark oil) to increase in the short term by $4 to $6 per barrel from the then-current price of $53 per barrel—an increase of 8 to 11 percent, as opposed to the 19 to 34 percent increase associated with a sudden and severe loss of oil. The price would rise because the embargo would cause (1) higher transportation costs resulting from longer distances to transport oil from locations farther away than Venezuela; (2) refinery inefficiencies resulting from switching crude oil types; and (3) a market psychology premium reflecting fears of further escalation. The EIA analysis did not quantify the impact of an oil embargo on U.S. prices of gasoline and other refined petroleum products. However, an increase in U.S. crude oil prices of 8 to 11 percent would raise costs of refined petroleum products to the extent that the increase would be passed on to the consumer. All else being equal, such an increase would add 11 to 15 cents to the price of a gallon of gasoline, assuming the conditions in March 2005. DOE officials told us that their analysis assumes the $4 to $6 per barrel increase would last as long as the disruption. However, adjustments would reduce this price impact over time. Refineries, for example, could reconfigure some of their processes and make other adjustments over time to improve their ability to efficiently handle replacement crude oil types. Transportation costs could also adjust over time. For example, Venezuela likely could switch from the relatively small tankers used for the short haul to the United States to very large tankers to move its oil to more distant locations, thereby helping offset Venezuela’s increased transportation costs for shipping the oil longer distances. 
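The 8 to 11 percent range above comes directly from the estimated $4 to $6 per barrel increase against the $53 per barrel March 2005 price; an illustrative check of that arithmetic:

```python
# Short-term crude price impact of a hypothetical Venezuelan embargo,
# per the EIA March 2005 estimate cited above.
base_price = 53.0                        # WTI, $/barrel, March 2005
increase_low, increase_high = 4.0, 6.0   # estimated increase, $/barrel

pct_low = increase_low / base_price * 100    # ~7.5, rounds to 8
pct_high = increase_high / base_price * 100  # ~11.3, rounds to 11
print(f"Price increase: {pct_low:.0f} to {pct_high:.0f} percent")
```

The corresponding 11-to-15-cent gasoline estimate in the text reflects how much of this crude increase EIA assumed would pass through to consumers; the sketch above checks only the percentage range.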
A Venezuelan oil embargo against the United States would also affect the Venezuelan economy, but the impact would not be as great as the impact of a sudden loss of oil. According to a U.S. company that produces oil in Venezuela, such an embargo would reduce PDVSA’s oil revenues by $3 billion to $4 billion per year due to the following factors: Refinery operations that Venezuela wholly and partly owns in the United States, which take about 70 percent of Venezuela’s oil exports to the United States, would be adversely affected by the embargo because they would have to obtain crude oil from locations farther away than Venezuela and the replacement crude oil would likely be of a different quality. Venezuela’s crude oil revenues would be adversely affected by the higher cost of transporting oil to locations farther away than the United States market. In addition, oil company officials and industry experts told us that few countries have significant refining capacity that is designed to efficiently process the heavy sour oil from Venezuela. Therefore, it would be difficult for Venezuela to find markets for all the oil it currently exports to the United States. If Venezuela shut down its wholly-owned U.S. refineries, the supply of gasoline and other refined petroleum products made from crude oil would decrease and, correspondingly, the prices of these refined petroleum products in the United States would increase. Venezuela wholly owns five refineries in the United States through its PDVSA subsidiary, CITGO, and these account for about 750,000 barrels per day of refining capacity—4 percent of total U.S. refining capacity. The impacts of shutting down CITGO refineries would continue until the closed refineries were reopened or new sources of refined petroleum products were brought on line. The impacts would obviously be most severe in the United States, although increased demand by U.S. 
oil companies to buy petroleum products from other countries could cause prices to rise in those countries as well. Venezuela would also lose the profits of these refineries for as long as they were shut down, and could face sanctions by the U.S. government—including freezing Venezuelan assets in the United States—if the closure of the refineries were deemed a threat to U.S. security. We identified no studies of the impacts of oil refinery shutdowns on the prices of refined petroleum products, but a shutdown of several large U.S. refineries as a result of hurricanes Katrina and Rita in 2005 clearly contributed to sharp increases in U.S. fuel prices. For example, Hurricane Katrina caused a shutdown of 879,000 barrels per day, or 5.2 percent of U.S. refining capacity. Figure 4 shows that following hurricanes Katrina and Rita in late August and late September 2005, gasoline prices increased by over $1 per gallon on the U.S. Gulf Coast Wholesale Market. While these price spikes are indicative of what can happen in the event of refinery shutdowns, it must be noted that there were other very important disruption factors that affected these prices—such as major pipeline shutdowns and damage—which make it difficult to isolate the impact of the refinery shutdowns. The U.S. government has programs and activities intended, in part, to ensure a reliable long-term supply of oil from Venezuela and other oil-producing countries to U.S. and world markets; these programs include bilateral technology and information exchange agreements, bilateral investment treaties, and multilateral energy initiatives. However, these programs and activities have not been pursued with regard to Venezuela in recent years. The U.S. government has options to mitigate the impacts of short-term oil disruptions to global oil supplies, such as the disruption caused by the Venezuelan strike. 
These options include diplomacy to persuade oil-producing countries to increase production and using oil in the U.S. Strategic Petroleum Reserve, with or without the release of oil from other International Energy Agency countries’ strategic reserves. However, none of the U.S. government agencies, and few of the U.S. oil companies that we contacted, have contingency plans specifically to mitigate a Venezuelan oil disruption, although DOE conducts analyses of the effects on the market of potential supply disruptions. The United States has had a bilateral technology and information exchange agreement with Venezuela since 1980, and this agreement was expanded in 1997 to include policy dialogue on topics such as energy data exchange, natural gas policy, and energy efficiency. Also, in the 1990s, the two countries entered negotiations for a bilateral investment treaty and worked together under the multilateral energy initiative to organize hemisphere-wide meetings on energy security. By 2004, however, these programs and activities had been discontinued as the result of strained relations between the two countries and diminished technical capacity in Venezuela. According to DOE, it maintains bilateral technology and information exchange agreements with Venezuela and 21 other oil-producing countries: Angola, Argentina, Australia, Azerbaijan, Brazil, Canada, China, Equatorial Guinea, Kazakhstan, India, Italy, Iraq, Mexico, Norway, Pakistan, Peru, Russia, Saudi Arabia, the United Kingdom, Ukraine, and West Africa/Nigeria. DOE officials told us that bilateral technology and information exchange agreements are generally designed to offer avenues to leverage publicly funded domestic research, accelerate scientific achievement through technical cooperation, and support U.S. economic competitiveness by providing U.S. scientists with opportunities to gain access to (and build upon) other countries’ research. 
They also said that the agreements with four countries—Venezuela, China, Canada, and Mexico—include provisions for cooperation on oil and natural gas recovery technology that DOE requires be based on joint research of mutual benefit. In the case of Venezuela, the specific purpose of the bilateral technology exchange agreement was to cooperate on oil and gas technology and, after 1997, incorporate policy dialogue on such issues as the exchange of information regarding the design and implementation of energy regulatory systems, the development and evaluation of energy resources and production, and the application of alternative energy sources. DOE headquarters and field staff told us that the technical exchanges between the United States and Venezuela under the agreement were robust. For example, meetings were held about twice annually where technical staff from both countries exchanged information. Since November 21, 2003, however, no formal meetings of the countries’ technical staff have occurred. DOE headquarters and field officials told us they were directed in 2003 by DOE headquarters to stop activities under the agreement to accommodate diplomatic decisions. In addition, DOE officials also told us that the last few technical meetings involved very little exchange of technology information. Specifically, they said that after the Venezuelan government fired a significant number of technical employees following the Venezuelan strike, DOE technical staff had difficulty identifying technical counterparts in Venezuela to maintain activities under the agreement. Venezuelan officials told us that attempts to encourage DOE to continue activities under the technology exchange agreement were unsuccessful. 
For example, Venezuela sent two letters to DOE in 2005 to arrange meetings between Venezuela’s Minister of Energy and Petroleum and the Secretary of DOE, but DOE’s response to one letter stated that the Secretary of DOE was unable to meet, and, according to the Venezuelan spokesperson, DOE did not respond to the other letter. Also, the Venezuelan spokesperson told us that in November 2003, Venezuela presented DOE with a plan to reactivate projects under the agreement but DOE demonstrated no interest. The spokesperson also said that in March 2006, DOE officials told PDVSA’s vice president of production that DOE would not resume activities under the agreement until the political relationship between Venezuela and the United States improved. DOE officials confirmed this, but said DOE also told PDVSA’s vice president of production that part of the reason activities could not be resumed was because DOE research on technology to extract extra-heavy oil and gas was not a high priority, as it had been at one time, because high energy prices removed the need to subsidize such research. According to Department of State and the Office of the U.S. Trade Representative officials, informal bilateral investment treaty discussions with Venezuela began in 1992 and formal negotiations began in October 1997. The United States has bilateral investment treaties in force with 39 countries, including many oil- and gas-producing countries such as Bolivia, Kazakhstan, Trinidad and Tobago, and the Ukraine. These treaties provide rules on investment protection, binding international arbitration of investment disputes, and repatriation of profits, and assist U.S. companies doing business in foreign countries. In our 1991 report on Venezuelan production and conditions affecting potential future U.S. investment there, we observed that most of the 22 oil companies with whom we spoke during that effort told us that a bilateral investment treaty would help increase their investment protection. 
In that report, we also noted that an official in the Office of the U.S. Trade Representative said that, in order for negotiations to be successful, Venezuela would have to meet standards set forth in the model U.S. treaty—including provisions prohibiting nationalization of property, providing for repatriation of profits, and providing for international arbitration to resolve disputes. U.S. and Venezuelan government officials said that bilateral investment treaty negotiations broke down in 1999 because of significant policy differences between the two countries. A Venezuelan spokesperson and U.S. officials identified three major differences, including the model treaty provisions relating to performance requirements, such as rules stipulating minimum content requirements and obligations to compensate investors for damage done by internal strife. In May 2001, the U.S. National Energy Policy Development Group recommended that the United States conclude bilateral investment treaty negotiations with Venezuela. Department of State officials told us that later in 2001, when they revisited the issue in response to this recommendation, they made an effort to reengage Venezuela, but the effort proved unsuccessful because of continued major differences between the two countries. Department of State officials said they decided that the probability of negotiating a treaty that contained the high standards the United States expects was very unlikely, and they pursued the treaty no further. Department of State officials told us that in bilateral investment treaty negotiations generally, it is overall policy to insist on the high standards contained in the U.S. model treaty to avoid a dilution of standards across agreements. 
Many oil company officials and experts said that a bilateral investment treaty could have helped protect oil companies’ investments in Venezuela when the Venezuelan government unilaterally required them to change their existing operating service agreements to comply with the new hydrocarbon law. For example, officials from one U.S. oil company said new agreements that companies were required to sign did not contain provisions allowing international arbitration to settle disputes. The officials said their company was concerned about the fairness of having Venezuelan arbitrators settle disputes between U.S. companies and PDVSA or the Venezuelan government. International arbitration was required under the company’s old agreements, and the current U.S. model bilateral investment treaty provides for it. Some U.S. oil company officials also told us that some companies are considering incorporation in other countries that have bilateral investment treaties with Venezuela, such as the United Kingdom and the Netherlands, because the treaties would help protect their investments. Similarly, some oil experts also told us companies from countries with bilateral investment treaties have assurances that they can repatriate profits if Venezuela seizes control of their operations. In 1994, DOE and the Venezuelan Ministry of Energy and Petroleum became the principal coordinators of what was known as the Hemispheric Energy Initiative. The goal of this activity was to stimulate dialogue and cooperation on energy issues among countries in the Western Hemisphere and identify and promote actions to foster regional interconnections through the development of energy sector projects in the hemisphere. As the coordinators, DOE and Venezuela’s Ministry of Energy and Petroleum organized a series of hemisphere-wide summit meetings to discuss energy cooperation beginning in 1995. 
For example, at the third hemispheric meeting in Caracas, Venezuela, in January 1998, officials from the 26 countries in attendance agreed to promote policies that facilitate trade in the energy sector, facilitate the development of energy infrastructure, develop regulatory frameworks that are transparent and predictable, and promote foreign private investment in the sector throughout the hemisphere. DOE officials told us that this initiative ended with the meeting in Mexico in 2002, but that, in 2004, Trinidad offered to host a meeting of hemispheric energy ministers in a less formal setting to discuss energy security. The meeting, which was held in Trinidad and Tobago in April 2004, was organized by DOE and Trinidad, without Venezuela playing a significant role organizationally. The meeting focused on hemispheric energy security and included high-ranking energy officials from 35 countries, including the United States, Canada, Mexico, and Venezuela, as well as other key energy-producing countries from Central and South America. DOE officials told us that, during the meeting, Venezuela’s Minister of Energy and Petroleum met with DOE’s Secretary and agreed that it was very important not to politicize the oil trade between the United States and Venezuela and that both countries recognized the importance of that trade. According to DOE officials, no action has taken place since the meeting in Trinidad and Tobago. According to Department of State and other U.S. government officials, the United States has had historically strong ties to Venezuela with respect to oil issues, and the dialogue between the two countries in the past was robust. But the relationship between the two countries with respect to energy issues has changed in recent years—some energy related activities previously used to foster energy security have been discontinued. 
For example, DOE officials told us that 3 years have elapsed since the last formal discussion between DOE and the Venezuelan Ministry of Energy and Petroleum regarding energy security. Also, officials in the Commerce Department and in the Office of the U.S. Trade Representative reported there is no current engagement between them and their counterparts in Venezuela regarding energy security. Officials in Department of State headquarters said that they have worked hard for years to build a productive energy relationship with Venezuela by participating in frequent consultations with Venezuelan energy officials, meeting most recently in March 2006. DOE officials also said they have maintained open dialogue with Venezuelan energy officials. Most U.S. oil companies have not relied on assistance from the U.S. government to help with issues in Venezuela in recent years although, according to DOE officials, DOE stays in contact with companies regarding the situation in Venezuela, and senior DOE officials frequently report on the status of U.S. energy investment and overall energy production in Venezuela at senior-level meetings of the U.S. government. The U.S. Ambassador to Venezuela told us he does not have good access to Venezuelan government officials and, correspondingly, it is difficult to help U.S. companies doing business in Venezuela obtain access to Venezuelan officials. Officials in the Departments of Commerce and State, and in the Office of the U.S. Trade Representative, told us companies that might otherwise seek their assistance in negotiating with foreign governments do not do so in Venezuela because the companies do not believe that federal agency intervention would be helpful. For example, an official from the Department of Commerce said that U.S. government involvement would be extremely harmful to the relationship between U.S. companies and their business interests in Venezuela. Officials in several U.S. 
oil companies told us that the poor bilateral relationship between the United States and Venezuela makes it difficult for them to operate and compete for new investment contracts in Venezuela. Key activities and programs that the U.S. government has used to mitigate the impacts of short-term oil supply disruptions include diplomacy, whereby U.S. government officials negotiate with senior officials in oil-producing countries to increase their supply of crude oil in case of a disruption; using oil in the U.S. Strategic Petroleum Reserve; and coordinating with the International Energy Agency, whose members hold stocks equal to 90 days or more of their net imports to address supply disruptions. Officials in the Department of State and DOE, as the lead agencies in crafting U.S. energy security policy, consult with each other, with other U.S. government agencies (as appropriate), and with U.S. companies doing business in foreign countries to identify potential oil disruptions and craft responses to the disruptions, if necessary. U.S. government agencies used diplomacy to mitigate the impact of the oil disruption resulting from the Venezuelan strike. Anticipating a potential oil supply problem in Venezuela, representatives from key DOE offices began coordinating with the Department of State months before the strike to produce a plan to bring together data and information about possible supply problems and to produce an appropriate response to the potential disruption. The overall effort was headed by the National Security Council and top U.S. government administration officials, with Department of State and DOE officials acting as subject experts. After the strike began, the Department of State and DOE used diplomacy to encourage increases in OPEC member and other countries’ crude oil production by 1.3 million barrels per day. 
Also, DOE officials responsible for coordinating oil supply disruption responses with the International Energy Agency said they upgraded their day-to-day contact with emergency response officials at the agency, focusing on the strike’s potential impacts and assessing possible mitigation measures. According to an EIA study, most of the replacement oil came from Mexico and the Middle East, especially Iraq. Notwithstanding this success, most oil industry officials and experts, as well as U.S. government officials, said that using diplomacy to obtain additional oil likely would be less effective today because there is less surplus oil production capacity now than there was during the Venezuelan strike. During the Venezuelan strike, as much as 5.6 million barrels per day of spare oil production capacity was available from several regions, including Mexico, West Africa, and the Middle East. Now, experts say that the total world spare production capacity is only about 1 million barrels per day, and most of it is in Saudi Arabia. If the oil balance continues to tighten and surplus production capacity shrinks, increasing production in response to disruptions will be more difficult, if not impossible. Aside from using diplomacy, another tool for mitigating supply disruptions is the use of oil reserves. The U.S. government can use the U.S. Strategic Petroleum Reserve to increase the supply of crude oil available to U.S. refineries in three ways: selling oil from the reserve, exchanging oil from the reserve (whereby the oil is replaced at a specified date in the future), and allowing oil companies to delay delivering oil to the reserve. Federal law requires that the drawdown and sale of oil from the Strategic Petroleum Reserve be authorized by the President. However, DOE can authorize an exchange of oil from, or a delay in delivery of oil to, the reserve. 
While no set criteria exist for triggering the release of oil from the reserve in the case of a supply disruption, U.S. agency officials told us that, during any disruption, the Department of State and DOE provide analytical and technical advice through the National Security Council to help the President evaluate his options. U.S. policymakers believe that providing oil during a supply disruption is the most efficient mechanism to counteract the impacts of the disruption. The United States currently maintains about 700 million barrels of crude oil in the U.S. Strategic Petroleum Reserve. At a release rate of 1.5 million barrels a day—the amount of crude oil exported by Venezuela to the United States—the reserve holds enough oil to replace lost Venezuelan exports for over 450 days. During the Venezuelan oil strike, oil was not withdrawn from the U.S. Strategic Petroleum Reserve, mostly because other oil-producing countries increased production by 1.3 million barrels a day. However, the U.S. government allowed U.S. oil companies to delay delivering oil that they were committed to deliver to the U.S. Strategic Petroleum Reserve, which added about 18 million barrels to the U.S. oil supply available to refineries—an amount equivalent to almost 1 day of U.S. oil consumption, or almost 2 weeks of Venezuelan oil exports to the United States. In addition to using the U.S. Strategic Petroleum Reserve to mitigate the impact of a supply disruption, the United States could also benefit if the strategic reserves of International Energy Agency member countries were released. Each International Energy Agency member country is required to hold stocks equal to 90 days or more of its net imports. Presently, International Energy Agency countries hold about 4.1 billion barrels of oil stocks. 
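The Strategic Petroleum Reserve figures above follow from simple arithmetic. The sketch below is illustrative only; the roughly 20 million barrels per day of U.S. oil consumption it assumes is a mid-2000s figure not stated in this report.

```python
# Back-of-envelope check of the Strategic Petroleum Reserve figures cited above.
# US_CONSUMPTION_BPD is an assumed figure (~mid-2000s level), not from this report.

SPR_BARRELS = 700_000_000          # barrels of crude oil held in the reserve
VENEZUELA_EXPORTS_BPD = 1_500_000  # barrels/day Venezuela exports to the United States
DELAYED_DELIVERIES = 18_000_000    # barrels of deferred deliveries to the reserve
US_CONSUMPTION_BPD = 20_000_000    # assumed U.S. daily oil consumption

days_of_cover = SPR_BARRELS / VENEZUELA_EXPORTS_BPD          # "over 450 days"
days_of_us_consumption = DELAYED_DELIVERIES / US_CONSUMPTION_BPD   # "almost 1 day"
days_of_venezuelan_exports = DELAYED_DELIVERIES / VENEZUELA_EXPORTS_BPD  # "almost 2 weeks"

print(round(days_of_cover))               # 467
print(round(days_of_us_consumption, 1))   # 0.9
print(round(days_of_venezuelan_exports))  # 12
```

The computed 467 days, 0.9 day, and 12 days are consistent with the report's rounded "over 450 days," "almost 1 day," and "almost 2 weeks."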
According to a DOE official, the three countries with the largest government-controlled reserves—the United States, Germany, and Japan—are able to release about 8 million barrels a day at the onset of a disruption. This quantity is equal to about 10 percent of total world oil demand. The International Energy Agency also requires member countries to release stocks, restrain demand, and share available oil, if necessary, in the event of a major oil supply disruption. While there are no criteria for triggering the release of oil from the member countries’ reserves, the International Energy Agency has specified arrangements for the coordinated use of a drawdown, the restraint of demand, and other measures that member countries could implement in case of a disruption. Also, International Energy Agency officials say that a disruption of 7 percent or more of world supply is a de facto trigger. During the Venezuelan strike, the Department of State and DOE maintained steady diplomatic contact with members of the International Energy Agency to discuss the evolving situation and to share concerns in case a drawdown of member reserves was deemed necessary. A later International Energy Agency analysis of the Venezuelan disruption concluded that, although International Energy Agency member-country stocks were not used during the Venezuelan disruption, the presence of the International Energy Agency stocks played an important role in reassuring the market. Furthermore, the availability of government stocks muted speculation on the markets, according to an International Energy Agency analysis of the disruption. Although the U.S. government has options to mitigate impacts of short-term oil disruptions on crude oil and petroleum products prices, these mitigating actions are not designed to address a long-term loss of Venezuelan oil from the world market. 
If Venezuela fails to maintain or expand its current level of production, the world oil market may become even tighter than it is now, putting further pressure on both the level and volatility of energy prices. In this context, the United States faces challenges in the coming years that may require hard choices regarding energy sources, foreign relations and energy-related diplomacy, and the amount of energy Americans use. Officials in the four U.S. government agencies we contacted said they do not have contingency plans to deal with oil losses specifically from Venezuela or any other single country. Officials at the lead agencies for energy security, the Department of State and DOE, said they do not have specific plans because the available mechanisms to mitigate the impacts of an oil disruption—diplomacy to persuade oil-producing countries to increase production and using oil from the U.S. Strategic Petroleum Reserve—are adequate to deal with disruptions from any source. DOE officials said the department conducts scenario analyses of its vulnerability to disruptions from certain countries and relies on these options to deal with disruptions, and that these options have proven adequate. Officials in most oil companies we contacted also said they do not have plans to deal specifically with a disruption of Venezuelan oil because, as with any oil disruption, they would replace the lost oil with oil from other sources. The officials said that oil is a fungible commodity and typically available on the spot market. During the Venezuelan strike, for example, U.S. refiners replaced Venezuelan crude oil with crude oil from other sources, including Mexico, Brazil, Russia, Ecuador, and the Middle East. We provided the Departments of State and Commerce, DOE, and the Office of the U.S. Trade Representative with a draft of this report for their review and comment. The Department of State and the Office of the U.S. 
Trade Representative told us that they generally agreed with the findings of the report but did not provide written comments. DOE and the Department of Commerce provided written comments. The Department of Commerce agreed with the report’s overall findings; Commerce’s letter is reproduced in appendix II. DOE neither agreed nor disagreed with the report’s overall findings, noting that the United States has had a long and mutually beneficial relationship with Venezuela and that our report makes valuable points regarding the challenges facing Venezuelan crude oil production. However, DOE raised two issues that it contends provide an “alarmist view” of U.S. energy security. DOE’s concerns and our response to them are summarized below; DOE’s letter is reproduced in appendix III. All four agencies also provided technical comments, which we incorporated as appropriate. DOE’s first concern is that a $23 billion loss to U.S. GDP, which we reported and attributed to a model developed for DOE by a contractor, is misleading and will be taken out of context because the prediction does not take into account mitigating factors that could influence the impact of an oil disruption on U.S. GDP. Specifically, DOE said that the prediction does not take into account worldwide response to an oil supply disruption, the availability of Arab Heavy oil to replace lost Venezuelan heavy oil, and the ability to use the U.S. Strategic Petroleum Reserve and worldwide stocks to mitigate the impact of a disruption. We disagree that our reporting of the model results is misleading or out of context and believe all the mitigating factors raised by DOE have been addressed in our report. Contrary to DOE’s assertion, the model that predicted the $23 billion loss incorporates the worldwide response and availability of replacement oil from surplus production capacity, such as Arab Heavy oil. 
However, as our report notes, because there is much less surplus capacity available today than there was in winter 2002–2003 when a similar disruption occurred as a result of the Venezuelan strike, relying on surplus capacity would not be as effective as it was at that time. Also, our report discusses in detail the options the U.S. government has to mitigate the impacts of an oil disruption, including using strategic petroleum reserves, either unilaterally or in concert with other countries. DOE also states that the report does not contain an analysis of the impact of a Venezuelan oil supply disruption on that country’s economy. We disagree with this assertion. Our report discusses the severe impact a Venezuelan oil disruption would have on that country’s economy—the Venezuelan national oil company is the country’s largest employer, and accounts for a third of Venezuela’s GDP, four-fifths of its export revenue, and half of government revenue—and notes that Venezuela would likely take steps to correct any such disruption as soon as possible to avoid that impact. DOE’s second concern is that, by focusing on the discontinuation of bilateral programs with Venezuela, our report leads the reader to believe that such programs could guarantee U.S. energy security. We disagree; nowhere in the report do we imply that such programs with Venezuela could guarantee the United States’ energy security. On the contrary, we point out that instability in Venezuela’s oil sector exists in a broader context of a tightening global oil supply and demand balance and that instability in any significant individual oil-producing country can have a significant impact on U.S. and world energy security. 
Further, we report that a number of factors create energy security concerns, including a reduction in global surplus oil production capacity in recent years, the fact that much of the world’s supply of oil is in relatively unstable regions, and rapid growth in world oil demand that has led to a tight balance between demand and supply. DOE also states that our report does not address the comprehensive actions the United States is taking domestically and internationally to ensure energy security. While a comprehensive assessment of U.S. energy security was beyond the scope of this report, our report nonetheless notes that the United States has long had a number of programs and activities designed to ensure energy security. For example, for those initiatives identified as within the scope of our report, we listed the 21 other countries with which the U.S. government has negotiated bilateral technology and information exchange agreements. Overall, we disagree that our report, as written, presents an “alarmist view” of U.S. energy security. We point out that oil supply disruptions can have adverse economic impacts but that the U.S. government has options to mitigate such impacts. However, we also point out that these mitigating options are designed only for short-term disruptions, and there remain potential long-term concerns with regard to Venezuelan oil supply in the event that Venezuelan oil production continues to fall. We are sending copies of this report to interested congressional committees, the Secretary of Energy, the Secretary of State, the United States Trade Representative, and the Secretary of Commerce. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-3841 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix IV. The Chairman of the Senate Committee on Foreign Relations asked us to answer the following questions: (1) How have Venezuela’s production of crude oil and exports of crude oil and refined petroleum products to the United States changed in recent years, and what are the future prospects? (2) What are the potential impacts of a reduction in Venezuelan oil exports, a Venezuelan embargo on oil exports to the United States, or sudden closure of Venezuela’s refineries in the United States? (3) What is the status of U.S. government programs and activities to ensure a reliable supply of oil from Venezuela and to mitigate the impacts of a supply disruption? We used a number of methodological techniques to address these issues. To address the first objective, we reviewed studies and analyses of the Venezuelan oil sector and its history. We met with officials from 10 U.S. and multinational oil companies, eight refiners, and two service companies; industry experts from the International Energy Agency, the Center for Strategic and International Studies, the National Petrochemical and Refiners Association, an international energy consulting firm, and other institutions; and officials from the Department of Energy (DOE), Department of State, Department of Commerce, the Office of the United States Trade Representative, the U.S. Geological Survey, and various other U.S. government agencies. In addition, we visited Caracas, Venezuela, and met with the U.S. Ambassador and embassy staff; Venezuela’s Minister of Energy and Petroleum; Petroleos de Venezuela S.A. (PDVSA) officials, including the president, the vice president of production, and a number of PDVSA board members and senior managers; the Venezuelan Auditor General; members of the financial community; and other individuals with expertise in the oil sector of Venezuela. 
We met with operations officials at various oil exploration, production, and refining centers in the Maracaibo and Faja regions of Venezuela. Both in the United States and in Venezuela, we spoke with numerous former PDVSA employees, executives, and directors, and oil company officials. We also collected, evaluated the reliability of, and analyzed data on Venezuelan production, consumption, and exports of oil and petroleum products. The sources of our data include U.S. government agencies, especially the Energy Information Administration (EIA); the International Energy Agency; the Venezuelan government and PDVSA; and other governmental and private sources. We deemed these data to be reliable for the purposes of addressing our objectives. Regarding Venezuela’s plans for future production, we analyzed plans and data provided by the Ministry of Energy and Petroleum and PDVSA officials. We also discussed the feasibility of Venezuela implementing its plans with Department of State and DOE officials, as well as with numerous oil company officials and industry experts. To address the second objective, we reviewed several studies of the impacts of oil disruptions, including the impact of the Venezuelan strike in the winter of 2002–2003. We also analyzed current conditions in the world oil market to evaluate what might occur if a similar disruption occurred today. We also evaluated the potential impacts of—(1) a sudden and severe drop in Venezuelan oil exports from the world market, (2) a sudden diversion of oil from the United States to other markets through an embargo, and (3) the closure by Venezuela of its wholly-owned U.S.-based refineries. Specifically, we asked a DOE contractor at the Oak Ridge National Laboratory to use an economic oil-disruption model to analyze the impacts of a hypothetical Venezuelan oil disruption on world oil prices and on the U.S. gross domestic product (GDP). 
For this analysis, we constructed a hypothetical disruption scenario similar to the one that actually occurred during the Venezuelan oil strike in the winter of 2002–2003, but using assumptions regarding market and economic conditions closer to those that prevailed at the time of the analysis (late 2005). We also conducted our own analysis of the same scenario using EIA’s oil disruption rules of thumb that predict how oil prices and the U.S. GDP respond to disruptions in world oil supplies. For the analyses of the potential impacts of a Venezuelan embargo against the United States, we relied largely on EIA analyses. For the impacts of Venezuela’s sale or closure of its CITGO refineries in the United States, we analyzed the response of gasoline prices to the major loss of refinery capacity that accompanied Hurricanes Katrina and Rita in 2005. In addition, we discussed the impact of potential Venezuelan oil disruptions with numerous industry experts in Venezuela and in the United States; officials in the Departments of State and Commerce, and DOE; and International Energy Agency officials. To address the third objective, we met with officials at various U.S. government agencies, including the Departments of State and Commerce, DOE, and the Office of the U.S. Trade Representative, to identify the status of programs and activities to ensure a continued supply of oil and to mitigate a disruption of imports of crude oil and refined petroleum products from Venezuela, as well as to determine whether the agencies have Venezuelan-specific contingency plans. We also met with officials of 10 U.S. and multinational oil companies, eight refiners, and two service companies; industry experts from the International Energy Agency, the Center for Strategic and International Studies, the National Petrochemical and Refiners Association; Purvin and Gertz; and other institutions. 
In addition, we obtained information on Venezuelan decrees and legislation governing foreign investment in the petroleum industry. We reviewed our previous work on U.S. energy security, especially our 1991 study, “Venezuelan Energy: Oil Production and Conditions Affecting Potential Future U.S. Investment.” Because the Department of State advised us that visits to port facilities might be considered too sensitive by the Venezuelan government, given that government’s apprehension about the U.S. government, we did not assess port or other facilities for vulnerability to sabotage or attack. However, the Coast Guard, as part of its port security responsibilities, identifies countries that are not maintaining effective antiterrorism measures. According to Coast Guard officials, Venezuela has not been identified as such a country. This report focuses on federal programs and activities related to U.S. energy security. Diplomatic and political actions that may impact U.S. energy security may be undertaken for a multitude of foreign policy goals that are beyond the scope of this report. Therefore, our evaluation of programs and activities related to energy security is in no way intended to evaluate the U.S. government’s approach to these broader goals. Department of State officials reviewed a draft of our report to ensure we did not include information in our report that could influence diplomatic relations. To obtain the official Venezuelan government position on questions relating to all three objectives, we made arrangements with the Venezuelan Embassy in Washington, D.C., for an official spokesperson. Generally, we submitted questions to the spokesperson, who then asked for answers and explanations from the appropriate officials in Venezuela and provided the answers to us, usually in writing. In addition, the spokesperson made several presentations to provide information on Venezuela’s oil sector. We did not verify the information provided by the spokesperson. 
In addition, we did not independently review Venezuelan laws and decrees, and relied on secondary sources such as interviews. We performed our work from March 2005 through May 2006 in accordance with generally accepted government auditing standards. In addition to the individual named above, Philip Farah, Byron S. Galloway, Carol Kolarik, Michelle Munn, Cynthia Norris, Melissa Arzaga Roye, Frank Rusco, and Barbara Timmerman made key contributions to this report.
Venezuela is the world's eighth-largest oil exporter and among the top 10 countries in total proven oil reserves. Venezuela also supplies about 11 percent of current U.S. imports of crude oil and petroleum products and wholly owns five refineries in the U.S. Consequently, Venezuela is a key player in the future energy security of the United States and the world. The current global oil market is tight and may be more susceptible to short-term supply disruptions and higher and more volatile prices. Recently, tension between Venezuela and the United States has caused concern about the stability of Venezuelan oil supplies. On several occasions, Venezuela's President has threatened to stop exporting oil to the U.S. or to close Venezuela's U.S.-based refineries. In this context, GAO analyzed: (1) how Venezuela's crude oil production and exports of crude oil to the U.S. have changed in recent years, (2) the potential impacts of a reduction in Venezuelan oil exports to the U.S., and (3) the status of U.S. government programs and activities to ensure a reliable supply of oil from Venezuela. Commenting on a draft of the report, the State and Commerce Departments generally agreed with the report, but DOE contended that the report presents an "alarmist view" of U.S. energy security. We disagree and believe the report presents a contextually balanced treatment of the issue. Venezuelan oil production has fallen since 2001, but exports of crude oil and petroleum products to the United States have been relatively stable--except during a 2-month strike in the winter of 2002-2003, during which the oil sector was virtually shut down and exports to the United States fell by about 1.2 million barrels per day. Energy Information Administration data show that total Venezuelan oil production in 2001 averaged about 3.1 million barrels per day, but by 2005 had fallen to about 2.6 million barrels per day. 
Following the strike, Venezuela's President ordered the firing of up to 40 percent of Venezuela's national oil company employees. U.S. and international oil industry experts told us that the resulting loss of expertise contributed to the decline in oil production. In 2005, the Venezuelan government announced plans to expand its oil production significantly by 2012, but oil industry experts doubt the plan can be implemented because Venezuela has not negotiated needed deals with foreign oil companies as called for in the plan. A model developed for the Department of Energy estimates that a 6-month disruption of crude oil with a temporary loss of up to 2.2 million barrels per day--about the size of the loss during the Venezuelan strike--would, all else remaining equal, result in a significant increase in crude oil prices and lead to a reduction of up to $23 billion in U.S. gross domestic product. A Venezuelan oil embargo against the United States would increase consumer prices for petroleum products in the short-term because U.S. oil refiners would experience higher costs getting replacement supplies. A shutdown of Venezuela's wholly-owned U.S. refineries would increase petroleum product prices until closed refineries were reopened or new sources were brought on line. These disruptions would also seriously hurt the heavily oil-dependent Venezuelan economy. U.S. government programs and activities to ensure a reliable supply of oil from Venezuela have been discontinued, but the U.S. government has options to mitigate short-term oil disruptions. For example, activities under a U.S.-Venezuela oil technology and information exchange agreement were stopped in 2003, in part, as a result of diplomatic decisions. In recent years, U.S. oil companies have not sought assistance from the U.S. government with issues in Venezuela because the companies do not believe that federal agency intervention would be helpful at this time. 
To mitigate short-term oil supply disruptions, the U.S. government could attempt to get oil-producing nations to increase their production to the extent possible, or could release oil from the U.S. Strategic Petroleum Reserve. While these options can mitigate short-term oil supply disruptions, long-term reductions in Venezuela's oil production and exports are a concern for U.S. energy security, especially in light of current tight supply and demand conditions in the world oil market. If Venezuela fails to maintain or expand its current level of production, the world oil market may become even tighter than it is now, putting further pressure on both the level and volatility of energy prices.
Our 2012 annual report identified 51 areas where unnecessary duplication, overlap, or fragmentation exists, as well as additional opportunities for potential cost savings or enhanced revenues. We identified about 130 specific actions that Congress or the executive branch could take to address these areas. We identified 32 areas where government missions are fragmented across multiple agencies or programs; where agencies, offices, or initiatives have similar or overlapping objectives, provide similar services to similar populations, or target similar users; or where two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries (see table 1). We found instances where multiple government programs or activities have led to inefficiencies, and we determined that greater efficiencies or effectiveness might be achievable. Overlap and fragmentation among government programs or activities can be harbingers of unnecessary duplication. In many cases, the existence of unnecessary duplication, overlap, or fragmentation can be difficult to determine with precision due to a lack of data on programs and activities. Where information has not been available that would provide conclusive evidence of duplication, overlap, or fragmentation, we often refer to “potential duplication” and, where appropriate, we suggest actions that agencies or Congress could take either to reduce that potential or to make programmatic data more reliable or transparent. In some instances of duplication, overlap, or fragmentation, it may be appropriate for multiple agencies or entities to be involved in the same programmatic or policy area due to the nature or magnitude of the federal effort. Among the 32 areas highlighted in our 2012 annual report are the following examples of opportunities for agencies or Congress to consider taking action to reduce unnecessary duplication, overlap, or fragmentation. 
Unmanned Aircraft Systems: The Department of Defense (DOD) estimates that the cost of current Unmanned Aircraft Systems (UAS) acquisition programs and related systems will exceed $37.5 billion in fiscal years 2012 through 2016. The elements of DOD’s planned UAS portfolio include unmanned aircraft, payloads (subsystems and equipment on a UAS configured to accomplish specific missions), and ground control stations (equipment used to handle multiple mission aspects such as system command and control). We have found that ineffective acquisition practices and collaboration efforts in DOD’s UAS portfolio create overlap and the potential for duplication among a number of current programs and systems. We have also highlighted the need for DOD to consider commonality in UAS—using the same or interchangeable subsystems and components in more than one system to improve interoperability of systems—to reduce the likelihood of redundancies in UAS capabilities. Military service-driven requirements—rather than an effective departmentwide strategy—have led to overlap in DOD’s UAS capabilities, resulting in many programs and systems being pursued that have similar flight characteristics and mission requirements. Illustrative of the overlap, the Department of the Navy (Navy) plans to spend more than $3 billion to develop the Broad Area Maritime Surveillance UAS, rather than buying more of the already fielded Air Force Global Hawk system, on which it was based. According to the Navy, its unique requirements necessitate modifications to the Global Hawk airframe, payload interfaces, and ground control station. However, the Navy program office was not able to provide quantitative analysis to justify the variant. According to program officials, no analysis was conducted to determine the cost effectiveness of developing the Broad Area Maritime Surveillance UAS to meet the Navy’s requirements versus buying more Global Hawks. 
The potential for overlap also exists among UAS subsystems and components, such as sensor payloads and ground control stations. DOD expects to spend about $9 billion to buy 42 UAS sensor payloads through fiscal year 2016. While the fact that some multiservice payloads are being developed shows the potential for collaboration, the service-centric requirements process still creates the potential for overlap, including among the 29 sensors in our review. Further, we identified overlap and potential duplication among 10 of 13 ground control stations that DOD plans to acquire at a cost of about $3 billion through fiscal year 2016. According to a cognizant DOD official, the associated software is about 90 percent duplicative because similar software is developed for each ground control station. DOD has created a UAS control segment working group, which is chartered to increase interoperability and enable software reuse and open systems. This could allow for greater efficiency, less redundancy, and lower costs, while potentially reducing levels of contractor proprietary data that cannot be shared across UAS programs. However, existing ground control stations already have their own architectures, and migration to a new service-oriented architecture will not happen until at least 2015, almost 6 years after the effort began. DOD plans to significantly expand the UAS portfolio through 2040, including five new systems in the planning stages that are expected to become formal programs in the near future. While DOD has acknowledged that many UAS systems were acquired inefficiently and has begun to take steps to improve outcomes as it expands these capabilities over the next several years, the department faces challenges in its ability to improve efficiency and reduce the potential for overlap and duplication as it buys UAS capabilities. 
For example, the Army and Navy are planning to spend approximately $1.6 billion to acquire separate systems that are likely to have similar capabilities to meet upcoming cargo and surveillance requirements. DOD officials state that current requirements do not preclude a joint program to meet these needs, but the Army and Navy have not yet determined whether such an approach will be used. To reduce the likelihood of overlap and potential duplication in its UAS portfolio, we have made several prior recommendations to DOD, which have not been fully implemented. While DOD generally agreed with our recommendations, the overlap in current UAS programs, as well as the continued potential in future programs, shows that DOD must still do more to implement them. In particular, we have recommended that DOD (1) re-evaluate whether a single entity would be better positioned to integrate all crosscutting efforts to improve the management and operation of UAS; (2) consider an objective, independent examination of current UAS portfolio requirements and the methods for acquiring future unmanned aircraft; and (3) direct the military services to identify specific areas where commonality can be achieved. We believe the potential for savings is significant and, with DOD’s renewed commitment to UAS for meeting new strategic requirements, all the more imperative. Housing assistance: In fiscal year 2010, the federal government incurred about $170 billion in obligations for housing-related programs and in estimated forgone revenue from housing-related tax expenditures, of which tax expenditures represented $132 billion (about 78 percent). Support for homeownership in the current economic climate has expanded dramatically, with nearly all mortgage originations having direct or indirect federal assistance. 
The Department of the Treasury (Treasury) and the Board of Governors of the Federal Reserve System together invested more than $1.67 trillion in Fannie Mae and Freddie Mac, the government-sponsored enterprises, which issue and guarantee mortgage-backed securities. Examining the benefits and costs of housing programs and tax expenditures that address the same or similar populations or areas, and potentially consolidating them, could help mitigate overlap and fragmentation and decrease costs. We identified 20 different entities that administer 160 programs, tax expenditures, and other tools that supported homeownership and rental housing in fiscal year 2010. In addition, we identified 39 programs, tax expenditures, and other tools that provide assistance for buying, selling, or financing a home and eight programs and tax expenditures that provide assistance to rental property owners. We found overlap in products offered and markets served by the Department of Agriculture’s (USDA) Rural Housing Service (RHS) and the Department of Housing and Urban Development’s (HUD) Federal Housing Administration (FHA), among others. In September 2000 and again as part of our ongoing work, we questioned the need for maintaining separate programs for rural areas. In September 2000, we recommended that Congress consider requiring USDA and HUD to examine the benefits and costs of merging programs, such as USDA’s and HUD’s single-family guaranteed loan and multifamily portfolio management programs. While USDA and HUD have raised concerns about merging programs, our recent work has shown increased evidence of overlap and that some RHS and FHA programs can be consolidated. For example, the two agencies overlap in products offered (mortgage credit and rental assistance), functions performed (portfolio management and preservation), and geographic areas served. 
Specifically, RHS and HUD guarantee single-family and multifamily loans, as well as offer rental subsidies using similar income eligibility criteria. And, both agencies have been working to maintain and preserve existing multifamily portfolios. Although RHS may offer its products only in rural areas, it is not always the insurer of choice in those areas. For example, in fiscal year 2009 FHA insured over eight times as many single-family loans in economically distressed rural counties as RHS guaranteed. And, many RHS loan guarantees financed properties near urban areas—56 percent of single-family guarantees made in fiscal year 2009 were in metropolitan counties. Regarding consolidation, we found that RHS relies on more in-house staff to oversee its single-family and multifamily loan portfolio of about $93 billion than HUD relies on to manage its single-family and multifamily loan portfolio of more than $1 trillion, largely because of differences in how the programs are administered. RHS has a decentralized structure of about 500 field offices that was set up to interact directly with borrowers. RHS relies on over 1,600 full-time equivalent staff to process and service its direct single-family loans and grants. While RHS limits its direct loans to low-income households and its guaranteed loans to moderate-income households, FHA has no income limits and does not offer a comparable direct loan program. HUD operates about 80 field offices and primarily interacts through lenders, nonprofits, and other intermediaries. RHS and FHA programs both utilize FHA-approved lenders and underwriting processes based on FHA's scorecard—an automated tool that evaluates new mortgage loans. RHS has about 530 full-time equivalent staff to process its single-family guaranteed loans. FHA relies on lenders to process its loans. 
Although FHA insures far more mortgages than RHS guarantees, FHA has just over 1,000 full-time equivalent staff to oversee lenders and appraisers and contractors that manage foreclosed properties. While the number of RHS field offices decreased by about 40 percent since 2000, its decentralized field structure continues to reflect the era in which it was established—the 1930s, when geography and technology greatly limited communication and transportation. These limitations have diminished, and HUD programs can be used in all areas of the country. We first recommended in September 2000—and have followed up since then—that Congress consider requiring USDA and HUD to examine the benefits and costs of merging those programs that serve similar markets and provide similar products, and require these same agencies to explore merging their single-family insured lending and multifamily portfolio management programs. At that time, USDA stated that some of the suggestions made in our report to improve the effectiveness of current programs may better serve rural areas. However, USDA also stated that the gap in housing affordability between rural and urban areas, as well as the importance of rural housing programs to the Department's broader Rural Development mission area, would make merging RHS's programs with HUD's programs unfeasible and detrimental to rural America. HUD also stated that it believes any opportunity to improve the delivery of rural housing services should be explored, but stated that the differences between RHS's and FHA's single-family programs are sizable and that without legislative changes to product terms, efforts to merge the programs would likely result in a more cumbersome rather than a more efficient delivery system. HUD added that it had been working with USDA in a mutual exchange of information on best practices and would explore possible avenues of coordination. 
The agencies have been working to align certain requirements of the various multifamily housing programs. In addition, in February 2011, the Administration reported to Congress that it would establish a task force to evaluate the potential for coordinating or consolidating the housing loan programs of HUD, USDA, and the Department of Veterans Affairs (VA). According to HUD, a benchmarking effort associated with the task force was recently begun. Our ongoing work considers options for consolidating these programs, and we expect to make additional related recommendations. Furthermore, Treasury and the Internal Revenue Service (IRS) provide numerous types of housing assistance through tax expenditures. Although often necessary to meet federal priorities, some tax expenditures can contribute to mission fragmentation and program overlap that, in turn, can create service gaps, additional costs, and the potential for duplication. For example, to qualify for a historic preservation tax credit, rehabilitation must preserve historic character, which may conflict with states' efforts to produce energy-efficient, low-income properties with tax credits, and could increase project costs. We reported in September 2005, and reiterated in March 2011, that coordinated reviews of tax expenditures with related spending programs could help policymakers reduce overlap and inconsistencies and direct scarce resources to the most effective or least costly methods to deliver federal support. Specifically, we recommended that the Director of OMB, in consultation with the Secretary of the Treasury, develop and implement a framework for conducting performance reviews of tax expenditures. OMB, citing methodological and conceptual issues, disagreed with our 2005 recommendations. To date, OMB has not used its budget and performance review processes to systematically review tax expenditures and promote integrated reviews of related tax and spending programs. 
However, in its fiscal year 2012 budget guidance, OMB instructed agencies, where appropriate, to analyze how to better integrate tax and spending policies with similar objectives and goals. The GPRA Modernization Act of 2010 also envisions such an approach for selected cross-cutting areas. Such an analysis could help identify redundancies. Military and veterans health care: We found that DOD and VA need to improve integration across care coordination and case management programs to reduce duplication and better assist servicemembers, veterans, and their families. DOD and VA have care coordination and case management programs that are intended to provide continuity of care for wounded, ill, and injured servicemembers and veterans. DOD and VA established the Wounded, Ill, and Injured Senior Oversight Committee (Senior Oversight Committee) to address identified problems in providing care to wounded, ill, and injured servicemembers as well as veterans. Under the purview of this committee, the departments developed the Federal Recovery Coordination Program (FRCP), a joint program administered by VA that was designed to coordinate clinical and nonclinical services for "severely" wounded, ill, and injured servicemembers—who are most likely to be medically separated from the military—across DOD, VA, other federal agencies, states, and the private sector. Separately, the Recovery Coordination Program (RCP) was established in response to the National Defense Authorization Act for Fiscal Year 2008 to improve the care, management, and transition of recovering servicemembers. It is a DOD-specific program that was designed to provide nonclinical care coordination to "seriously" wounded, ill, and injured servicemembers, who, unlike those categorized as "severely" wounded, ill, or injured, may return to active duty. The RCP is implemented separately by each of the military services, most of which have implemented the RCP within their existing wounded warrior programs. 
As a result of these multiple efforts, many recovering servicemembers and veterans are enrolled in more than one care coordination or case management program, and they may have multiple care coordinators and case managers, potentially duplicating agencies’ efforts and reducing the effectiveness and efficiency of the assistance they provide. For example, recovering servicemembers and veterans who have a care coordinator also may be enrolled in one or more of the multiple DOD or VA programs that provide case management services to “seriously” and “severely” wounded, ill, and injured servicemembers, veterans, and their families. These programs include the military services’ wounded warrior programs and VA’s Operation Enduring Freedom/Operation Iraqi Freedom Care Management Program, among others. We found that inadequate information exchange and poor coordination between these programs have resulted in not only duplication of effort, but confusion and frustration for enrollees, particularly when case managers and care coordinators duplicate or contradict one another’s efforts. For example, an FRCP coordinator told us that in one instance there were five case managers working on the same life insurance issue for an individual. In another example, an FRCP coordinator and an RCP coordinator were not aware the other was involved in coordinating care for the same servicemember and had unknowingly established conflicting recovery goals for this individual. In this case, a servicemember with multiple amputations was advised by his FRCP coordinator to separate from the military in order to receive needed services from VA, whereas his RCP coordinator set a goal of remaining on active duty. These conflicting goals caused considerable confusion for this servicemember and his family. 
DOD and VA have been unsuccessful in jointly developing options for improved collaboration and potential integration of the FRCP and RCP care coordination programs, although they have made a number of attempts to do so. Despite the identification of various options, no final decisions to revamp, merge, or eliminate programs have been agreed upon. The need for better collaboration and integration extends beyond the FRCP and RCP to also encompass other DOD and VA case management programs, such as DOD's wounded warrior programs that also serve seriously and severely wounded, ill, and injured servicemembers and veterans. In October 2011, we recommended that the Secretaries of Defense and Veterans Affairs direct the co-chairs of the Senior Oversight Committee to expeditiously develop and implement a plan to strengthen functional integration across all DOD and VA care coordination and case management programs that serve recovering servicemembers, veterans, and their families, including—but not limited to—the FRCP and RCP. DOD and VA provided technical comments on the report, but neither specifically commented on our recommendation. We plan to track the extent to which progress has been made to address our recommendation. Information technology investment management: OMB reported that in fiscal year 2011, there were approximately 7,200 information technology (IT) investments totaling at least $79 billion. OMB provides guidance to agencies on how to report on their IT investments and requires agencies to identify each investment by a single functional category and sub-category. These categorizations are intended to enable OMB and others to analyze investments with similar functions, as well as identify and analyze potentially duplicative investments across agencies. We found that DOD and the Department of Energy (DOE) need to address potentially duplicative IT investments to avoid investing in unnecessary systems. 
In February 2012, we completed a review that examined the 3 largest categories of IT investments within DOD, DOE, and the Department of Homeland Security (DHS) and found that although the departments use various investment review processes to identify duplicative investments, 37 of our sample of 810 investments were potentially duplicative at DOD and DOE. These investments account for about $1.2 billion in IT spending for fiscal years 2007 through 2012 for these two agencies. We found that DOD and DOE had recently initiated specific plans to address potential duplication in many of the investments we identified—such as plans to consolidate or eliminate systems—but these initiatives had not yet led to the consolidation or elimination of duplicative investments or functionality. In addition, while we did not identify any potentially duplicative investments at DHS within our sample, DHS officials have independently identified several duplicative investments and systems. DHS has plans to further consolidate systems within these investments by 2014, which it expects to produce approximately $41 million in cost savings. DHS officials have also identified 38 additional systems that they have determined to be duplicative. Further complicating agencies’ ability to identify and eliminate duplicative investments is that investments are, in certain cases, misclassified by function. For example, one of DHS’s Federal Emergency Management Agency (FEMA) investments was initially categorized within the Employee Performance Management sub-function, but DHS agreed that this investment should be assigned to the Human Resources Development sub-function. Proper categorization is necessary in order to analyze and identify duplicative IT investments, both within and across agencies. 
In February 2012, we recommended that the Secretaries of DOD and DOE direct their Chief Information Officers to utilize existing transparency mechanisms to report on the results of their efforts to identify and eliminate, where appropriate, each potentially duplicative investment that we identified, as well as any other duplicative investments. The agencies agreed with our recommendation. We also recommended that DOD, DOE, and DHS correct the miscategorizations of the investments we identified and ensure that investments are correctly categorized in agency submissions, which would enhance the agencies’ ability to identify opportunities to consolidate or eliminate duplicative investments. DOD and DHS agreed with our recommendation, but DOE disagreed that two of the four investments we identified were miscategorized, explaining that its categorizations reflect funding considerations. However, OMB guidance indicates that investments should be classified according to their intended purpose. Consequently, we believe the recommendation is warranted. Department of Homeland Security grants: From fiscal years 2002 through 2011, FEMA, under DHS, allocated about $20.3 billion to grant recipients through four specific programs (the State Homeland Security Program, Urban Areas Security Initiative, Port Security Grant Program, and Transit Security Grant Program) to enhance the capacity of states, localities, and other entities, such as ports or transit agencies, to prevent, respond to, and recover from a terrorism incident. We found that DHS needs better project information and coordination to identify and mitigate potential unnecessary duplication among four overlapping grant programs. In February 2012, we identified multiple factors that contributed to the risk of FEMA potentially funding unnecessarily duplicative projects across these four grant programs. 
These factors include overlap among grant recipients, goals, and geographic locations, combined with differing levels of information that FEMA had available regarding grant projects and recipients. We also reported that FEMA lacked a process to coordinate application reviews across the four grant programs; grant applications were reviewed separately by program and were not compared with one another to determine where unnecessary duplication might occur. Specifically, FEMA's Homeland Security Grant Program branch administered the Urban Areas Security Initiative and State Homeland Security Program, while the Transportation Infrastructure Security branch administered the Port Security Grant Program and Transit Security Grant Program. We and the DHS Inspector General have concluded that coordinating the review of grant projects internally would give FEMA more complete information about applications across the four grant programs, which could help FEMA identify and mitigate the risk of unnecessary duplication across grant applications. We also identified actions FEMA could take to identify and mitigate any unnecessary duplication in these programs, such as collecting more complete project information as well as exploring opportunities to enhance FEMA's internal coordination and administration of the programs. We suggested that Congress may wish to consider requiring DHS to report on the results of its efforts to identify and prevent duplication within and across the four grant programs, and consider these results when making future funding decisions for these programs. Science, Technology, Engineering, and Math education programs: Federal agencies obligated $3.1 billion in fiscal year 2010 on Science, Technology, Engineering, and Mathematics (STEM) education programs. These programs can serve an important role both by helping to prepare students and teachers for careers in STEM fields and by enhancing the nation's global competitiveness. 
In addition to the federal effort, state and local governments, universities and colleges, and the private sector have also developed programs that provide opportunities for students to pursue STEM education and occupations. Recently, both Congress and the administration have called for a more strategic and effective approach to the federal government's investment in STEM education. For example, Congress directed the Office of Science and Technology Policy, within the Executive Office of the President, to establish a committee under its component National Science and Technology Council to, among other things, develop a 5-year governmentwide STEM education strategic plan and identify areas of duplication among federal programs. We found that strategic planning is needed to better manage overlapping programs across multiple agencies. In January 2012, we reported that 173 of the 209 (83 percent) STEM education programs administered by 13 federal agencies overlapped to some degree with at least 1 other program in that they offered similar services to target groups—such as K-12 students, postsecondary students, K-12 teachers, and college faculty and staff—to achieve similar objectives. This overlap stems in part from efforts across many agencies to both create and expand programs to improve STEM education and increase the number of students going into related fields. Overlapping programs can lead to individuals and institutions being eligible for similar services in similar STEM fields offered through multiple programs. For example, 177 of the 209 programs (85 percent) were primarily intended to serve two or more target groups. Overlap can frustrate federal officials' efforts to administer programs in a comprehensive manner, limit the ability of decision makers to determine which programs are most cost-effective, and ultimately increase program administrative costs. 
GAO, Science, Technology, Engineering, and Mathematics Education: Strategic Planning Needed to Better Manage Overlapping Programs across Multiple Agencies, GAO-12-108 (Washington, D.C.: Jan. 20, 2012). Even when programs overlap, the services they provide and the populations they serve may differ in meaningful ways and would therefore not necessarily be duplicative. There may be important differences between the specific STEM field of focus and the program’s stated goals. For example, we identified 31 programs that provided scholarships or fellowships to doctoral students in the field of physics. However, one program’s goal was to increase environmental literacy related to estuaries and coastal watersheds while another program focused on supporting education in nuclear science, engineering, and related trades. In addition, programs may be primarily intended to serve different specific populations within a given target group. Of the 34 programs providing services to K-12 students in the field of technology, 10 are primarily intended to serve specific underrepresented, minority, or disadvantaged groups and 2 are limited geographically to individual cities or universities. However, little is known about the effectiveness of federal STEM education programs. Since 2005, when we first reported on this issue, we have found that the majority of programs have not conducted comprehensive evaluations of how well their programs are working. Agency and program officials would benefit from guidance and information sharing within and across agencies about what is working and how to best evaluate programs. This would not only help to improve individual program performance, but could also inform agency- and governmentwide decisions about which programs should continue to be funded. 
Furthermore, although the National Science and Technology Council is in the process of developing a governmentwide strategic plan for STEM education, we found that agencies have not used outcome measures for STEM programs in a way that is clearly reflected in their own performance plans and performance reports—key strategic planning documents. The absence of clear links between the programs and agencies' planning documents may hinder decision makers' ability to assess how agencies' STEM efforts contribute to agencywide performance goals and the overall federal STEM effort. We reported in January 2012 that numerous opportunities exist to improve the planning for STEM programs. For example, we recommended that the National Science and Technology Council develop guidance for how agencies can better incorporate governmentwide STEM education strategic plan goals and their STEM education efforts into their respective performance plans and reports, and determine the types of evaluations that may be feasible and appropriate for different types of STEM education programs. We also recommended that the National Science and Technology Council work with agencies, through the strategic planning process, to identify STEM education programs that might be candidates for consolidation or elimination. OMB stated that our recommendations are critical to improving the provision of STEM education across the federal government. In separate comments, the Office of Science and Technology Policy said its own analysis of STEM education programs identified no duplicative programs, and where it identified overlapping programs, it found that some program characteristics differed. As an illustration, the Office of Science and Technology Policy explained that there could be two STEM education programs, one that worked with inner city children in New York City and another with rural children in North Dakota. 
We agree that it may be important to serve both of these populations, but it is not clear that two separate administrative structures are necessary to ensure both populations are served. The Office of Science and Technology Policy said it would address our recommendations in the 5-year Federal STEM Education Strategic Plan, which will be released in spring 2012. Furthermore, the President's Fiscal Year 2013 budget established STEM education programs as one of fourteen cross-agency priority goals. These goals are intended to enhance progress in areas needing more cross-government collaboration. Coordination of space system organizations: U.S. government space systems—such as the Global Positioning System (GPS) and space-based weather systems—provide a wide range of capabilities to a large number of users, including the federal government, U.S. businesses and citizens, and other countries. Space systems are usually very expensive, often costing billions of dollars to acquire. More than $25 billion a year is appropriated to agencies for developing space systems. These systems typically take a long time to develop, and often consist of multiple components, including satellites, ground control stations, terminals, and user equipment. Moreover, the nation's satellites are put into orbit by rockets that can cost more than $100 million per launch. We have found that costs of space programs tend to increase significantly from initial cost estimates. A variety of agencies, such as the Federal Aviation Administration, the National Oceanic and Atmospheric Administration, and DHS rely on government space systems to execute their missions, but responsibilities for acquiring space systems are diffused across various DOD organizations as well as the intelligence community and the National Aeronautics and Space Administration. Fragmented leadership has led to program challenges and potential duplication in developing multi-billion dollar space systems. 
In some cases, problems with these systems have been so severe that acquisitions were either canceled or the needed capabilities were severely delayed. Fragmented leadership and lack of a single authority in overseeing the acquisition of space programs have created challenges for optimally acquiring, developing, and deploying new space systems. This fragmentation is problematic not only because of a lack of coordination that has led to delays in fielding systems, but also because no one person or organization is held accountable for balancing governmentwide needs against wants, resolving conflicts and ensuring coordination among the many organizations involved with space acquisitions, and ensuring that resources are directed where they are most needed. For example, we reported in April 2009 that the coordination of GPS satellites and user equipment segments is not adequately synchronized due to funding shifts and diffuse leadership in the program, likely leading to numerous years of missed opportunities to utilize new capabilities. DOD has taken some steps to better coordinate the GPS segments by creating the Space and Intelligence Office within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics and conducting enterprise level reviews of the GPS program. However, DOD has not yet established a single authority responsible for ensuring that all GPS segments, including user equipment, are synchronized to the maximum extent practicable. DOD has also undertaken a number of initiatives to improve leadership over defense space acquisitions, but these actions have not been in place long enough to determine whether acquisition outcomes will improve. Moreover, the initiatives do not extend to the space activities across the government. 
We and others, including the Commission to Assess United States National Security Space Management and Organization, have previously recommended a number of changes to the leadership of the space community and have consistently reported that a lack of strong, centralized leadership has led to inefficiencies and other problems. But the question as to what office or leadership structure above the department level would be effective and appropriate for coordinating all U.S. government space programs and setting priorities has not been addressed. We have suggested that OMB work with the National Security Council to assess whether a governmentwide oversight body for space acquisitions is needed. OMB agreed that coordinating space activities across the U.S. government has been and continues to be a major challenge, but is concerned that our recommendation would add an extra layer of space bureaucracy on top of ongoing coordination efforts as well as additional costs and possible confusion regarding roles and authorities among the existing mechanisms. We believe that the recommendation is sufficiently flexible to allow for an implementation approach that would address these concerns. Defense Language and Culture Training: DOD has emphasized the importance of developing language skills and knowledge of foreign cultures within its forces to meet the needs of current and future military operations and it has invested millions of dollars to provide language and culture training to thousands of servicemembers, including those deploying to ongoing operations. For example, we estimated that DOD invested about $266 million for fiscal years 2005 through 2011 to provide general purpose forces with training support, such as classroom instruction, computer-based training, and training aids. 
We found that DOD has not developed an integrated approach to reduce fragmentation in the military services' language and culture training approaches and overlap in the content of training products acquired by the military services and other organizations. In May 2011, we reported that language and culture training within DOD is not provided through a single department- or servicewide program; rather, multiple DOD organizations oversee the development and acquisition of language and culture training and related products and deliver training. We recommended that the Office of the Under Secretary of Defense for Personnel and Readiness establish internal mechanisms to assist the department in reaching consensus with the military services and other DOD entities on training priorities, synchronize the development of service- and departmentwide plans with the budget process, and guide efforts to monitor progress. DOD agreed with our recommendation. We also found that the military services have not fully coordinated efforts to develop and acquire language and culture training products. As a result, the services have acquired overlapping and potentially duplicative products, such as reference materials containing country- or region-specific cultural information and computer software or web-based training programs that can be used within a distributed learning training environment. To illustrate, we analyzed 18 DOD language and culture training products and found that the content of each overlapped to some extent with at least one other training product. For Afghan languages, DOD invested in at least five products that were intended to build basic foreign language skills or specific language skills needed to perform military tasks. 
We suggested that the Office of the Under Secretary of Defense for Personnel and Readiness and the military services designate organizational responsibility and a supporting process to inventory and evaluate existing language and culture products and plans for additional investments, eliminate any unnecessary overlap and duplication, and adjust resources accordingly, as well as take steps to develop and contract for new products that can be used by more than one military service. DOD agreed that departmentwide coordination efforts could be improved and noted that our analysis would be useful in targeting specific areas for improvement. Federal facility risk assessments: Federal facilities continue to be vulnerable to terrorist attacks and other acts of violence, as evidenced by the 2010 attacks on the IRS building in Austin, Texas, and the federal courthouse in Las Vegas, Nevada, which resulted in loss of life. DHS's Federal Protective Service (FPS) is the primary federal agency responsible for providing physical security and law enforcement services—including conducting risk assessments—for the approximately 9,000 federal facilities under the control and custody of the General Services Administration. We found that agencies are making duplicate payments for facility risk assessments by completing their own assessments, while also paying DHS for assessments that the department is not performing. We reported in June 2008, and also have recently found, that multiple federal agencies are expending additional resources to assess their own facilities, even though, according to an FPS official, FPS received $236 million from federal agencies for risk assessments and other security services in fiscal year 2011. For example, an IRS official stated that IRS completed risk assessments based on concerns about risks unique to its mission for approximately 65 facilities that it also paid FPS to assess. 
Additionally, Environmental Protection Agency officials said that the agency has conducted its own assessments based on concerns with the quality and thoroughness of FPS's assessments. These assessments are conducted by teams of contractors and agency employees, cost an estimated $6,000 each, and can take a few days to a week to complete. FPS's planned risk assessment tool is intended to provide FPS with the capability to assess risks at federal facilities based on threat, vulnerability, and consequence, and to track countermeasures to mitigate those risks, but it is unclear whether the tool will help minimize duplication. According to an official, FPS planned to use its Risk Assessment and Management Program to complete assessments of about 700 federal facilities in fiscal year 2010 and 2,500 facilities in fiscal year 2011. However, as we reported in July 2011, FPS experienced cost overruns, schedule delays, and operational issues in developing this program and as a result could not use it to complete risk assessments. Since November 2009, the agency has completed only four risk assessments using its Risk Assessment and Management Program. We identified several steps that DHS could take to address duplication in FPS's risk assessments. For example, in July 2011 we recommended that DHS develop interim solutions for completing risk assessments while addressing challenges with the Risk Assessment and Management Program. In addition, in February 2012, we suggested DHS work with federal agencies to determine their reasons for duplicating the activities included in FPS's risk assessments and identify measures to reduce this duplication. DHS agreed with our July 2011 recommendation and has begun taking action to address it, but did not comment on the action we identified in February 2012. 
GAO, Federal Protective Service: Actions Needed to Resolve Delays and Inadequate Oversight Issues with FPS’s Risk Assessment and Management Program, GAO-11-705R (Washington, D.C.: July 15, 2011). Our 2012 annual report also summarized 19 areas—beyond those directly related to duplication, overlap, or fragmentation—describing other opportunities for agencies or Congress to consider taking action that could either reduce the cost of government operations or enhance revenue collection for the Treasury. These cost-saving and revenue-enhancing opportunities also span a wide range of federal government agencies and mission areas (see table 2). Examples of opportunities for agencies or Congress to consider taking action that could either reduce the cost of government operations or enhance revenue collections include: Air Force food service: According to Air Force officials, most Air Force installations have their own individual contracts for food service, with a total cost of approximately $150 million per year for all Air Force installations. We found that the Air Force has opportunities to reduce its overall food service costs by millions of dollars annually by reviewing food service contracts and adjusting them, when appropriate, to better meet the needs of its installations, including aligning labor needs with the actual number of meals served by the dining facilities. The Air Force recently undertook an initiative to improve food service at six pilot installations, with intentions to eventually expand this initiative to more Air Force installations. Among other intended outcomes, Air Force officials stated that the first group of pilot installations achieved cost savings when compared to their previous contracts while also increasing hours of operation in the dining facilities and serving an additional 500,000 meals per year. 
We compared the estimated amount of food service labor at the six pilot installations under prior contracts to the projected work schedules under the initiative and found that by adjusting staffing levels for contractor staff at dining facilities, the contractor reduced the total number of labor hours at five of the six pilot installations by 53 percent. For example, at one installation, the number of estimated labor hours decreased from approximately 2,042 hours per week to 920. For the sixth installation, where the labor hours did not decrease, the Air Force Audit Agency had recently conducted a review that found that the number of food service personnel did not align with workload estimates. As a result, the Air Force renegotiated its workload estimates and pay rates, resulting in savings of approximately $77,000 annually. During our review, we discussed with Air Force officials the potential opportunity for achieving additional savings by reviewing staffing levels at other installations outside of the initiative. As a result, the Air Force issued a memorandum directing a review of existing food service contracts to determine if the contracts meet current mission needs. The memorandum indicated that special attention must be given to whether the food service contract workload estimates were properly aligned with the actual number of meals served. In July 2011, we recommended that the Secretary of the Air Force monitor the actions taken in response to the direction to review food service contracts and take actions, as appropriate, to ensure that cost-savings measures are implemented. According to Air Force officials, eight installations have recently reviewed and renegotiated their food service contracts for a total savings of over $2.5 million per year. The potential exists for other installations that rely on contracts to meet their food service needs to achieve similar financial benefits. 
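As a quick arithmetic aside, the single-installation figures quoted above imply a slightly larger cut than the five-installation aggregate of 53 percent. A minimal sketch, using only the numbers stated in this section:

```python
def pct_reduction(before: float, after: float) -> float:
    """Percentage decrease from `before` to `after`."""
    return (before - after) / before * 100.0

# Figures quoted above: one pilot installation went from roughly
# 2,042 contractor labor hours per week to 920.
single_site = pct_reduction(2042, 920)
print(f"Single-installation reduction: {single_site:.1f}%")
```

The roughly 55 percent result for this installation is consistent with, though slightly above, the 53 percent aggregate reported across five of the six pilot sites, as an aggregate blends larger and smaller reductions.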
For example, the Air Force has requested that each of its installations conduct a 100 percent review of existing food service contracts to determine if their current contract workload estimates meet current mission needs or if the contracts require modification. In addition, the Office of the Secretary of Defense planned to share the results of the Air Force’s review of its food service labor costs with the other military services to achieve cost savings. Navy information technology network: In 2007, the Navy established the Next Generation Enterprise Network (NGEN) program to replace and improve the Navy Marine Corps Intranet. According to the President’s fiscal year 2012 budget request, the NGEN program has spent about $434 million on work associated with the transition from the Navy Marine Corps Intranet. The Navy estimated that NGEN would cost approximately $50 billion to develop, operate, and maintain through fiscal year 2025. We found that better informed decisions were needed to ensure a more cost-effective acquisition approach for the Navy’s NGEN program. We reported in March 2011 that the Navy selected an approach that was not considered as part of its analysis of alternatives and that it estimated would cost at least $4.7 billion more than any of the four assessed alternatives. In addition, we reported that the Navy’s schedule for NGEN also did not provide a reliable basis for program execution because it did not adequately satisfy key schedule estimating best practices, such as establishing the critical path (the sequence of activities that, if delayed, impacts the planned completion date of the project) and assigning resources to all work activities. We also found that the Navy’s acquisition decisions were not always performance- or risk-based. In particular, senior executives approved the NGEN program’s continuing progress in the face of known performance shortfalls and risks. 
To address these weaknesses, we recommended in March 2011 that the Navy limit further investment in NGEN until it conducts an immediate interim review to reconsider the selected acquisition approach. We also identified an additional action that the Navy could take to facilitate implementation of the approach resulting from this review by ensuring that the NGEN schedule reflects key schedule estimating practices and that future program reviews and decisions fully reflect the program’s performance and exposure to risk. DOD agreed with our recommendation to ensure that future NGEN acquisition reviews and decisions fully reflect the state of the program’s performance and its exposure to risks. The department did not agree with our recommendation to reconsider its acquisition approach; however, the Navy is currently in the process of reviewing and making changes to the NGEN acquisition strategy. We are undertaking work that will assess the extent to which the Navy has conducted its interim review to reconsider its acquisition approach and evaluate the revised strategy. DOD health care costs: DOD spends billions of dollars annually on its worldwide health care system. Currently, health care costs constitute nearly 10 percent of DOD’s baseline budget request. For its fiscal year 2012 budget, according to DOD documentation, DOD received $52.7 billion to provide health care to approximately 9.6 million active duty servicemembers, reservists, retirees, and their dependents. DOD recognizes that it must address the rate at which health care costs are rising and has stated that it intends to continue to develop health care initiatives that will improve the quality and standard of care while reducing growth in overall costs. 
Our ongoing work has found that DOD has identified 11 initiatives intended to slow the rise in its health care costs, but it has not fully applied results-oriented management practices to its efforts or implemented an overall monitoring process, which limits its effectiveness in implementing these initiatives and achieving related cost savings goals. DOD’s initiatives consist primarily of changes to clinical and business practices in areas ranging from primary care to psychological health to purchased care reimbursement practices. Partly in response to our ongoing work assessing DOD’s management of its initiatives, the department has taken some initial steps toward managing their implementation by developing a number of high-level, non-monetary metrics and corresponding goals for each strategic initiative, as well as other management tools, such as implementation plans that will include key elements such as investment costs and savings estimates. However, DOD has so far completed only one implementation plan, which contains the only available cost savings estimate among all the initiatives. Without completing its plans and incorporating elements such as problem definitions, resources needed, goals, performance measures, and cost estimates into them, DOD will not be fully aware of whether these initiatives are achieving projected cost savings and other performance goals. In addition, DOD has not completed the implementation of an overall monitoring process across its portfolio of initiatives for overseeing the initiatives’ progress or identified accountable officials and their roles and responsibilities for all of its initiatives. 
DOD’s 2007 Task Force on the Future of Military Health Care noted that the current Military Health System does not function as a fully integrated health care system. For example, while the Assistant Secretary of Defense for Health Affairs controls the Defense Health Program budget, the services directly supervise their medical personnel and manage their military treatment facilities. Therefore, as Military Health System leaders develop and implement their plans to control rising health care costs, they will need to work across multiple authorities and areas of responsibility. Until DOD fully implements a military-wide mechanism to monitor progress and identifies accountable officials, including their roles and responsibilities across its portfolio of initiatives, DOD may be hindered in its ability to achieve a more cost-efficient military health system. In order to enhance its efforts to manage rising health care costs and demonstrate sustained leadership commitment for achieving the performance goals of the Military Health System’s strategic initiatives, we plan to recommend as part of our ongoing work that DOD complete and fully implement detailed implementation plans for each of the approved health care initiatives in a manner consistent with results-oriented management practices, such as the inclusion of upfront investment costs and cost savings estimates; and complete the implementation of an overall monitoring process across its portfolio of initiatives for overseeing the initiatives’ progress and identifying accountable officials and their roles and responsibilities for all of its initiatives. We believe that DOD may realize projected cost savings and other performance goals by taking these actions to help ensure the successful implementation of its cost savings initiatives. Given that DOD identified these initiatives as steps to slow the rapidly growing costs of its medical program, if implemented these initiatives could potentially save DOD millions of dollars. 
DOD generally agreed with our planned recommendations. Excess uranium inventories: DOE maintains large inventories of depleted and natural uranium that it no longer requires for nuclear weapons or fuel for naval nuclear propulsion reactors. We reported in March and April 2008 and again in June 2011 that under certain conditions, the federal government could generate billions of dollars by marketing inventories of excess uranium to commercial power plants to use in their reactors. Specifically, we identified options that DOE could take to market the excess uranium inventories for commercial use. For example, DOE could contract to re-enrich inventories of depleted uranium hexafluoride (a by-product of the uranium enrichment process), consisting of hundreds of thousands of metric tons of material that are stored at DOE’s uranium enrichment plants. Although DOE would have to pay for processing, the resulting re-enriched uranium could potentially be sold if the sales price of the uranium exceeded processing costs. DOE could also pursue an option of selling the depleted uranium inventory “as is.” This approach would require DOE to obtain the appropriate statutory authority to sell depleted uranium in its current unprocessed form. Firms such as nuclear power utilities and enrichment companies might find it cost effective to purchase the uranium and re-enrich it as a source of nuclear fuel. If executed in accordance with federal law, DOE sales of natural uranium could generate additional revenue for the government. Natural uranium on its own cannot fuel nuclear reactors or weapons; rather, it is shipped to a conversion facility, where it is converted for the enrichment process. We reported in September 2011 that in 7 transactions executed since 2009, DOE has, in effect, sold nearly 1,900 metric tons of natural uranium into the market, using a contractor as a sales agent, to fund environmental cleanup services. 
DOE characterized these sales as barter transactions—exchanges of services (environmental cleanup work) for materials (uranium). While DOE received no cash directly from the transactions, it allowed its contractor to keep cash from the sales, which DOE would otherwise have owed to the United States Treasury. Because federal law requires an official or agent of the government receiving money for the government from any source to deposit the money in the Treasury, we found that these transactions violated the miscellaneous receipts statute. We have reported that congressional action may be needed to overcome legal obstacles to the pursuit of certain options for the sale of depleted and natural uranium. Specifically, our March 2008 report suggested that Congress may wish to explicitly provide direction about whether and how DOE may sell or transfer depleted uranium in its current form. Our September 2011 report suggested that if Congress sees merit in using the proceeds from the barter, transfer, or sale of federal uranium assets to pay for environmental cleanup work, it could consider providing DOE with explicit authority to barter excess uranium and to retain the proceeds from these transactions. We also suggested that Congress could direct DOE to sell uranium for cash and make those proceeds available by appropriation for environmental cleanup work. Congress has taken some actions in response to our work. For example, the Consolidated Appropriations Act, 2012, among other things, requires the Secretary of Energy to provide congressional appropriations committees with information on the transfer, sale, barter, distribution, or other provision of uranium in any form and an estimate of the uranium value along with the expected recipient of the material. The Consolidated Appropriations Act, 2012 also requires the Secretary to submit a report evaluating the economic feasibility of re-enriching depleted uranium. 
Medicare and Medicaid fraud detection systems: We have designated Medicare and Medicaid as high-risk programs, in part due to their susceptibility to improper payments—estimated to be about $65 billion in fiscal year 2011. To integrate data about all types of Medicare and Medicaid claims and improve its ability to detect fraud, waste, and abuse in these programs, the Centers for Medicare and Medicaid Services (CMS) initiated two information technology programs—the Integrated Data Repository, which is intended to provide a centralized repository of claims data for all Medicare and Medicaid programs, and One Program Integrity, a set of tools that enables CMS contractors and staff to access and analyze data retrieved from the repository. According to CMS officials, the systems are expected to provide financial benefits of more than $21 billion by the end of fiscal year 2015. We found that CMS needs to ensure widespread use of technology to help detect and recover billions of dollars of improper payments of claims and better position itself to determine and measure financial and other benefits of its systems. We reported in June 2011 that CMS had developed and begun using both systems, but was not yet positioned to identify, measure, or track benefits realized from these programs. For example, although in use since 2006, the Integrated Data Repository did not include Medicaid claims data, or information from other CMS systems that store and process data related to the entry, correction, and adjustment of claims, due to funding and other technical issues. These data are needed to help analysts prevent improper payments. Program officials told us that they had begun incorporating these data in September 2011 and planned to make them available to program integrity analysts in spring 2012. 
Regarding Medicaid data, agency officials stated that they did not account for difficulties associated with integrating data from the various types and formats of data stored in disparate state systems or develop reliable schedules for their efforts to incorporate these data. In particular, program officials did not consider certain risks and obstacles, such as technical challenges, as they developed schedules for implementing the Integrated Data Repository. Lacking reliable schedules, CMS may face additional delays in making available all the data that are needed to support enhanced program integrity efforts. In addition, CMS had not trained its broad community of analysts to use the One Program Integrity system because of delays introduced by a redesign of initial training plans that were found to be insufficient. Specifically, program officials planned for 639 analysts to be using the system by the end of fiscal year 2010; however, only 41—less than 7 percent—were actively using it as of October 2010. Because of these delays, the initial use of the system was limited to a small number of CMS staff and contractors. In a November 2011 update on the status of the training efforts, CMS officials reported that a total of 215 program integrity analysts had been trained and were using the system, although we did not validate these data. However, program officials had not finalized plans and schedules for training all intended users. In June 2011, we recommended that CMS take a number of actions to help ensure the program’s success toward achieving the billions of dollars in financial benefits that program integrity officials projected, such as finalizing plans and reliable schedules for incorporating additional data into the Integrated Data Repository and conducting training for all analysts who are intended to use the One Program Integrity system. CMS agreed with our recommendations and identified steps the agency is taking to implement them. 
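The training shortfall described above is straightforward to verify from the quoted counts. A minimal sketch, using only numbers stated in this section:

```python
# Planned vs. actual users of the One Program Integrity system,
# as quoted in this section.
planned_analysts = 639   # planned to be using the system by end of FY 2010
active_oct_2010 = 41     # actually using it as of October 2010
trained_nov_2011 = 215   # reported trained and using it in November 2011

share_active = active_oct_2010 / planned_analysts * 100
print(f"Active in Oct 2010: {share_active:.1f}% of plan")   # under 7 percent

share_trained = trained_nov_2011 / planned_analysts * 100
print(f"Trained by Nov 2011: {share_trained:.1f}% of plan")
```

The first figure reproduces the report's "less than 7 percent" observation; the second shows that even after the November 2011 update, roughly a third of the originally planned analyst population had been trained, consistent with the finding that training plans and schedules were not yet finalized.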
We plan to conduct additional work to determine whether CMS has addressed our recommendations and identified financial benefits and progress toward meeting agency goals resulting from the implementation of its fraud detection systems. Medicare Advantage: In fiscal year 2010, the federal government spent about $113 billion on the Medicare Advantage program, a private plan alternative to the original Medicare program that covers about a quarter of Medicare beneficiaries. CMS, the agency that administers Medicare, adjusts payments to Medicare Advantage plans based on the health status of each plan’s enrollees. The agency could achieve billions of dollars in additional savings by better adjusting for differences between Medicare Advantage plans and traditional Medicare providers in the reporting of beneficiary diagnoses. CMS calculates a risk score for every beneficiary—a relative measure of health status—which is based on a beneficiary’s demographic characteristics, such as age and gender, and major medical conditions. To obtain information on the medical conditions of beneficiaries in traditional Medicare, CMS generally analyzes diagnoses—numerically coded by providers into Medicare-defined categories—on the claims that providers submit for payment. For beneficiaries enrolled in Medicare Advantage plans, which do not submit claims, CMS requires plans to submit diagnostic codes for each beneficiary. Analysis has shown that risk scores are higher for Medicare Advantage beneficiaries than for beneficiaries in traditional Medicare with the same characteristics. Medicare Advantage plans have a financial incentive to ensure that all relevant diagnoses are coded, as this can increase beneficiaries’ risk scores and, ultimately, payments to the plans. By contrast, many traditional Medicare providers are paid for the services they render and thus have less incentive to code all relevant diagnoses. 
Policymakers have expressed concern that risk scores for Medicare Advantage beneficiaries have grown at a faster rate than those for traditional Medicare, in part because of differences in coding diagnoses. In 2005, Congress directed CMS to analyze and adjust risk scores for differences in coding practices, and in 2010, CMS estimated that 3.41 percent of Medicare Advantage risk scores were due to differences in diagnostic coding practices. It reduced the scores by an equal percentage, thereby saving $2.7 billion. We identified shortcomings in CMS’s method for adjusting Medicare Advantage payments to reflect differences in diagnostic coding practices between Medicare Advantage and traditional Medicare. CMS did not use the most recent risk score data for its estimates; account for the increasing annual impact of coding differences over time; or account for beneficiary characteristics beyond differences in age and mortality between the Medicare Advantage and traditional Medicare populations, such as sex, Medicaid enrollment status, and beneficiary residential location. We estimated that a revised methodology that addressed these shortcomings could have saved Medicare between $1.2 billion and $3.1 billion in 2010 in addition to the $2.7 billion in savings that CMS’s 3.41 percent adjustment produced. We expect that savings in future years will be greater. In January 2012, we recommended that CMS take action to help ensure appropriate payments to Medicare Advantage plans and improve the accuracy of the adjustment made for differences in coding practices over time. The Department of Health and Human Services characterized our results as similar to those obtained by CMS. User fees: User fees assign part or all of the costs of federal programs and activities—the cost of providing a benefit that is above and beyond what is normally available to the general public—to readily identifiable users of those programs and activities. 
Regularly reviewing federal user fees and charges can help the Congress and federal agencies identify opportunities to address inconsistent federal funding approaches and enhance user financing, thereby reducing reliance on general fund appropriations. The Chief Financial Officers Act of 1990 (CFO Act) requires agencies to biennially review their fees and to recommend fee adjustments, as appropriate; additionally, OMB Circulars No. A-11 and No. A-25 direct agencies to discuss the results of these reviews and any resulting proposals, such as adjustments to fee rates, in the CFO annual report required by the CFO Act. In 2011, we surveyed the 24 agencies covered by the CFO Act on their review of user fees. Twenty-one of the 23 agencies that responded reported charging more than 3,600 fees and collecting nearly $64 billion in fiscal year 2010, but agency responses indicated varying levels of adherence to the biennial review and reporting requirements. The survey responses indicated that for most fees, agencies (1) had not discussed fee review results in annual reports, and (2) had not reviewed the fees and were inconsistent in their ability to provide fee review documentation. We found specific examples where a comprehensive review of user fees could lead to cost savings or enhanced revenues for the government. For example, reviewing and adjusting as needed the air passenger immigration inspection user fee to fully recover the cost of air passenger immigration inspection activities could reduce general fund appropriations for those activities. International air passengers arriving in the United States are subject to an immigration inspection to ensure that they have legal entry and immigration documents. International air passengers pay the immigration inspection fee when they purchase their airline tickets, but the rate has not been adjusted since fiscal year 2002. In recent years, U.S. Immigration and Customs Enforcement and U.S. 
Customs and Border Protection, the agencies responsible for conducting inspection activities, have relied on general fund appropriations to help fund activities for which these agencies have statutory authority to fund with user fees. In fiscal year 2010, this amounted to over $120 million for U.S. Customs and Border Protection alone. In September 2007, we recommended that the Secretary of Homeland Security report immigration inspection activity costs to ensure fees are divided between U.S. Immigration and Customs Enforcement and U.S. Customs and Border Protection according to their respective immigration inspection activity costs and develop a legislative proposal to adjust the air passenger immigration inspection fee if it was found to not recover the costs of inspection activities. DHS agreed with our recommendations, but some of the recommendations remain unimplemented. In February 2012, we suggested that Congress may wish to require the Secretary of Homeland Security to fully implement these recommendations, which would help better align air passenger immigration inspection fee revenue with the costs of providing these services and achieve cost savings by reducing reliance on general fund appropriations. Similarly, we identified options for adjusting the passenger aviation security fee, a uniform fee on passengers of U.S. and foreign air carriers originating at airports in the United States. Passenger aviation security fees collected offset amounts appropriated to the Transportation Security Administration for aviation security. In recent years, several options have been considered for increasing the passenger aviation security fee. However, the fee has not been increased since it was imposed in February 2002. We suggested that Congress may wish to consider increasing the passenger security fee. 
Such an increase could generate an estimated $2 billion to $10 billion over 5 years, further offsetting the need for appropriated funds to support civil aviation security costs. Tax gap: The financing of the federal government depends largely on the IRS’s ability to collect federal taxes every year, which totaled $2.34 trillion in 2010. For the most part, taxpayers voluntarily report and pay their taxes on time. However, the size and persistence of the tax gap—estimated in 2012 for the 2006 tax year to be a $385 billion difference between the taxes owed and the taxes IRS ultimately collected for that year—highlight the need to make progress in improving compliance by those taxpayers who do not voluntarily pay what they owe. Given that tax noncompliance ranges from simple math errors to willful tax evasion, no single approach is likely to fully and cost-effectively address the tax gap. A multifaceted approach to improving compliance that includes enhancing IRS’s enforcement and service capabilities can help reduce the tax gap. One approach we have identified is the expansion of third-party information reporting, which improves taxpayer compliance and enhances IRS’s enforcement capabilities. The tax gap is due predominantly to taxpayer underreporting and underpayment of taxes owed. At the same time, taxpayers are much more likely to report their income accurately when the income is also reported to IRS by a third party. By matching information received from third-party payers with what payees report on their tax returns, IRS can detect income underreporting, including the failure to file a tax return. Expanding information reporting to cover payments for services by all owners of rental real estate and to cover payments to corporations for services would improve payee compliance. The Joint Committee on Taxation estimated revenue increases of $5.9 billion over a 10-year period for just these two expansions. 
In our 2011 annual report, we suggested a wide range of actions for the Congress and the executive branch to consider, such as developing strategies to better coordinate fragmented efforts, implementing executive initiatives to improve oversight and evaluation of overlapping programs, considering enactment of legislation to facilitate revenue collection, and examining opportunities to eliminate potential duplication through streamlining, collocating, or consolidating efforts or administrative services. Our assessment of progress made as of February 10, 2012, found that 4 (or 5 percent) of the 81 areas GAO identified were addressed; 60 (or 74 percent) were partially addressed; and 17 (or 21 percent) were not addressed. Appendix I presents GAO’s assessment of the overall progress made in each area. We applied the following criteria in making these overall assessments for the 81 areas. We determined that an area was “addressed” if all actions needed in that area were addressed; “partially addressed” if at least one action needed in that area showed some progress toward implementation, but not all actions were addressed; and “not addressed” if none of the actions in that area were addressed. As of February 10, 2012, the majority of the 176 actions needed within the 81 areas identified by GAO have been partially addressed. Specifically, 23 (or 13 percent) were addressed; 99 (or 56 percent) were partially addressed; and 54 (or 31 percent) were not addressed. We applied the following criteria in making these assessments. For legislative branch actions: “addressed” means relevant legislation is enacted and addresses all aspects of the action needed; “partially addressed” means a relevant bill has passed a committee, the House, or the Senate, or relevant legislation has been enacted but addressed only part of the action needed; and “not addressed” means a bill may have been introduced but did not pass out of a committee, or no relevant legislation has been introduced. 
For executive branch actions: "addressed" means implementation of the action needed has been completed; "partially addressed" means the action needed is in development or has been started but not yet completed; and "not addressed" means the administration and/or agencies have made minimal or no progress toward implementing the action needed. In addition to the actions reported above, Congress has held a number of hearings and OMB has provided guidance to executive branch agencies on areas that we identified that could benefit from increased attention and ongoing oversight. Since the issuance of our March 2011 report, we have testified numerous times on our first annual report and on specific issues highlighted in the report. Many federal efforts, including those related to protecting food and agriculture, providing homeland security, and ensuring a well-trained and educated workforce, transcend more than one agency, yet agencies face a range of challenges and barriers when they attempt to work collaboratively. Both Congress and the executive branch have recognized this, and in January 2011, the GPRA Modernization Act of 2010 (the Act) was enacted, updating the almost two-decades-old Government Performance and Results Act. The Act establishes a new framework aimed at taking a more crosscutting and integrated approach to focusing on results and improving government performance. Effective implementation of the Act could play an important role in clarifying desired outcomes, addressing program performance spanning multiple organizations, and facilitating future actions to reduce unnecessary duplication, overlap, and fragmentation. The Act requires OMB to coordinate with agencies to establish outcome-oriented goals covering a limited number of crosscutting policy areas as well as goals to improve management across the federal government, and to develop a governmentwide performance plan for making progress toward achieving those goals.
The performance plan is to, among other things, identify the agencies and federal activities—including spending programs, tax expenditures, and regulations—that contribute to each goal, and establish performance indicators to measure overall progress toward these goals as well as the individual contribution of the underlying agencies and federal activities. The President's budget for fiscal year 2013 includes 14 such crosscutting goals. Aspects of several of these goals—including Science, Technology, Engineering, and Math Education; Entrepreneurship and Small Businesses; Job Training; Cybersecurity; Information Technology Management; Procurement and Acquisition Management; and Real Property Management—are discussed in our 2011 or 2012 annual report. The Act also requires similar information at the agency level. Each agency is to identify the various federal organizations and activities—both within and external to the agency—that contribute to its goals, and describe how the agency is working with other agencies to achieve its goals as well as any relevant crosscutting goals. OMB officials stated that their approach to responding to this requirement will address fragmentation among federal programs. The areas identified in our annual reports are not intended to represent the full universe of duplication, overlap, or fragmentation within the federal government, but we have been conducting a systematic examination across the federal government to ensure that, by the time we issue our third annual report in 2013, we have identified major instances of potential duplication, overlap, and fragmentation governmentwide. Our examination involved a multiphased approach. First, to identify potential areas of overlap, we examined the major budget functions and sub-functions of the federal government as identified by OMB. This was particularly helpful in identifying issue areas involving multiple government agencies.
Second, our subject matter experts examined key missions and functions of federal agencies—or organizations within large agencies—using key agency documents, such as strategic plans, agency organizational charts, and mission and function documents. This further enabled us to identify areas where multiple agencies have similar goals, or where multiple organizations within federal agencies are involved in similar activities. Next, we canvassed a wide range of published sources—such as congressional hearings and reports by the Congressional Budget Office, OMB, various government audit agencies, and private think tanks—that addressed potential issues of duplication, overlap, and fragmentation. We have work under way or planned in the coming year to evaluate major instances of duplication, overlap, or fragmentation that we have not yet covered in our first two annual reports. Identifying, preventing, and addressing unnecessary duplication, overlap, and fragmentation within the federal government is clearly challenging. These are difficult issues to address because they may require agencies and Congress to re-examine, within and across various mission areas, the fundamental structure, operation, funding, and performance of a number of long-standing federal programs or activities with entrenched constituencies. Implementing provisions of the Act—such as its emphasis on establishing priority outcome-oriented goals, including those covering crosscutting policy areas—could play an important role in clarifying desired outcomes, addressing program performance spanning multiple organizations, and facilitating future actions to reduce unnecessary duplication, overlap, and fragmentation. Continued oversight by Congress and OMB will also be critical. In conclusion, Mr. Chairman, Ranking Member Cummings, and Members of the Committee, opportunities exist for the Congress and federal agencies to continue to address the identified actions needed in our 2011 and 2012 annual reports.
Collectively, these reports show that, if the actions are implemented, the government could potentially save tens of billions of dollars annually. A number of the issues are difficult to address and implementing many of the actions identified will take time and sustained leadership. This concludes my prepared statement. I would be pleased to answer any questions you may have. Thank you. For further information on this testimony or our February 28, 2012, reports, please contact Janet St. Laurent, Managing Director, Defense Capabilities and Management, who may be reached at (202) 512-4300, or [email protected]; and Zina Merritt, Director, Defense Capabilities and Management, who may be reached at (202) 512-4300, or [email protected]. Specific questions about individual issues may be directed to the area contact listed at the end of each area summary in the reports. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. This appendix presents a summary of GAO's assessment of the overall progress made in each of the 81 areas that we identified in our March 2011 report in which the Congress and the executive branch could take actions to reduce or eliminate potential duplication, overlap, and fragmentation or achieve other potential financial benefits. For each of the 34 areas related to duplication, overlap, or fragmentation that GAO identified, table 3 presents GAO's assessment of the overall progress made in implementing the actions needed in that area. For each of the 47 areas where GAO identified cost saving or revenue enhancement opportunities, table 4 presents GAO's assessment of the overall progress made in implementing the actions GAO identified.
This testimony discusses our 2012 annual report, which presents 51 areas where programs may be able to achieve greater efficiencies or become more effective in providing government services by reducing potential duplication, overlap, or fragmentation in federal programs and activities. We have also continued to monitor developments in the 81 areas that we identified a year ago in the first report we issued in this series. Our 2011 follow-up report, released today, describes the extent to which progress has been made to address these areas. This testimony is based on our 2012 annual and 2011 follow-up reports. Specifically, it addresses: (1) federal programs or functional areas where unnecessary duplication, overlap, or fragmentation exists, as well as other opportunities for potential cost savings or enhanced revenues; (2) the status of actions taken by Congress and the executive branch to address the areas we identified in our 2011 report; (3) aspects of the GPRA Modernization Act of 2010 that may contribute to addressing and preventing duplication, overlap, and fragmentation among federal programs; and (4) our approach to identifying duplication or cost savings in federal programs and activities. We conducted our work in accordance with generally accepted government auditing standards or with our quality assurance framework, as appropriate. For issues where information is reported for the first time in this report, we sought comments from the agencies involved and incorporated those comments as appropriate. In updating the actions we identified in the 2011 annual report, we asked the agencies involved and the Office of Management and Budget (OMB) for their review and incorporated comments as appropriate.
We identified 51 areas in our 2012 annual report, including 32 areas of potential duplication, overlap, or fragmentation as well as 19 opportunities for agencies or Congress to consider taking action that could either reduce the cost of government operations or enhance revenue collections for the Treasury. These areas involve a wide range of government missions including agriculture, defense, economic development, education, energy, general government, health, homeland security, international affairs, science and the environment, and social services. Within and across these missions, the 2012 annual report touches on virtually all major federal departments and agencies. We expanded the scope of our work for this year’s report to focus on areas where a mix of federal approaches is used, such as tax expenditures, direct spending, and federal grant or loan programs. In our 2011 follow-up report, we assessed the extent to which Congress and the executive branch addressed the 81 areas—including a total of 176 actions—to reduce or eliminate unnecessary duplication, overlap, or fragmentation or achieve other potential financial benefits. As of February 10, 2012, Congress and the executive branch have made some progress in addressing the majority of the 81 areas we identified; however, additional steps are needed to fully implement the remaining actions. Specifically, our assessment found that all actions had been addressed in 4 areas, partially addressed in 60 areas, and not addressed in 17 areas. In addition, OMB has instructed agencies to consider areas of duplication or overlap identified in our 2011 report and by others in their fiscal year 2013 budget submissions and management plans. 
The OMB guidance also advised agencies to take a number of other steps to enhance efficiency, such as identifying and including in their budget submissions cost-saving efforts that will improve operational efficiency and taxpayers' rate of return, including program integration, reorganizations within and between agency components, and resource realignment to improve public services. Under requirements established by the GPRA Modernization Act of 2010 (the Act), OMB is also required to coordinate with agencies to establish outcome-oriented goals covering a limited number of crosscutting policy areas as well as goals to improve management across the federal government, and to develop a governmentwide performance plan for making progress toward achieving those goals. The President's budget for fiscal year 2013 includes 14 such crosscutting policy goals. Aspects of several of these goals—including Science, Technology, Engineering, and Math Education; Entrepreneurship and Small Businesses; Job Training; Cybersecurity; Information Technology Management; Procurement and Acquisition Management; and Real Property Management—are discussed in our March 2011 and February 2012 reports. The Act's requirements provide a much needed basis for more fully integrating a wide array of potentially duplicative, overlapping, or fragmented federal activities, as well as a cohesive perspective on the long-term goals of the federal government focused on priority policy areas. Opportunities exist for the Congress and federal agencies to continue to address the needed actions identified in our March 2011 and February 2012 reports. Collectively, these reports show that, if the actions are implemented, the government could potentially save tens of billions of dollars annually.
Cost savings related to reducing or eliminating duplication, overlap, and fragmentation can be difficult to estimate because the portion of agency budgets devoted to certain programs or activities is often unclear, or needed information on program performance or costs is not readily available. In some cases, there is sufficient information to estimate potential savings or other benefits if actions are taken to address individual issues. In other cases, estimates of cost savings or other benefits would depend upon what congressional and executive branch decisions were made, including how certain of our recommendations are implemented. Nevertheless, considering the amount of program dollars involved in the issues we have identified, even limited adjustments could result in significant savings. Additionally, we have found that agencies can often realize other kinds of benefits, such as improved customer service and decreased administrative burdens.
Passenger and freight rail are part of a complex national transportation system for transporting people and goods. Currently, there are seven Class I railroads and over 500 short line and regional railroads operating in the United States. These railroads operate the nation’s freight rail system and own the majority of rail infrastructure in the United States. Railroads are the primary mode of transportation for many products, especially for such bulk commodities as coal and grain. In addition, railroads are carrying increasing levels of intermodal freight (e.g., containers and trailers), which travel on multiple modes and typically require faster delivery than bulk commodities. According to the Association of American Railroads (AAR), based on ton-miles, freight railroads carried about 43 percent of domestic intercity freight volume in 2009. In addition, according to DOT, the amount of freight rail is expected to continue to grow with a projected increase of nearly 22 percent by 2035. Intercity passenger rail service is primarily provided by Amtrak. Amtrak operates a 21,000-mile network, which provides service to 46 states and Washington, D.C., primarily over tracks owned by freight railroads. Federal law requires that freight railroads give Amtrak trains preference over freight transportation and, in general, charge Amtrak the incremental cost—rather than an apportioned cost— associated with the use of their tracks. Amtrak also owns about 650 route miles of track, primarily on the Northeast Corridor, which runs between Boston, Massachusetts, and Washington, D.C. Transportation may impose a variety of “external” costs that can result in impacts such as health and environmental damage caused by pollution. For example, in choosing to drive to work, a commuter may not take into account the car emissions’ contribution to local pollution, which may damage property or the health of others. 
Following are some negative effects of transportation: Greenhouse gas emissions, nitrogen oxides (NOx) and fine particulate matter, and other pollutants: Based on estimated data from the EPA, from 1990 through 2008, transportation greenhouse gas emissions increased 22 percent. Carbon dioxide (CO2) is the primary greenhouse gas associated with the combustion of diesel (and other fossil fuels) and accounted for over 95.5 percent of the transportation sector's greenhouse gas emissions. Based on 2008 data from the EPA, cars, light trucks, and freight trucks together contributed over 80 percent of the transportation sector's greenhouse gas emissions (see fig. 1). While there are multiple approaches to address externalities in transportation, policies that provide incentives to shift traffic to rail can be appealing because they offer an option to address multiple externalities simultaneously by changing behavior to favor rail over other modes. For example, market-based policies that change the relative prices of the modes are likely to be the most cost-effective. Policies such as increasing fuel taxes, imposing new fees such as a vehicle mile travel fee or a congestion charge, investing in increased capacity in one mode, or subsidizing travel in one mode can provide incentives for users to switch travel from one mode to another, and can both reduce greenhouse gas emissions and alleviate congestion. Some stakeholders also believe that investing in rail may help to stimulate economic development. To obtain similar benefits without the goal of shifting traffic to rail, it might be necessary to introduce a suite of policies, each more directly targeted at a specific externality. For example, a congestion pricing policy may reduce traffic during peak travel times, but if it shifts traffic to nonpeak times, it may have a limited impact on overall emissions.
Conversely, providing incentives to purchase more fuel-efficient truck engines may do nothing to improve congestion or economic development. With respect to direct investment, the federal government typically has not provided extensive funding for freight rail or for intercity passenger rail outside of the Northeast Corridor between Boston, Massachusetts, and Washington, D.C. In addition, according to Amtrak officials, funding has not been predictable, consistent, or sustained. However, recent legislation has increased the federal role and funding available for investment in intercity passenger and freight rail infrastructure. In 2008, PRIIA authorized the HSIPR program. The program is administered through DOT's FRA, which has responsibility for planning, awarding, and overseeing the use of federal funds for the development of high-speed and intercity passenger rail. As of 2010, over $10 billion had been awarded through the HSIPR program to fund high-speed rail projects. Moreover, through the Recovery Act, Congress authorized the TIGER Discretionary Grant Program for investment in a variety of transportation areas, including freight and passenger rail. In 2010, DOT awarded over $2 billion in TIGER funding. The TIGER program was designed to preserve and create jobs and to promote economic recovery and investment in transportation infrastructure that will provide long-term economic benefits and assist those most affected by the current economic downturn. The TIGER grants are multimodal, and criteria were developed to provide a framework for assessing projects across the various modes. For more information on the HSIPR and TIGER programs, see appendix III. Decision makers may consider a number of factors in deciding between various alternative investments or policies. These factors may include the objective or goal of the proposed actions—for example, preserving and creating jobs, promoting economic recovery, or reducing an environmental externality.
Other factors, such as the benefits and costs of alternatives, are also important to consider in decision making. Some benefits are associated with reducing an externality and are part of the assessment of whether policy alternatives for addressing the externality can be justified on economic principles. Costs should also be accounted for when considering various investment or policy alternatives. For example, there are direct costs, such as construction, maintenance, and operations, and less obvious types of costs, such as delays and pollution generated during construction. There are tools that can be employed in evaluating proposed transportation alternatives, including benefit-cost analysis and economic impact analysis. Benefit-cost analysis is designed to identify the alternative with the greatest net benefit by comparing the monetary value of the benefits and costs of each alternative with a baseline. Benefit-cost analysis provides for a comparison of alternatives based on economic efficiency, that is, which investment or policy would provide the greatest net benefit (i.e., benefits in excess of costs). As we have previously reported, benefit-cost analysis may not be the most important decision-making factor—rather, it is one of many tools that decision makers may use to organize, evaluate, and determine trade-offs among various alternatives—but the increased use of systematic analytical tools such as benefit-cost analysis can provide important additional information that can lead to better informed transportation decision making. Economic impact analysis is a tool for assessing how the benefits and costs of transportation alternatives would be distributed throughout the economy and for identifying groups in society (for example, by region, income, or race) that are likely to gain from, or bear the costs of, a policy. The use of benefit-cost analysis information is not consistent across modes or types of programs that provide funding to transportation projects.
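To illustrate the comparison that benefit-cost analysis supports, the sketch below computes discounted net benefits for two hypothetical alternatives measured against a no-build baseline. All dollar figures and the 7 percent discount rate are illustrative assumptions only, not values from any actual HSIPR or TIGER application.

```python
# Hypothetical benefit-cost comparison; figures and the 7 percent
# discount rate are illustrative assumptions.
def present_value(flows, rate=0.07):
    """Discount a stream of annual dollar flows (year 0 first)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def net_benefit(benefits, costs, rate=0.07):
    """Discounted benefits minus discounted costs, relative to a baseline."""
    return present_value(benefits, rate) - present_value(costs, rate)

# Alternative A: larger up-front cost, larger annual benefits.
alt_a = net_benefit(benefits=[0, 50, 50, 50], costs=[100, 5, 5, 5])
# Alternative B: smaller project, smaller benefits.
alt_b = net_benefit(benefits=[0, 30, 30, 30], costs=[60, 2, 2, 2])

# The economically efficient choice is the greatest positive net benefit.
best = max([("A", alt_a), ("B", alt_b)], key=lambda pair: pair[1])
```

In this contrived case both alternatives have positive net benefits, so the comparison turns on magnitude; in practice, qualitative impacts that resist monetization would accompany the figures, as the guidance discussed below requires.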
Competitive programs such as TIGER and HSIPR and loan guarantee programs such as TIFIA and RRIF require information on benefits and costs. Formula programs (such as the Federal-Aid Highway Program) do not necessarily require benefit-cost information. Federal guidance exists for conducting benefit-cost analyses, including OMB Circular No. A-94, OMB Circular No. A-4, and Executive Order No. 12893. The executive order and related OMB guidance outline a number of key elements that should be included in the assessment of benefits and costs in decision making, as described in table 1. Specifically, Executive Order No. 12893 and OMB Circulars Nos. A-94 and A-4 indicate that benefit and cost information shall be used in decision making, and that the level of uncertainty in estimates of benefits and costs shall be disclosed. Other aspects of the benefit-cost analysis should be completed to the extent possible. For example, while the guidance suggests that impacts should be quantified and monetized, to the extent that this is not possible, qualitative assessments should be provided for those impacts that are not readily quantifiable. As we have previously reported in our work on transit investments, qualitative information can help ensure that project impacts that cannot be easily quantified are considered in decision making. Both the HSIPR and TIGER grant programs required applicants to provide information on proposed project benefits and costs. The type of information required, however, differed between the two programs and, for the TIGER program, depended on the level of federal funding sought, as described in table 2. In addition, while requirements for assessment of project benefits and costs were more specific for TIGER than for the HSIPR program, officials for both programs considered whether project benefits were likely to exceed project costs as part of their respective application assessment processes.
In order to generate benefits—such as a decrease in the harmful effects of transportation-related pollution—through mode shift, a policy first has to attract sufficient rail ridership or rail freight demand from other modes that have higher harmful emissions. In practice, the extent to which rail can generate sufficient demand to draw traffic from other modes and generate net benefits will depend on numerous factors. In addition to mode shift, policies that produce price changes can prompt other economic responses in the short run, such as the use of lighter-weight materials or a shift toward more fuel-efficient vehicles; over the longer term, there is greater potential for responses that will shape the overall distribution and use of freight and passenger transportation services. For intercity passenger rail, factors such as high levels of population density, expected population growth along a corridor, and strong business and cultural ties between cities can lead to a higher demand for intercity passenger travel. In order for rail to be competitive with other transportation modes, it needs to be time- and price-competitive and have favorable service characteristics related to frequency, reliability, and safety. Further, high-speed rail has more potential to attract riders in corridors experiencing heavy intercity travel on existing modes of transportation—particularly where air transportation has high traffic levels and a large share of the market over relatively short distances—and where there is, or is projected to be, growth in congestion and constraints on the capacity of existing systems. For example, rail traffic in the densely populated Northeast Corridor is highly competitive with other modes, and Amtrak now has a 65 percent share of the air-rail market between Washington, D.C., and New York and a 52 percent share between New York and Boston.
The potential for network effects is also an important factor in the level of traffic that may shift to rail, as more riders are attracted when a line is located where it can carry traffic to a large number of destinations or connect to other modes. For example, local transit systems can serve as feeders that contribute to the success of intercity passenger rail operations. Passenger modes can also work as complements if, for example, passenger rail service delivers passengers to airports. DOT has indicated where passenger rail generally competes with other modes. For example, for intercity distances of 100-600 miles, in corridors with moderate population densities, high-speed rail competes with auto and bus, and at high population densities it competes with air, as shown in figure 2. In freight markets, one mode may have a distinct comparative advantage over another for certain types of shipments, thereby limiting the potential for traffic to shift to rail. For example, carriage of bulk commodities (e.g., coal) relies almost entirely on rail and waterways, while carriage of high-value and very time-sensitive commodities is dominated by truck and aviation. Conversely, modes often work as complements to complete a shipment. Intermodal freight is designed to move on multiple modes, using a container that can be moved from a truck to a train to a ship without handling any of the freight itself when changing modes. In other cases, the modes may be substitutable for certain types of trips and will compete directly for shipments, or for segments of shipments, based on price and performance. For example, some long-haul trucking and rail shipments may be substitutable. DOT has produced some basic parameters that influence competition across the modes for freight, as shown in figure 3.
The extent to which mode-shifting is possible in the United States is difficult to estimate and will largely be determined by the types of parameters discussed above, such as whether shipping is feasible by another mode (e.g., rail lines may not be available for some routes), or practical (e.g., sending heavy coal shipments long distance by truck or time-sensitive shipments by rail may not be practical), and by the relative prices and other service characteristics of shipping by different modes. To further explore the potential for mode shift, we used a computer model developed by DOT to simulate the short-term change in VMT resulting from a 50-cent increase in per-mile truck rates. We simulated two scenarios: one using the model’s default assumptions and one in which the assumptions pertaining to truck speed, reliability, and loss and damage were adjusted to make truck relatively more costly than rail. Under both scenarios, the 50-cent increase in truck rates (an increase of roughly 30 percent) resulted in less than a 1 percent decrease in truck VMT. Although both the default scenario and the alternative scenario produced similar estimates, these simulations are only suggestive, rather than definitive, of the impact that an increase in per-mile truck rates might have on VMT reduction. While the results of our simulation suggest that a 50-cent increase in per-mile truck rates would have a limited impact on diversion of freight from truck to rail, data limitations prevent us from making precise predictions with a high level of confidence. See appendix IV for a more detailed description of our modeling efforts, data quality issues, and a full list of assumptions in the model. 
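The simulation result can be restated as an implied short-run demand elasticity. The sketch below uses an assumed VMT change of -0.9 percent, which is consistent with (but not a figure reported in) the "less than 1 percent" result above, against the roughly 30 percent rate increase.

```python
# Back-of-the-envelope implied elasticity from the simulation described
# above: a roughly 30 percent increase in per-mile truck rates yielding
# slightly under a 1 percent drop in truck VMT. The -0.9 percent figure
# is an assumed illustration, not a reported model output.
def implied_elasticity(pct_change_quantity, pct_change_price):
    """Percentage change in quantity per percentage change in price."""
    return pct_change_quantity / pct_change_price

e = implied_elasticity(-0.9, 30.0)
# |e| is roughly 0.03: truck demand in this scenario is highly
# inelastic to the rate increase, so little freight diverts to rail.
```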
In both the United States and in the other countries we visited—where freight and passenger traffic generally share the same rail infrastructure—the potential benefits of a policy designed to shift freight traffic to rail are also affected by the amount of capacity available or planned on the rail network to accommodate a shift in traffic, as well as the capacity available or planned on competing transportation modes. For example, freight rail officials we met with in the United States indicated that in heavily congested corridors, such as in the Northeast, there is limited capacity available to accommodate both planned freight rail projects and proposed intercity passenger rail traffic. Plans for new dedicated high-speed rail lines would eliminate some of these capacity-sharing issues and could potentially create the capacity needed to accommodate both freight and improved or expanded passenger service, but must be weighed against the costs associated with constructing and maintaining new equipment and infrastructure, as well as acquiring rights of way for the track. Furthermore, significant investment and improvements to operations for highway or airport infrastructure could offset the impact of policies designed to shift passenger or freight traffic to rail. For example, the FAA is currently pursuing modernization of the air transportation system to create additional capacity and efficiencies. If, as a result, flights become more efficient and travel times decrease, then travelers originally expected to shift to rail as a result of the implemented policy may not do so. In contrast, other policies in place concurrently may contribute to improvements in environmental or congestion benefits, as separate policies may work together and lead to greater cumulative benefits. In either case, it can be difficult to isolate the impact of a given policy because of these other factors.
Following are descriptions of how shifting traffic to rail can address externalities and produce benefits, as well as some of the factors that affect the extent to which those benefits may materialize. Reduced greenhouse gas emissions and increased fuel efficiency: Rail emits fewer air emissions and is generally more fuel efficient than trucks. For example, a report by the American Association of State Highway and Transportation Officials (AASHTO) cites an American Society of Mechanical Engineers estimate that 2.5 million fewer tons of carbon dioxide would be emitted into the air annually if 10 percent of intercity freight now moving by highway were shifted to rail, assuming such traffic has the potential to shift. A recent study conducted by FRA comparing the fuel efficiency of rail to freight trucks calculated that rail had fuel efficiencies ranging from 156 to 512 ton-miles per gallon, while trucks had fuel efficiencies ranging from 68 to 133 ton-miles per gallon. According to Amtrak officials, their intercity passenger rail service has also been shown to be more energy efficient than air or passenger vehicle traffic. In addition, passenger and freight rail can be electrified to eliminate even the current emissions generated by rail transport, as alternative power sources (e.g., hydro or nuclear) may be used to generate electric propulsion. For example, many of the routes in the United Kingdom are electrified, and efforts are under way to electrify additional segments of the rail network in order to reduce emissions. While rail generally provides favorable emissions attributes and fuel efficiency in comparison with highway and air travel, many factors could affect the extent to which environmental benefits are achieved. These factors may include the type of train equipment, the mix of commodities being transported, the length of the rail route versus the truck route for a given shipment, traffic volume, and capacity.
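Using the fuel-efficiency ranges cited above, a rough calculation shows the scale of the fuel and emissions difference for a given freight task. The mid-range efficiencies chosen (100 and 330 ton-miles per gallon) and the diesel CO2 factor of roughly 10.2 kg per gallon are our assumptions for illustration, not figures from the FRA study.

```python
# Rough fuel/CO2 comparison using mid-range values from the ranges
# cited above (rail: 156-512 ton-miles/gal; truck: 68-133). The
# mid-range picks and the ~10.2 kg CO2 per gallon of diesel are
# illustrative assumptions only.
DIESEL_KG_CO2_PER_GALLON = 10.2  # assumed diesel emission factor

def gallons(ton_miles, ton_miles_per_gallon):
    """Fuel burned to move a given freight task."""
    return ton_miles / ton_miles_per_gallon

def co2_kg(ton_miles, ton_miles_per_gallon):
    """CO2 emitted, in kilograms, for the same task."""
    return gallons(ton_miles, ton_miles_per_gallon) * DIESEL_KG_CO2_PER_GALLON

task = 1_000_000                 # ton-miles of freight to move
truck_co2 = co2_kg(task, 100)    # assumed mid-range truck efficiency
rail_co2 = co2_kg(task, 330)     # assumed mid-range rail efficiency
co2_avoided = truck_co2 - rail_co2  # if the task shifts entirely to rail
```

Under these assumptions, moving the task by rail burns roughly a third of the fuel, which is why even partial diversion can yield the kinds of aggregate savings the AASHTO report describes; the realized savings depend on the caveats listed above (equipment, route length, traffic volume, and capacity).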
In addition, if the current transportation system is not designed to facilitate rail transport, it may be necessary to invest in additional capital infrastructure or build new rail yards closer to urban areas, which could carry additional environmental costs and may diminish the extent of potential net benefits. Furthermore, how transport system users respond to a given policy will also affect the extent to which the policy generates any benefits. For example, a policy that raises the price of road transport through tolling could lead freight haulers to improve the load factor of existing road shipments by consolidating shipments or increasing return loads to reduce the number of empty return trips. A similar policy could also lead to reduced transport volumes due to reduced demand for the product being shipped. According to DOT officials, correctly pricing usage of the transportation system is an ongoing challenge, as incorrect pricing can lead to inefficiencies and misallocation of resources beyond what market conditions would otherwise allow. Other policies aside from mode shift can more directly target environmental externalities. More targeted policies—such as increasing fuel taxes or implementing a carbon pricing scheme—may encourage drivers to purchase more fuel-efficient vehicles or make fewer vehicle trips, without shifting significant traffic to rail. Congestion: Where passenger or freight rail service provides a less costly alternative to other modes—through more timely or reliable transport—individuals and shippers can shift out of more congested modes and onto rail, thus alleviating congestion. For certain goods, a train can generally carry the freight of 280 or more trucks, relieving congestion by removing freight trucks from the highways. Similarly, an intercity passenger train can carry many times more people than the typical passenger vehicle.
Consequently, if fewer vehicle miles are traveled, then there is less wear and tear on the highways and less cost to the public for related repairs and maintenance. However, congestion relief will vary based on specific locations, times of day, types of trips being diverted to another mode, and the conditions of the corridors and areas where trips are being diverted. For freight, long-haul shipments might have the most potential to shift to rail, but diversion of these trips to rail, while removing trucks from certain stretches of highway, may do little to address problems at the most congested bottlenecks in urban areas. Similarly, Amtrak officials noted that intercity passenger rail can provide travelers alternative options for travel in high-density corridors, which may help relieve congestion at capacity-constrained airports. If high-speed rail can divert travelers from making an intercity trip through congested highway bottlenecks or airports at peak travel times, then there may be a noticeable effect on traffic. However, any trips on a congested highway corridor that are diverted to another mode of travel, such as rail, may be at least partially replaced by other trips through induced demand. For example, once congestion on a highway has been reduced, making it easier to travel, more people may respond by choosing to drive on that highway where faster travel times are available, limiting the relief in the long run. Other policies can be implemented that more directly address congestion where it is most acute, such as congestion pricing (e.g., converting high-occupancy vehicle lanes to high-occupancy toll lanes) or other demand management strategies. Safety: While safety has improved across all transportation modes over time, both passenger and freight rail may have a comparative advantage over other modes.
Shippers and passengers who use rail in lieu of other modes may accrue measurable safety benefits because rail traffic is, for the most part, separated from other traffic. Because most rail accidents—both injuries and fatalities—involve traffic at limited locations such as grade crossings or on railroad property, safety benefits can be expected when more traffic is moved via rail. On a per-mile basis, passenger and freight rail are substantially safer than cars or trucks. For example, according to Amtrak, there were 8 passenger fatalities between 2003 and 2007. In addition, in 2007 most freight accidents occurred on highways—over 6 million—as compared with rail, which accounted for approximately 5,400 accidents. Between 2003 and 2007, freight rail averaged 0.39 fatalities per billion ton-miles, compared with 2.54 fatalities per billion ton-miles for trucks. There are a variety of policies and regulations that directly address safety concerns for each mode (e.g., safety standards and inspections for rail, vehicle safety features, etc.). Economic development: Rail investment may also generate wider economic impacts. In some cases, these types of impacts may reflect transfers of economic activity from one region to another and thus may not be viewed as benefits from a national perspective, or these impacts may already be accounted for through users’ direct benefits. As such, there is much debate about achieving these wider economic impacts and a number of challenges associated with assessing them. While high-speed rail may have wider economic impacts, the impact varies greatly from case to case and is difficult to predict. Estimates of benefits vary: one study suggested that wider economic benefits would not generally exceed 10 to 20 percent of measured benefits, while an evaluation of another proposed high-speed rail line estimated these benefits to add 40 percent to direct benefits.
There are a variety of other policies that could be implemented to help stimulate economic development without mode shift. In the United Kingdom and Germany, decision makers made a concerted effort to move traffic from other modes to rail through pricing policies, targeted grants, and infrastructure investments; these policies resulted in varying amounts of mode shift. The full extent of benefits generated from these policies is ultimately uncertain, though benefits realized included environmental and efficiency improvements or localized congestion relief. Foreign rail officials told us it was difficult to determine the full extent of the benefits due to complicating factors (as described throughout the previous section). While some benefits were attained through implementation of policies designed to shift traffic to rail, these benefits were not necessarily achieved in the manner originally anticipated or at the level originally estimated. Furthermore, it is uncertain whether the benefits attained were achieved in the most efficient manner, or whether similar benefits could have been attained through other policies at a lower cost. Road freight pricing policies: In 2005, the German government implemented a Heavy Goods Vehicle (HGV) tolling policy on motorways to generate revenue to further upgrade and maintain the transportation system and to introduce infrastructure charging based on the “user pays” principle by raising the price of road transport relative to rail. The HGV tolling policy was also designed to provide an incentive to shift approximately 10 percent of road freight traffic to rail and waterways in the interests of the environment and to deploy HGVs more efficiently. According to German Ministry of Transport officials, while the HGV toll policy did not result in the amount of mode shift originally anticipated, some level of environmental benefits and road freight industry efficiency improvements were realized.
These benefits are attributed to a more fuel-efficient HGV fleet making fewer empty trips. For example, officials told us that, in response to the tolling policy, trucking companies purchased lower-emission vehicles, which are charged a lower per-mile rate, in order to decrease their tolls. For the most part, German freight shipments continued to be made primarily on trucks, and trucks’ mode share has not changed appreciably since the policy was instituted. Findings in a study conducted for the Ministry of Transport also indicated that transport on lower-emission trucks has increased significantly, totaling 49 percent of all freight operations subject to tolls in 2009. According to German transport officials, the share of freight moved by rail has increased only slightly during the last decade. However, this increase cannot be clearly attributed to a particular policy tool, such as the HGV toll. Other countries have had similar experiences implementing pricing policies to provide incentives to shift traffic to rail. For example, the Swiss government implemented an HGV fee in 2001 on all roads to encourage freight traffic to shift from road to rail. This policy similarly resulted in improved efficiency because the trucking industry adapted its fleet and replaced some high-emission vehicles with new lower-emission vehicles. According to Swiss Federal Office of Transport documentation, HGV traffic through the Swiss Alps also decreased compared with what it would have been without introduction of the fee. However, to fully assess the magnitude of benefits of these types of tolling policies, these improvements would need to be weighed against the costs of implementing the policy, and this type of analysis has not been conducted.
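The fleet-upgrade incentive the German officials described can be sketched with a simple, hypothetical calculation. The per-kilometer rates and annual mileage below are illustrative assumptions, not the actual German HGV toll schedule.

```python
# Sketch of the incentive created by an emission-differentiated per-kilometer
# toll: a lower rate for cleaner trucks makes upgrading pay off over a year
# of tolled driving. Rates and mileage are hypothetical illustrations.
TOLL_PER_KM = {"lower_emission": 0.14, "higher_emission": 0.20}  # EUR/km (assumed)

def annual_toll(km_per_year, emission_class):
    """Toll paid over a year of tolled driving for one truck."""
    return km_per_year * TOLL_PER_KM[emission_class]

km = 120_000  # assumed annual tolled mileage for one truck
saving = annual_toll(km, "higher_emission") - annual_toll(km, "lower_emission")
print(f"Annual toll saving from upgrading one truck: EUR {saving:,.0f}")
```

Under these assumed figures, the per-truck saving is what an operator would weigh against the cost of replacing the vehicle, which is why the observed response was fleet renewal rather than mode shift.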
Freight rail operations and capital support: The United Kingdom’s Department for Transport uses two grant programs that provide financial support for specific rail freight projects to encourage mode shift and provide congestion relief, based on the view that road freight generally does not pay its share of the significant external costs that it creates. The department’s Mode Shift Revenue Support scheme provides funding for operational expenses, and the Freight Facilities Grant program supplements capital projects for freight infrastructure. The British government’s experience with these policies—which draw from a relatively small pool of annual funding and are intentionally designed to serve a targeted market—led to localized benefits for particular segments of the freight transport market in specific geographic locations, such as congested bottlenecks near major ports. An evaluation of the Freight Facilities Grant program found that the program funding played an important role in developing or retaining rail freight flows, traditionally focused on bulk commodities. According to officials we met with, the grants from the Mode Shift Revenue Support scheme encourage mode shift principally for the economically important and growing intermodal container market and have been successful in reducing congestion on specific road freight routes because the program focuses on container flows from major ports (in which rail now has a 25 percent market share). These officials noted that, of approximately 800,000 truck journeys removed from the road between 2009 and 2010 as a result of the Mode Shift Revenue Support grants, 450,000 were removed from routes serving England’s largest port—the Port of Felixstowe. Therefore, officials said the grants appear to have led to a decrease in truck traffic concentrated in specific locations for a particular segment of the freight transport industry.
Intercity passenger rail infrastructure investments: Few postimplementation studies have been conducted to empirically assess the benefits resulting from investment in high-speed intercity passenger rail. Based on our previous work, some countries that have invested in new high-speed intercity passenger rail services have experienced discernible mode shift from air to rail where rail is trip-time competitive. For example, the introduction of high-speed intercity rail lines in France and Spain led to a decrease in air travel and an increase in rail ridership, and Air France officials estimated that high-speed rail is likely to capture about 80 percent of the air-rail market when rail journey times are between 2 and 3 hours. With the introduction of the Madrid-Barcelona high-speed rail line in February 2008, air travel dropped an estimated 30 percent. In France, high-speed rail has captured 90 percent of the Paris-Lyon air-rail market. While discernible mode shift has been observed, the extent to which net benefits were achieved is unclear. Factors such as the proportion of traffic diverted from air or conventional rail versus newly generated traffic affect the extent of benefits. Furthermore, quantifying any resulting environmental benefits, such as reduced greenhouse gas emissions, or assessing the extent to which these benefits exceed the costs associated with developing these new high-speed rail routes is difficult. Evaluations conducted in Spain and France have indicated that net benefits were less than expected due to higher costs and lower than expected ridership, although, in France, the evaluations still found acceptable financial and social rates of return. Policies that provide incentives to shift passenger and freight traffic to rail offer the opportunity to attain a range of benefits simultaneously, but a variety of complicating factors can have a significant impact on the extent to which these benefits may be attained.
In addition, if these policies are unable to generate the ridership or demand necessary to shift traffic from other modes to rail, the potential benefits may be further limited. While officials from some European countries we visited indicated that they have attained benefits from policies intended to shift traffic to rail, gains have been mixed, and the extent of benefits attained has depended on the specific context of policy implementation in each location, as the benefits realized are directly related to the particulars of each project. Furthermore, it is not always clear that the policy goals were feasible to begin with or that mode shift would have been the most cost-effective way to achieve the benefits sought. Some officials and stakeholders we met with told us that it is very difficult to attribute causation and draw conclusions regarding the effectiveness of transportation policy tools because so many factors are at play and may change simultaneously. In some cases, officials cannot determine the full extent of benefits or link impacts to a given policy with certainty, making it difficult for decision makers to know what to expect from future policies being considered or developed. In the next section, we look at two recent U.S. investment programs that awarded grant funding to freight and intercity passenger rail projects. Although neither of these programs was adopted for the specific purpose of shifting passenger or freight traffic to rail, both programs seek to attain benefits, such as economic development and environmental benefits, by investing in rail. As previously noted, the degree to which benefits can be generated depends on a variety of factors, including the ability to attract riders or freight shipments either through mode shift or new demand.
We discuss how applicants assessed the potential benefits and costs of their specific projects, based on the particular circumstances of each project, and the usefulness of those assessments for federal decision makers in making their investment decisions. According to DOT officials from both programs, as well as our assessment of 40 randomly selected rail-related TIGER and HSIPR applications, information on project benefits and costs submitted by applicants to the TIGER and HSIPR programs varied in both quality and comprehensiveness. While a small number of analyses of project benefits and costs were analytically strong—with sophisticated numerical projections of both benefits and costs and detailed information on their data and methodology—many others (1) did not quantify or monetize benefits to the extent possible, (2) did not appropriately account for benefits and costs, (3) omitted certain costs, and (4) did not include information on data limitations, methodologies for estimating benefits and costs, and uncertainties and assumptions underlying their analyses. First, the majority of applications we assessed contained primarily qualitative discussion of project benefits, such as potential reductions in emissions, fuel consumption, or roadway congestion, which could have been quantified and monetized. For instance, while 36 of the 40 applications we assessed included qualitative information regarding potential reductions in congestion, only 20 provided quantitative assessments of these benefits, and only 13 provided monetary estimates. This pattern was consistent across categories of benefits we assessed; however, some categories of impacts, such as safety and economic development, were even less frequently quantified. While federal guidelines, including Executive Order No. 12893, allow for discussion of benefits in a qualitative manner, they note the importance of quantifying and monetizing benefits to the maximum extent practicable.
However, in some cases, certain categories of impacts may be more difficult to quantify than others, and qualitative information on potential benefits and costs can be useful to decision makers. Second, common issues identified by DOT economists in the applications they assessed included failure to discount future benefits and costs to present values or failure to use appropriate discount rates, double counting of benefits, and presenting costs only for the portion of the project accounted for in the application while presenting benefits for the full project. Similarly, 33 of the 40 applications we assessed did not use discount rates as recommended in OMB Circular No. A-94 and OMB Circular No. A-4. Further, DOT economists who reviewed assessments of project benefits and costs contained in selected TIGER applications stated that many applicants submitted economic impact analyses, which are generally used to assess how economic impacts would be distributed throughout an economy, not to conduct benefit-cost analysis of policy alternatives. Economic impact analyses may contain information that does not factor into calculations of net benefits, such as tax revenue and induced jobs, and do not generally include information on other key benefits that would be accounted for in a benefit-cost analysis, such as emissions reduction or congestion relief. Applicants’ focus on economic impacts in their assessments of project benefits may have stemmed from additional funding criteria that DOT identified for both programs related to job creation and economic stimulus, as well as decision makers’ focus on these issues at the state and local levels. Third, important costs were often omitted from applications. In many cases, applicants would estimate a benefit but not account for associated costs, such as increased noise, emissions, or potential additional accidents from new rail service.
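To show why the discounting step matters, the sketch below converts hypothetical benefit and cost streams to present values using the 7 percent real discount rate that OMB Circular No. A-94 specifies as its base case; the streams themselves are invented for illustration.

```python
# Minimal sketch of the discounting step many applicants omitted: convert
# future benefit and cost streams (year 0 first) to present values before
# comparing them. The 7 percent rate is OMB Circular A-94's base case;
# the dollar streams below are hypothetical.

def present_value(stream, rate):
    """Discount a stream of annual values (year 0 first) to present value."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(stream))

DISCOUNT_RATE = 0.07
costs = [100.0, 0.0, 0.0, 0.0, 0.0]        # $M: construction cost up front
benefits = [0.0, 28.0, 28.0, 28.0, 28.0]   # $M: benefits accrue in later years

npv = present_value(benefits, DISCOUNT_RATE) - present_value(costs, DISCOUNT_RATE)
print(f"Net present value at 7 percent: ${npv:.1f}M")
```

In this hypothetical stream, undiscounted benefits (112) exceed undiscounted costs (100), but discounting at 7 percent reverses the comparison, which is exactly the kind of error that skipping this step can hide.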
For instance, applicants often counted emission reductions from mode shift to rail as a benefit but did not include corresponding increases in emissions from increased rail capacity and operation in their calculations of net benefits. Our assessments of TIGER and HSIPR applications found that of the applicants who projected potential safety or environmental benefits for their projects, only three applicants addressed potential safety costs, and only four applicants addressed potential environmental costs. Finally, we found that analyses of benefits and costs in many applications consistently lacked other key data and methodological information that federal guidelines such as OMB Circular No. A-94 and OMB Circular No. A-4 recommend should be accounted for in analyses of project benefits and costs. Notably, the majority of the applications to the TIGER and HSIPR programs that we reviewed did not provide information related to uncertainty in projections, data limitations, and the assumptions underlying their models. While a small number of applications we assessed provided information in all of these areas, 31 out of 40 did not provide information on the uncertainty associated with their estimates of benefits and costs, 28 out of 40 did not provide information on the models or other calculations used to arrive at estimates of benefits and costs, and 36 out of 40 did not provide information on the strengths and limitations of data used in their projections. Furthermore, of those that did provide information in these areas, the information was generally not comprehensive. For example, multiple applications provided information on the models or calculations used to quantify or monetize benefits but did not do so for all the benefit and cost calculations included in their analyses.
Applicants, industry experts, and DOT officials we spoke with reported that numerous challenges related to performing assessments of the benefits and costs of intercity passenger or freight rail projects can contribute to variation in the quality of assessments of project benefits and costs in applications to federal programs such as the TIGER and HSIPR programs. These challenges include (1) limited time, resources, and expertise for performing assessments of project benefits and costs; (2) a lack of clear guidance on standard values to use in the estimation of project benefits; and (3) limitations in data quality and access. These challenges affected the usefulness of the information provided to decision makers, and, as a result, changes have been made or are being considered for future rounds of funding. Performing a comprehensive assessment of a proposed project’s potential benefits and costs is time and resource intensive and requires significant expertise. According to experts, a detailed and comprehensive benefit-cost analysis requires careful analysis and may call for specialized data collection in order to develop projections of benefits and costs. The short time frames for assembling applications for the TIGER and HSIPR programs—which were designed to award funds quickly in order to provide economic stimulus—may have contributed to the poor quality of many assessments. In addition, according to DOT officials, many applicants to the TIGER and HSIPR programs may not have understood what information to include in their analyses. Because federal requirements for state rail planning are recent, states are still building their capacity to perform complex analyses to assess rail projects, and, in many cases, rail divisions within state departments of transportation are very small.
State rail divisions often face funding and manpower issues since there is typically no dedicated state funding for rail services, and state transportation planning has historically focused more on highway projects. As a result, some applicants to competitive federal grant programs may have more capacity to perform assessments of project benefits and costs than others. For example, according to DOT officials, freight railroads have more resources to devote to developing models and estimating potential project benefits and costs. Standard values to monetize some benefits are not yet fully established, which can create inconsistency in the values used by applicants in their projections. While DOT has published guidance on standard estimates for the value of travel time and the value of a statistical life—which can be used to estimate the value of congestion mitigation efforts and safety improvements, respectively—values for other benefits are less clear. For instance, according to DOT officials, uncertainties associated with analyzing the value of time for freight shipments prevent DOT from issuing specific guidance in this area. In addition, there are substantial uncertainties associated with analyzing the value of many benefits, such as reductions in greenhouse gas emissions. While mode shift to rail may reduce pollution and greenhouse gas emissions, experts do not agree on the value to place on that benefit. DOT has issued guidance on values for use in calculating the social benefits of reduced pollutant emissions; however, according to modeling experts we interviewed, disagreement regarding how to value different benefits can lead some analysts to limit their assessments of benefits and costs to only those impacts that can be monetized, while others may include all categories of benefits and costs in their assessments. As a result, some TIGER and HSIPR applicants may have used differing values to monetize projected benefits and costs, while others did not monetize benefits at all.
Without clear guidance to applicants on preferred values for use in assessments of project benefits and costs, DOT decision makers may be hindered in their ability to compare the results of assessments of benefits and costs across projects or across modes. A standard set of values for key benefit categories may enable transportation officials to more readily compare projects and potentially place more weight on the results of assessments of project benefits and costs in their decision-making processes. According to DOT officials, historically lower levels of state and federal funding for rail compared with other modes of transportation have contributed to data gaps that impact the ability of applicants to project benefits and costs for both intercity passenger rail and freight rail projects. For instance, lack of data on intercity passenger travel demand made it difficult for some applicants to the HSIPR program to quantify potential benefits for some new high-speed rail lines. The lack of data may be related to cuts to federal funding for the Bureau of Transportation Statistics, which resulted in a decreased emphasis on the collection of rail-related data. Multiple state and association officials stated that previous state and national surveys of travel behavior did not capture traveler purposes for intercity travel and did not have a sufficient number of intercity traveler responses for use in travel modeling. In addition, lack of access to proprietary data on goods movement made it challenging for some applicants to the TIGER program to quantify benefits that might be associated with freight rail. According to officials from the California Department of Transportation (Caltrans), when performing analyses to estimate project benefits and costs, Caltrans employees had to manually count freight trains for a 24-hour period in order to gather data for use in their analyses.
Furthermore, state transportation officials we spoke with indicated that the quality of data available for use in projecting benefits and costs of a project is often inconsistent. Officials we interviewed stated that data included in assessments of project benefits and costs are often from different years, contain sampling error, and may be insufficient for their intended use. These limitations lessen the reliability of estimates produced to inform transportation decision-making, as available data provide critical inputs for travel models. Modeling and forecasting limitations also make it harder to project shifts in transportation demand and related benefits and costs accurately. Benefit-cost analyses of transportation projects depend on forecasts of projected levels of usage, such as passenger rail ridership or potential freight shipments, in order to inform calculation of benefits and costs. Limitations of current models and data make it difficult to predict changes in traveler behavior, changes in warehousing and shipper behaviors for businesses, land use, or usage of nearby roads or alternative travel options that may result from a rail project. Since transportation demand modeling depends on information on traveler or shipper preferences in order to inform predictions, the lack of good intercity traveler and shipper demand data greatly impacts the quality of projections, particularly for new intercity passenger or freight rail service where no prior data exists to inform demand projections. As a result of the limitations described above, DOT officials stated that the assessments of benefits and costs provided by TIGER and HSIPR applicants were less useful to decision makers than anticipated. 
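To illustrate why demand projections hinge on traveler-preference data, the sketch below uses a simple binary logit mode-choice model, a standard form in travel demand modeling. The coefficients and trip attributes are illustrative placeholders, not values from any calibrated model.

```python
import math

# Hypothetical binary logit mode-choice sketch. Without good intercity
# traveler survey data, analysts have little empirical basis for choosing
# the time and cost coefficients that drive the predicted shift.
def rail_share(time_rail, time_car, cost_rail, cost_car,
               b_time=-0.05, b_cost=-0.02):
    """Predicted probability a traveler chooses rail over car."""
    u_rail = b_time * time_rail + b_cost * cost_rail  # utility of the rail trip
    u_car = b_time * time_car + b_cost * cost_car     # utility of the car trip
    return math.exp(u_rail) / (math.exp(u_rail) + math.exp(u_car))

# Identical trips split 50/50; a faster rail trip raises rail's predicted
# share, but the size of the response depends entirely on the assumed
# coefficients (times in minutes, costs in dollars, all hypothetical).
print(rail_share(180, 180, 40, 40))   # equal attributes
print(rail_share(120, 180, 40, 40))   # rail an hour faster
```

Because the predicted mode shift is only as credible as the assumed preference parameters, the lack of intercity traveler and shipper demand data described above feeds directly into the uncertainty of ridership and benefit projections.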
In general, the majority of rail-related applications we reviewed that were forwarded for additional consideration for the TIGER program contained assessments of project benefits and costs that were either marginally useful or not useful to DOT officials in their efforts to determine whether project benefits were likely to exceed project costs. Overall, 62 percent of forwarded rail-related applications had assessments of benefits and costs that were rated by DOT economists as “marginally useful” or “not useful,” and 38 percent had assessments that were rated as “very useful” or “useful” (see fig. 4). However, DOT officials noted that railroads generally did a better job with the benefit-cost analyses in their applications than applicants from other modes. While applicants to the HSIPR program were not required to conduct a benefit-cost analysis, the Federal Register notice for the program stated that information on benefits and costs provided by applicants would be used by DOT to conduct a comprehensive benefit-cost analysis for projects. However, according to FRA officials, the quality of the information provided prevented DOT from being able to use the information in this manner. While it is possible to offset the impact of the limitations described above and improve the usefulness of assessments of benefits and costs to decision makers by providing clear information on assumptions and uncertainty within analyses, as we stated above, very few TIGER and HSIPR applicants did so. Without information on projection methodologies and assumptions, DOT officials were not able to consistently determine how demand and benefit-cost projections were developed and whether the projections were reasonable. As a result, officials for both programs focused on simply determining whether project benefits were likely to exceed project costs, rather than on a more detailed assessment of the magnitude of projects’ benefits and costs in relation to one another. See app. IV for a discussion of the challenges related to assumptions and uncertainty we encountered during our attempt to use a model to predict freight mode shift from truck to rail. The varying quality and focus of assessments of project benefits and costs included in both TIGER and HSIPR applications resulted in additional work for DOT officials to determine whether project benefits were likely to exceed project costs. For example, DOT officials stated that DOT economists for the TIGER program spent 3 to 4 hours per application examining whether it contained any improper analysis techniques or other weaknesses, seeking missing information, and resolving issues in the analyses. For the HSIPR program, a DOT economist with subject matter expertise reviewed the demand forecasts provided by selected Track 2 applicants, devoting significant time to assessing the level of risk that the uncertainty in these projections was likely to pose to the ultimate success of the project. In order to improve the quality of applicant assessments of project benefits and costs, DOT economists identified limitations of the benefit-cost analyses submitted during TIGER I and used that information to develop guidance for TIGER II. In the Federal Register notice for TIGER II, DOT provided additional information to applicants regarding what should be included in assessments of project benefits and costs. This guidance included information on the differences between benefit-cost analysis and economic impact analysis, assessment of alternatives in relation to a baseline, discounting, forecasting, transparency and reproducibility of calculations, and methods of calculating various benefits and costs. As part of its guidance on assessing costs, DOT noted that applicants should use life-cycle cost analysis in estimating the costs of projects.
For example, DOT guidance states that external costs, such as noise, increased congestion, and environmental pollutants resulting from construction or other project activities, should be included as costs in applicants’ analyses. Furthermore, applicants should include, to the extent possible, other costs caused during construction, such as delays and increased vehicle operating costs. FRA also plans to alter HSIPR requirements in order to increase the quality of information on project benefits and costs provided by future applicants. According to FRA officials, while applicants to the second round of HSIPR funding were presented with similar guidelines for assessing project benefits and costs as those provided in the first round, future HSIPR applicants will be required to provide more rigorous projections of ridership, benefits, and costs and to revise their assessments of project benefits and costs based on their improved ridership projections. Officials noted, however, that the process will be iterative and anticipated that models for the high-speed rail program will improve as domestic historical data on ridership becomes available over time. In addition, officials stated that FRA plans to take steps to encourage consistency in the methodologies grant applicants use to project demand, benefits, and costs. For instance, FRA is currently in the preliminary stages of developing a benefit-cost framework for states and localities, which represent the majority of applicants to programs such as TIGER and HSIPR, to use in assessing rail projects. Officials stated that FRA plans to issue guidance on performing assessments of benefits and costs for passenger rail projects when the framework is fully developed but did not provide a timeline for its development. While DOT officials for both programs have taken steps to improve the quality of benefit-cost information and associated analyses in the short term, other steps are necessary to improve quality over time. 
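To illustrate the discounting step called for in this guidance (and in OMB Circular A-94, which recommends a 7 percent real discount rate as the base case), the following is a minimal sketch in Python. The benefit and cost streams are purely hypothetical; this is not DOT's evaluation tool, only an example of computing present values and a benefit-cost ratio.

```python
# Illustrative only: discounting hypothetical annual benefit and cost
# streams to present value, as OMB Circular A-94 directs for federal
# benefit-cost analysis. All dollar figures are invented.

def present_value(stream, rate):
    """Discount a list of annual amounts (year 0 first) to present value."""
    return sum(amount / (1 + rate) ** year for year, amount in enumerate(stream))

# Hypothetical 5-year project: up-front construction cost in year 0,
# then annual benefits (e.g., travel-time savings) and operating costs.
benefits = [0, 30.0, 30.0, 30.0, 30.0]   # $ millions per year
costs    = [80.0, 5.0, 5.0, 5.0, 5.0]    # $ millions per year

rate = 0.07  # OMB Circular A-94 base-case real discount rate

pv_benefits = present_value(benefits, rate)
pv_costs = present_value(costs, rate)

print(f"PV of benefits: ${pv_benefits:.1f}M")
print(f"PV of costs:    ${pv_costs:.1f}M")
print(f"Net present value: ${pv_benefits - pv_costs:.1f}M")
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")
```

Because the large cost falls in year 0 while benefits accrue later, discounting shrinks the benefit stream more than the cost stream; sensitivity analysis over the discount rate (A-94 also suggests testing 3 percent) shows how conclusions depend on this assumption.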
Some of these additional steps, such as developing historical data for intercity passenger rail demand, making improvements to forecasting and modeling, and increasing accessibility and quality of key data, may take more time. Nonetheless, improving the quality of benefit and cost information considered for programs such as TIGER and HSIPR could simplify the decision-making process and lend more credence to the merit of the projects ultimately selected for funding. Difficult and persistent problems face the U.S. transportation system today. Our system is largely powered by vehicles that use fossil fuels that produce harmful air emissions and contribute to climate change. Our existing infrastructure is aging and, in many places, is in a poor state of repair. Demand for freight and passenger travel will continue to grow, and the growing congestion in urban areas and at key bottlenecks in the system costs Americans billions of dollars in wasted time, fuel, and productivity each year. Adding to these problems, expanding or improving the efficiency of our existing road and air transportation networks has proven difficult, costly, and time-consuming. Both the HSIPR and TIGER programs provided a new opportunity to invest in rail—a mode that has historically been underrepresented in the U.S. transportation funding framework. Some see investment in rail infrastructure, along with other policies designed to shift traffic to rail, as important to addressing these problems, pointing to rail’s advantages over cars and freight trucks in terms of energy efficiency, safety, and lower emissions. While investments in rail or policies designed to shift traffic to rail may generate some benefits—as occurred to some degree in the United Kingdom and Germany—benefits must be weighed against direct project costs and other costs (e.g., noise) to determine whether an investment or policy produces overall net benefits. 
Further, close attention must be paid to the extent to which freight and passenger travel can actually shift to rail from other modes, given the choices available to, and the preferences of, travelers and shippers. While an assessment of benefits and costs is only one factor among many in decision making regarding these investments and policies, a decision maker’s ability to weigh information depends on the quality of benefit and cost information provided by project sponsors—regardless of whether this information is provided in a benefit-cost analysis or a more general discussion or enumeration of benefits and costs. We found that many TIGER and HSIPR applicants struggled to provide the benefit-cost information requested or to use appropriate values designated for their respective program. The lack of consistency and completeness in the benefit-cost information provided makes it more difficult for decision makers to conduct direct project comparisons or to fully understand the extent to which benefits are achievable and the trade-offs involved. While the shortened time frames of the programs and resource limitations among project sponsors were key causes of the varying quality of analyses, data limitations (including a lack of historical data—particularly with respect to high-speed rail), data inconsistencies, and data unavailability also accounted for some limitations in applicants’ benefit-cost information and will continue to impact these analyses in future funding rounds. Until data quality, data gaps, and access issues are addressed for the data inputs needed for rail modeling and analysis, projections of rail benefits will continue to be of limited use. In addition, almost no applicants discussed limitations in their analysis, including the assumptions made and levels of uncertainty in their projections. 
Only when assumptions and uncertainty are conveyed in assessments of benefits and costs can decision makers determine the appropriate weight to give to certain projections. To its credit, DOT has provided more explicit guidance to TIGER applicants in its second round of grant applications on how to meet federal benefit-cost analysis guidelines. While such guidance should result in improved quality of benefit-cost information provided for this program, this guidance neither ensures consistency across analyses in terms of common data sources, values, and models nor affects how benefits and costs are evaluated under programs that invest in other modes (such as the Federal-Aid Highway Program) and do not have a benefit-cost analysis requirement. Providing more standardized values for calculating project benefits and costs and developing a more consistent approach to assessing project benefits and costs so that proposed projects across modes may be more easily compared with one another can have numerous benefits. For instance, standardized values and a consistent approach allow for more confidence that projects and policies chosen will produce the greatest benefits relative to other alternatives, give more credence to investment decisions across programs and modes, and limit DOT officials’ need to invest time and resources in order to use the information as part of the decision-making process. If benefit-cost considerations are ever to play a greater role, DOT will need to look at ways it can improve the quality and consistency of the data available to project sponsors.
To improve the data available to the Department of Transportation and rail project sponsors, we recommend that the Secretary of Transportation, in consultation with Congress and other stakeholders, take the following two actions:

Conduct a data needs assessment and identify which data are needed to conduct cost-effective modeling and analysis for intercity rail, determine limitations to the data used for inputs, and develop a strategy to address these limitations. In doing so, DOT should identify barriers to accessing existing data, consider whether authorization for additional data collection for intercity rail travel is warranted, and determine which entities shall be responsible for generating or collecting needed data.

Encourage effective decision making and enhance the usefulness of assessments of benefits and costs for both intercity passenger and freight rail projects by providing ongoing guidance and training on developing benefit and cost information for rail projects and by providing more direct and consistent requirements for assessing benefits and costs across transportation funding programs. In doing so, DOT should:

Direct applicants to follow federal guidance outlined in both Executive Order 12893 and OMB Circulars A-94 and A-4 in developing benefit and cost information.

Require applicants to clearly communicate their methodology for calculating project benefits and costs, including information on assumptions underlying calculations, strengths and limitations of data used, and the level of uncertainty in estimates of project benefits and costs.

Ensure that applicants receive clear and consistent guidance on values to apply for key assumptions used to estimate potential project benefits and costs.

We provided copies of our draft report to DOT, Amtrak, and EPA for their review and comment. DOT provided technical comments and agreed to consider the recommendations.
Amtrak and EPA provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Transportation, the Administrator of the Federal Railroad Administration, Amtrak, EPA, the Director of the Office of Management and Budget, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To better understand the potential net benefits of intercity passenger and freight rail, we examined (1) the extent to which transportation policy tools that provide incentives to shift passenger and freight traffic to rail may generate emissions, congestion, and economic development benefits and (2) how project benefits and costs are assessed for investment in intercity passenger and freight rail and how the strengths and limitations of these assessments impact federal decision making. We conducted interviews with the Department of Transportation (DOT), the Environmental Protection Agency (EPA), and Amtrak. We also interviewed representatives from transportation coalitions and associations, metropolitan planning organizations, state DOTs, and transportation consultants. These interviews focused on methods to assess the benefits and costs of transportation investments and the limitations of and challenges to assessing benefits.
We also conducted interviews with officials from the High-Speed Intercity Passenger Rail (HSIPR), the Transportation Investment Generating Economic Recovery (TIGER), and Transportation Infrastructure Finance and Innovation Act (TIFIA) programs to gather insights into the usefulness of benefit-cost information in decision making. In addition to interviews with agency officials, interviews were conducted in three rail corridors (California, Midwest, and the Northeast) to ascertain additional information on challenges associated with conducting and communicating findings from benefit and cost assessments to decision makers. These interviews involved applicants and other corridor stakeholders who had applied to either or both the HSIPR and TIGER grant programs. Similarly, some of our interviews with organizations in the rail corridors included consultants, such as Cambridge Systematics and Parsons Brinckerhoff, that were involved in the development of studies for corridors. Table 3 lists the selected organizations whose officials and representatives we interviewed. We reviewed our prior reports and documentation from an array of sources, including the DOT Inspector General, Congressional Research Service, and Congressional Budget Office. In addition, we identified studies through our interviews with stakeholders and conducted an extensive systematic search of literature published in the last 15 years. We reviewed this information to identify studies that analyzed the benefits and costs of intercity passenger and freight rail, mode shift to intercity passenger or freight rail, or the potential net benefits that could be attained through mode shift. In general, we did not find a sufficient number of available studies that adequately addressed our researchable questions, had an appropriate scope, or utilized empirically reliable methodologies.
As a result, we used the studies and information we reviewed to inform the engagement as a whole and provided examples and illustrations of the potential costs and benefits that may be attained from policies that provide incentives to shift traffic to rail. In addition, we conducted case studies in the United Kingdom and Germany and asked officials to synthesize their experiences based on their professional judgment and data. Officials we met with also confirmed that it is difficult to causally link policy interventions to specific outcomes. We reviewed and assessed information on potential project benefits and costs included in selected applications to the HSIPR grant program and the TIGER grant program—20 applications from each grant program. We selected a nongeneralizable random sample of 40 applications from a larger pool of HSIPR and TIGER applications that we identified as including components related to intercity passenger rail or freight rail. For HSIPR, we included all applications submitted under Track 2 of the program, which focused on intercity passenger rail projects, in our selection pool, while for TIGER, we included all applications requesting more than $20 million that included components related to intercity passenger rail or freight rail in project descriptions provided by DOT. Twenty applications from each grant program were randomly selected for our review. The random sample was weighted to ensure approximately proportional representation of applications from both programs that were awarded funding by DOT relative to those that were not and, for the TIGER program, of applications that were selected by DOT for additional review during its application review process relative to those that were not.
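The proportional weighting described above amounts to a stratified random draw. The sketch below is a hypothetical illustration of that idea, not the sampling procedure we actually used; the pool, strata, and counts are invented.

```python
# Illustrative sketch: draw a fixed-size random sample from an
# application pool so that strata (e.g., awarded vs. not awarded) are
# represented roughly in proportion to their share of the pool.
import random

def proportional_sample(pool, key, sample_size, seed=0):
    """Randomly sample `sample_size` items, allocating draws across the
    strata defined by `key` in proportion to each stratum's size."""
    rng = random.Random(seed)  # seeded for reproducibility
    strata = {}
    for item in pool:
        strata.setdefault(key(item), []).append(item)
    sample = []
    for stratum_items in strata.values():
        # Proportional allocation of draws, rounded to whole applications.
        n = round(sample_size * len(stratum_items) / len(pool))
        sample.extend(rng.sample(stratum_items, min(n, len(stratum_items))))
    return sample[:sample_size]

# Hypothetical pool: 50 applications, 10 awarded and 40 not awarded.
pool = [{"id": i, "awarded": i < 10} for i in range(50)]
sample = proportional_sample(pool, key=lambda a: a["awarded"], sample_size=20)
print(len(sample), sum(a["awarded"] for a in sample))  # prints: 20 4
```

With a 20-application sample from this pool, awarded applications make up 20 percent of both the pool (10 of 50) and the sample (4 of 20), which is the proportional representation the weighting is meant to achieve.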
Information pertaining to project benefits and costs in each of the 40 randomly selected applications was independently reviewed by two of our analysts based on Office of Management and Budget (OMB) guidelines for benefit-cost analysis and input from our economists and methodologists. Application information assessed by our analysts included whether benefits and costs related to congestion mitigation, emissions reduction, and economic development were assessed qualitatively, assessed quantitatively, or monetized. In addition, analysts identified whether applications included information on a number of key methodological elements identified by OMB and in our prior work. Any discrepancies in findings by the two analysts were reconciled for the final assessment. We conducted case studies of selected policies and programs in the United Kingdom and Germany to learn more about policies to address concerns about emissions, congestion, and economic development. These two countries were chosen based on a number of criteria, including experience in implementing capacity-enhancing and demand-management policy tools to encourage mode shift to rail and attain potential benefits. We reviewed studies and reports on policy tools used in these countries and in the European Union. We interviewed officials from the United Kingdom’s Department for Transport and Germany’s Ministry of Transport, Building and Urban Development. In addition, we interviewed officials in the German Federal Ministry of Finance and Ministry for the Environment, Nature Conservation and Nuclear Safety, as well as the United Kingdom’s National Audit Office. We also met with representatives from rail industry organizations, rail companies, and stakeholder groups from these countries. For more information, see appendix II. We conducted our own simulation of transportation policy scenarios on mode choice for freight shipments.
Disaggregated data from the Freight Analysis Framework (FAF) were analyzed to obtain the distance traveled for shipments across commodity and truck types. These data from FAF, along with aggregated data on underlying assumptions, were then used as inputs into the Intermodal Transportation and Inventory Cost Model (ITIC). This model estimates mode choices for each shipment under baseline conditions and various policy scenarios. See appendix IV for additional discussion of the simulations. We reviewed technical documentation associated with both of these models. We also conducted interviews with officials at DOT to better understand any data limitations or reliability issues with the model and data inputs. For more information, see appendix IV. The United Kingdom’s Department for Transport sets the strategic direction for the railways, and Network Rail owns and operates Britain’s rail infrastructure. Network Rail is a private corporation run by a board of directors and composed of approximately 100 members—some rail industry stakeholders and some members of the general public. Freight and passenger operators pay access charges to Network Rail for access to the rail tracks. In the United Kingdom, freight and passenger rail share many of the same tracks. The system is open to competition through passenger rail franchises and through “open access” provisions for freight and other new passenger services. The Department for Transport’s current approach to transportation policy planning emphasizes the assessment of a range of options driven by the desire to push transportation as a means to improve general economic performance, as well as environmental and societal goals. The Department for Transport plans and develops freight and intercity passenger rail projects based on a 5-year planning cycle, referred to as a Control Period.
The last Control Period covering 2009-2014 resulted in plans to invest £6.6 billion (at 2010/2011 prices) in capacity enhancements for the passenger and freight rail system and strategic rail freight network. The 5-year cycle is intended to identify, develop, and prioritize policy interventions and investment decisions, reflecting the long-term nature of the transportation sector. The Department for Transport publishes High Level Output Specifications and Statements of Funds Available, reflecting what types of rail projects the government wants to buy based on the government’s transport goals and objectives and how much money it has to spend on those projects. Network Rail selects and implements projects to meet the High Level Output Specifications and outlines planned projects in a detailed delivery plan. All potential United Kingdom transportation projects are required to undergo standardized assessment processes to evaluate benefits and costs through the Web-based Transport Appraisal Guidance, which includes guidance on benefit-cost analysis for major transportation projects, including information on comparisons of proposed projects to alternatives, data sources for use in analyses, and methods for quantifying benefits and costs and performing sensitivity analysis. The Department for Transport has developed and implemented a range of policies to encourage a shift to rail transport. We explored some of these policies—in figure 5 below—during our site visits in the United Kingdom. Recent and planned high-speed rail projects (HS1 and HS2)—The Channel Tunnel Rail Link—referred to as HS1—is the United Kingdom portion of the route used by the Eurostar services from London to Paris and Brussels and was completed in 2007. The 109-kilometer Channel Tunnel Rail Link was the first major new railway to be constructed in the United Kingdom for over a century and the first high-speed railway. 
In 2009, the government began to develop plans for a new dedicated high-speed passenger rail line—HS2. The current government plans to begin a formal consultation process in 2011 and hopes to begin construction on the new high-speed line by 2015. Mode shift revenue support scheme—This program provides funding to companies for operating costs associated with shipping via rail or inland water freight instead of road. It is intended to facilitate and support modal shift, as well as to generate environmental and wider social benefits from having fewer freight shipments on Britain’s roads. Freight facilities grants—These grants provide support for freight infrastructure capital projects such as rail sidings or loading and unloading equipment. Funding is granted on the principle that if the facilities were not provided, the freight in question would go by road. Applicants must predict the type and quantity of goods that will use the proposed facility and demonstrate that the freight facility will secure the removal of freight trucks from specific routes. The program has been available since the 1970s, and it has a long history of providing funding for capital infrastructure. In Germany, the Federal Ministry of Transport, Building and Urban Development (Ministry of Transport) is responsible for financing the development and maintenance of the country’s intercity passenger and freight rail network. Germany has the largest rail network in Europe, and both the intercity passenger and freight rail systems are open to competition. The majority of the rail system in Germany is managed by a single infrastructure provider—Deutsche Bahn. The German government provides Deutsche Bahn with approximately €3.9 billion a year in investment grants for infrastructure renewal, upgrades, and new projects; freight and passenger operators pay access charges to Deutsche Bahn for access to the rail tracks.
In addition to serving as the railway infrastructure provider, Deutsche Bahn also provides much of the intercity passenger and freight logistics service in Germany. Passenger and freight rail usually share the same track in Germany, which, according to German transport officials, can enhance the efficiency of the network. However, sharing the same network also impacts the overall capacity available to accommodate new passenger or freight traffic. The Ministry of Transport develops a Federal Transport Infrastructure Master Plan approximately every 10 years to set the long-term strategic policy direction for both passenger and freight transportation. These infrastructure plans describe projects required to cope with the forecast traffic development. The goals and objectives of these long-term plans are then translated into 5-year plans—Federal Transport Infrastructure Action Plans—which are then used to develop new projects. After the Ministry determines short-term transportation priorities and develops action plans intended to align with long-term goals, all potential rail projects undergo standardized assessment processes to evaluate benefits and costs. As the primary infrastructure manager for the rail network in Germany, Deutsche Bahn maintains rail data sets that allow officials to generate consistent estimates of project benefits and costs with confidence, facilitated by centralized data collection. The rail infrastructure planning process is currently under way, and officials at the Ministry of Transport have just reviewed requirement plans for rail infrastructure projects—a process that occurs every 5 years—in order to complete and release an updated Action Plan. Germany’s Ministry of Transport has developed and implemented a range of policies that may encourage a shift to rail transport. We explored some of these policies—in figure 6 below—during our site visits in Germany.
Upgrade and maintain the rail network—The German government has committed to investing annually in projects to upgrade and renew the existing high-speed and passenger rail network. Each year, the German government invests approximately €3.9 billion to renew the existing rail infrastructure and to construct, upgrade, or extend rail infrastructure. Vehicle mineral oil (fuel) tax—Between 1999 and 2003, the German government implemented routine, annual increases in the vehicle fuel tax for the explicit purpose of curbing car use and promoting the purchase of more fuel-efficient vehicles. Diesel is now taxed at approximately 47 euro cents a liter, and gas is taxed at 65 euro cents a liter, generating approximately €39 billion in revenue in 2009 for the general tax fund. Heavy Goods Vehicle (HGV) tolls—Germany implemented a distance-based HGV toll in 2005, in part to support an explicit goal of shifting a portion of freight traffic to rail. The policy generated approximately €4.4 billion in revenue in 2009, which was primarily used to maintain and upgrade the road network. This policy was viewed as imposing additional costs on the business community, and the new government has said it will not raise the toll rates or expand the tax to passenger vehicles in this legislative period. The American Recovery and Reinvestment Act of 2009 (Recovery Act) provided $8 billion to develop high-speed and intercity passenger rail service, funding programs authorized by the Passenger Rail Investment and Improvement Act (PRIIA), which was enacted in October 2008. This funding is significantly more than Congress has provided for rail in recent years. The Federal Railroad Administration (FRA) launched the high-speed and intercity passenger rail (HSIPR) program in June 2009 with the issuance of a notice of funding availability and interim program guidance, which outlined the requirements and procedures for obtaining federal funds.
In January 2010, FRA announced the selection of 62 projects in 23 states and the District of Columbia. FRA allowed applicants to the HSIPR program to submit applications to be evaluated under four funding tracks. See table 4 below. Applications were evaluated by technical evaluation panels against three categories of criteria: (1) public return on investment across categories of benefits, including transportation benefits, economic recovery benefits, and other public benefits; (2) project success factors, such as project management approach and sustainability of benefits, as assessed by adequacy of engineering, proposed project schedule, National Environmental Policy Act compliance, and thoroughness of management plan; and (3) other attributes, such as timeliness of project completion. Projects were rated on a scale of 1 point to 5 points, with 1 point being the lowest and 5 points being the highest, based on the fulfillment of objectives for each separate criterion. Using the best available tools, applicants were required to include benefit and cost information for the following three general categories of benefits: Transportation benefits, which include improved intercity passenger service, improved transportation network integration, and safety benefits; Economic recovery, which includes preserving and creating jobs (particularly in economically distressed areas); and Other public benefits, such as environmental quality, energy efficiency, and livable communities.
Final project selections were made by the FRA Administrator building upon the work of the technical evaluation panels and applying four selection criteria specified in the Federal Register notice: (1) region/location, including regional balance across the country and balance among large and small population centers; (2) innovation, including pursuit of new technology and promotion of domestic manufacturing; (3) partnerships, including multistate agreements; and (4) tracks and round timing, including project schedules and costs. The Recovery Act also appropriated $1.5 billion for discretionary grants to be administered by DOT for capital investments in the nation’s surface transportation infrastructure. These grants were available on a competitive basis to fund transportation projects that would preserve and create jobs and provide long-term benefits, as well as incorporate innovation and promote public-private or other partnership approaches. In making awards, the legislation required DOT to address several statutory priorities, including achieving an equitable geographic distribution of the funds, balancing the needs of urban and rural communities, and prioritizing projects for which a TIGER grant would complete a package of funding, among others. In December 2009, Congress appropriated $600 million to DOT for a “TIGER II” discretionary grant program, which was similar in structure and objectives to the TIGER program. Eligible projects included highway or bridge projects, public transportation, passenger and freight rail projects, and port infrastructure projects. The TIGER program established three categories of project applications based on the amount of federal funding sought and three sets of criteria to determine grant awards in each project application category: Primary selection criteria: Long-term outcomes, such as state of good repair, evidence of long-term benefits, livability, sustainability, safety, and job creation and economic stimulus.
Secondary selection criteria: Priority to projects that use innovative strategies to pursue long-term outcomes and those that demonstrate strong collaboration among a broad range of participants. Secondary selection criteria were weighted less than primary selection criteria in the application review process. Program-specific criteria: Program-specific information was used as a tie breaker to differentiate between similar projects. This information was applied only to projects in the following categories: bridge replacement, transit projects, TIGER-TIFIA payment projects, and port infrastructure projects. In general, quantifying benefits that may be attained through rail can be challenging, in part because of data limitations. In order both to estimate the extent to which freight shipments might be diverted from truck to rail under various scenarios and to identify challenges related to making such estimates, we conducted simulations using a computer model developed by DOT. We sought to estimate the number of diverted truck freight shipments under scenarios that increased the price or decreased the speed of freight shipments by truck as compared with rail. The Intermodal Transportation and Inventory Cost (ITIC) model is a computer model for calculating the costs associated with shipping freight via alternative modes, namely truck and rail. The model can be used to perform policy analysis of issues concerning long-haul freight movement, such as diversion of freight shipments from truck to rail. DOT provides the ITIC model framework as a useful tool for ongoing policy studies and shares the model, along with some internally developed data, for this purpose. We chose to use the ITIC model to simulate mode shift from truck to rail because of its federal origins and its direct applicability to freight shipments.
The ITIC model—of which we used the highway freight to rail intermodal version—predicts diversion from truck to rail by assuming that shippers will select the mode of transportation with lower total shipment cost. The model replicates the decision-making trade-offs made by shippers in selecting which transportation mode to use for freight shipments. The model estimates the total cost—including both transportation and logistics costs—required to ship freight by both truck and rail for a given type of commodity and a given county-to-county route. Transportation costs include the costs associated with the actual movement of commodities, such as loading and unloading freight, and logistics costs represent a range of other costs, such as loss and damage of the freight, safety stock carrying cost, and capital cost on claims (see fig. 8 for the components of these costs). In order to estimate diversions of freight shipments from truck to rail, the ITIC model runs in two steps. First, the model establishes a baseline that can be used for comparison against each of the simulated scenarios. To do this, the ITIC model requires input data on actual truck freight shipments that it uses to calculate total cost to ship each type of commodity for each county-to-county pair for both truck and rail. After generating a base case, diversion of freight from truck to rail can be estimated for various scenarios by changing the input assumptions to the model. As these assumptions are changed, the model reestimates the transportation and logistics costs for both truck and rail and determines whether these estimated changes have made rail a lower cost option for any of the shipments that were originally sent by truck. The model assumes that shipments will switch from truck to rail if the total cost for making a shipment by rail is lower than the total costs for making a shipment by truck. 
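The two-step comparison described above can be sketched in a few lines. The code below is a deliberately simplified illustration of the diversion logic, not the ITIC model itself; the per-mile rates, logistics costs, and shipments are all hypothetical.

```python
# Simplified sketch of the baseline-vs-scenario diversion logic: compute
# total (transportation + logistics) cost per shipment for truck and
# rail, then count how many truck shipments would switch to rail under a
# policy scenario that raises truck costs. All values are hypothetical.

def total_cost(miles, per_mile_rate, logistics_cost):
    """Total shipment cost: line-haul transportation plus logistics."""
    return miles * per_mile_rate + logistics_cost

def diverted_shipments(shipments, truck_rate, rail_rate):
    """Return the shipments for which rail is the lower-cost mode."""
    diverted = []
    for s in shipments:
        truck = total_cost(s["miles"], truck_rate, s["truck_logistics"])
        rail = total_cost(s["miles"], rail_rate, s["rail_logistics"])
        if rail < truck:  # shipper assumed to choose the cheaper mode
            diverted.append(s)
    return diverted

# Hypothetical shipments: rail has a lower line-haul rate but higher
# logistics costs (drayage, loss and damage, inventory carrying).
shipments = [
    {"id": 1, "miles": 600, "truck_logistics": 200, "rail_logistics": 900},
    {"id": 2, "miles": 1200, "truck_logistics": 200, "rail_logistics": 900},
]

# Step 1: baseline. Step 2: scenario with a higher truck rate (e.g., a toll).
baseline = diverted_shipments(shipments, truck_rate=2.00, rail_rate=1.20)
scenario = diverted_shipments(shipments, truck_rate=2.50, rail_rate=1.20)
print(len(baseline), len(scenario))  # prints: 1 2
```

In this toy example, only the long-haul shipment favors rail at baseline; raising the truck rate tips the shorter shipment to rail as well, mirroring how the model reestimates costs under each scenario and flags shipments whose lowest-cost mode changes.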
A lack of reliable data for a number of major ITIC model inputs at the national level prevented us from fully assessing the uncertainty associated with estimates of freight diversion from truck to rail. As a result, we are unable to report on the confidence levels of the results of our simulations. The ITIC model is based on 26 inputs (see table 6 for a complete list of ITIC model inputs). For our national analysis, empirical data were available for 9 of the inputs; accordingly, we had to rely on the preprogrammed model assumptions for the remaining 17 inputs. Using these 26 inputs, the model made 24 calculations (see table 7 for full list of ITIC model calculations), 22 of which relied on at least one of the model’s 17 default assumptions (see table 5 below). To determine whether the available data and model assumptions were reliable for our purposes, we considered some important factors for assessing data reliability, including their relevance, completeness, accuracy, validity, and consistency. We found that the data and the basis for assumptions used in the ITIC model vary in terms of the following factors. Relevance: The 26 ITIC model inputs are relevant for the purposes of determining total transportation and logistics costs. These inputs have been shown to be conceptually important because they reflect economic theory underlying shipper choices, include a range of factors specified in the literature on freight shipments, and provide default assumptions based on theory and professional expertise. Completeness: Completeness refers to the extent that relevant records are present and the fields in each record are populated appropriately. We were unable to obtain complete national data for 20 ITIC model inputs. Of these 20 inputs, partial data were available for 3. For the remaining 17 inputs, we were unable to obtain any empirical data and consequently relied on the default assumptions that are provided in the model itself. 
However, without a reliable source of available data against which to judge the accuracy and validity of these assumed values, we could not determine how much uncertainty the assumptions added to any estimates produced by the model. Accuracy: Accuracy refers to the extent that recorded data reflect the actual underlying information. Of the 26 ITIC model inputs, we were unable to verify the accuracy for 20, including all 17 assumptions, as well as available truck rate data and 2 inputs (weight per cubic foot and value per pound of each commodity group) provided by FRA. FRA officials stated that they originally generated these input values using empirical data, but were unable to provide documentation of their analysis. We were therefore unable to judge the accuracy of the resulting data, or the level of uncertainty associated with estimates produced from FRA’s data. Validity: Validity refers to an input correctly representing what it is supposed to measure. Of the 26 ITIC model inputs, we were unable to verify the validity for 18, including all 17 default assumptions and available truck rate data. For the latter, we used the source of data previously used by the Federal Highway Administration, a proprietary collection of truck rates from 2006 for 120 city pairs. Documentation of the collection methods was unavailable, and we were not able to validate or assess the data for reliability, and thus could not estimate the uncertainty associated with per-mile truck rates. Because this value is a primary driver of total transportation and logistics costs, the uncertain reliability of truck rate data was a major limitation to using the model’s estimates. Consistency: Consistency is a subcategory of accuracy and refers to the need to obtain and use data that are clear and well defined enough to yield similar results in similar analyses. Of the 26 ITIC model inputs, we identified consistency issues for 7 data inputs. 
For example, truck rate data were collected in 2006, and data on truck shipments were from 2002, making it problematic to compare these figures. For the other 6 inputs, we encountered different levels of data aggregation for data that we had otherwise deemed reliable. For example, the FAF collects regional data, while the FRA lookup tables for certain truck and rail origin and destination miles are collected at a county level. In order to use both sources of data, the FAF data had to be disaggregated for use at the county level, and our disaggregation method adds additional uncertainty to our estimates.

In order to better understand the impact of uncertainty in the ITIC model’s estimates caused by use of assumptions and data of questionable reliability, we examined how the model’s estimates change when key underlying assumptions were varied. In particular, we used the model to simulate the impact that a 50-cent increase in per-mile truck rates would have on vehicle miles traveled (VMT) under two scenarios: the first scenario uses the model’s default values for all assumptions, including truck speeds of 50 miles per hour, freight loss and damage as a percentage of gross revenue equal to 0.07 percent, and a reliability factor equal to 0.4; the second scenario changes these three assumptions to respective values of 40 miles per hour, 0.10 percent freight loss and damage, and reliability factor equal to 0.5. Each of these changes creates a higher total cost for trucks, potentially leading the model to predict some additional diversion to rail. However, for these sensitivity analyses, we are more concerned with the impact of changing truck rates under the alternative scenarios than we are with the individual impacts of changing assumptions. For a 50-cent increase (approximately 30 percent of per-mile truck rates) in the first scenario, the model estimates a reduction in VMT of about 1.02 percent. 
For the same reduction in rates in the second scenario, the model estimates a reduction in VMT of about 1.04 percent. Figure 7 shows the estimated percentage reduction in VMT associated with increased per-mile truck rates for the two scenarios. Under either scenario, the impact of increasing per-mile truck rates by approximately 30 percent results in decreases of roughly 1 percent of VMT. This result suggests that we can have some degree of confidence that the model will consistently predict that changing per-mile truck rates will have a minor impact on total VMT traveled. In spite of the results of our two scenarios, the estimates of VMT diversion based on the ITIC model are still subject to limitations. As a result, these estimates are only suggestive, rather than conclusive, of the impact that an increase in per-mile truck rates might have on VMT reduction in actual policy scenarios. First, the issues of completeness, accuracy, validity, and consistency of our data negatively impact their reliability and increase the uncertainty of our estimates. Second, because of resource constraints, our analysis only varies 3 of the 17 default ITIC model assumptions and considers only one change in these values, instead of varying a larger number of assumptions for a wider range of scenarios (see table 6 for a full list of assumptions). Therefore, we cannot conclude that the model results are robust to all plausible variations in all of the model assumptions. Therefore, while the results of our simulation suggest that a 50-cent increase in per-mile truck rates would have a limited impact on diversion of freight from truck to rail in the short-term, we do not have enough confidence in the quality of data inputs to make precise predictions that would be reliable enough to inform policymaking decisions. Reliable data for model inputs would be necessary in order to produce estimates of changes in VMT with confidence. 
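The arithmetic behind this scenario comparison can be reproduced directly. The sketch below is illustrative: the base per-mile rate is inferred from the statement that 50 cents is approximately 30 percent of per-mile truck rates, and is not a value drawn from the model's data.

```python
# Back-of-the-envelope check on the sensitivity results reported above.
# The base per-mile rate is inferred (50 cents = ~30 percent of the rate),
# not taken from the ITIC model's inputs.
base_rate = 0.50 / 0.30                 # implies roughly $1.67 per mile
increase_pct = 0.50 / base_rate * 100
print(f"rate increase: {increase_pct:.0f} percent")

# Estimated VMT reductions (in percent) under the two assumption scenarios.
vmt_reduction = {"default assumptions": 1.02, "alternative assumptions": 1.04}
spread = max(vmt_reduction.values()) - min(vmt_reduction.values())
print(f"spread between scenarios: {spread:.2f} percentage points")
```

The narrow spread between the two scenario results (about 0.02 percentage points) is what supports the observation that the model consistently predicts a minor, roughly 1 percent, impact on total VMT.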
Sufficiently reliable data were not readily available for producing national estimates of mode shift under specific policy scenarios. As a result, it was necessary to rely on assumptions and data of undetermined reliability when conducting national simulations, which may result in unreliable estimates of freight diversion and an inability to fully quantify the uncertainty of the estimates produced. Our simulations suggest that a large increase (approximately 30 percent) in per-mile truck rates could result in a relatively small (approximately 1 percent) decrease in VMT, even when multiple assumptions related to truck freight cost are changed. Despite this, limitations in the reliability of our data and ability to conduct further sensitivity analyses reduce our confidence in these estimates. While reliable data may be available at state and local levels for use in simulations of mode shift, the importance of communicating the uncertainty underlying projections to decision makers remains. Assessments of data reliability and assumptions, along with quantification of uncertainty, are necessary to enable the comparison of the risk of inaccurate results against the potential value of the estimates produced and would improve decision makers’ ability to reliably interpret these estimates and compare estimates across projects. In order to accomplish this and produce reliable estimates of freight diversion and uncertainty at the national level, it would be necessary to obtain complete, accurate, and valid data that are collected consistently for the model’s relevant inputs.

In addition to the individual named above, Andrew Von Ah, Assistant Director; Mark Braza; Caroline Epley; Tim Guinane; Bert Japikse; Delwen Jones; Brooke Leary; Steven Putansu; Max Sawicky; Sharon Silas; and Maria Wallace made key contributions to this report.
Concerns about the weak economy, congestion in the transportation system, and the potentially harmful effects of air emissions generated by the transportation sector have raised awareness of the potential benefits and costs of intercity passenger and freight rail relative to other transportation modes such as highways. GAO was asked to review (1) the extent to which transportation policy tools that provide incentives to shift passenger and freight traffic to rail may generate emissions, congestion, and economic development benefits and (2) how project benefits and costs are assessed for investment in intercity passenger and freight rail and how the strengths and limitations of these assessments impact federal decision making. GAO reviewed studies; interviewed federal, state, local, and other stakeholders regarding methods to assess benefit and cost information; assessed information on project benefits and costs included in rail grant applications; and conducted case studies of selected policies and programs in the United Kingdom and Germany to learn more about their policies designed to provide incentives to shift traffic to rail.

Although implementing policies designed to shift traffic to rail from other modes may generate benefits, and selected European countries' experiences suggest that some benefits can be achieved through these types of policies, many factors will affect whether traffic shifts. The extent to which rail can generate sufficient demand to draw traffic from other modes to achieve the desired level of net benefits will depend on numerous factors. Some passenger or freight traffic may not be substitutable or practical to move by a different mode. For example, certain freight shipments may be time-sensitive and thus cannot go by rail. Another key factor will be the extent to which sufficient capacity exists or is being planned to accommodate shifts in traffic from other modes. 
How transport markets respond to a given policy--such as one that changes the relative price of road transport--will also affect the level of benefits generated by that policy. Experiences in selected countries suggest that varying amounts of mode shift and some benefits were attained where decision makers implemented policies to move traffic from other modes to rail. For example, a road freight pricing policy in Germany resulted in environmental and efficiency improvements, and freight rail grants in the United Kingdom led to congestion relief at the country's largest port. Pursuing policies to encourage traffic to shift to rail is one potential way to generate benefits, and other policies may be implemented to generate specific benefits at a lower cost. Information on the benefits and costs of intercity passenger and freight rail is assessed to varying degrees by those seeking federal funding for investment in rail projects; however, data limitations and other factors reduce the usefulness of such assessments for federal decision makers. Applicants to two discretionary federal grant programs--the Transportation Investment Generating Economic Recovery program and the High-Speed Intercity Passenger Rail program--provided assessments of potential project benefits and costs that were generally not comprehensive. For instance, applications varied widely in the extent to which they quantified and monetized some categories of benefits. In addition, GAO's assessment of selected applications found that most applicants did not provide key information recommended in federal guidance for such assessments, including information related to uncertainty in projections, data limitations, or the assumptions underlying their models. 
Applicants, industry experts, and Department of Transportation (DOT) officials GAO spoke with reported that many challenges impacted their ability to produce useful assessments of project benefits and costs, including short time frames in which to prepare the assessments, limited resources and expertise for performing assessments, poor data quality, lack of access to data, and lack of standard values for monetizing some benefits. As a result, while information on project benefits and costs was considered as one of many factors in the decision-making process, according to DOT officials, the varying quality and focus of assessments resulted in additional work, and the information provided was of limited usefulness to DOT decision makers.

GAO recommends that DOT conduct a data needs assessment to improve the effectiveness of modeling and analysis for rail and provide consistent requirements for assessing rail project benefits and costs. DOT, Amtrak, and EPA provided technical comments, and DOT agreed to consider the recommendations.
JSF is a joint, multinational acquisition program for the Air Force, Navy, Marine Corps, and eight cooperative international partners. The program began in November 1996 with a 5-year competition between Lockheed Martin and Boeing to determine the most capable and affordable preliminary aircraft design. Lockheed Martin won the competition, and the program entered system development and demonstration in October 2001. The program’s objective is to develop and deploy a technically superior and affordable fleet of aircraft that support the warfighter in performing a wide range of missions in a variety of theaters. The single-seat, single-engine aircraft is being designed to be self-sufficient or part of a multisystem and multiservice operation, and to rapidly transition between air-to-surface and air-to-air missions while still airborne. To achieve its mission, the JSF will incorporate low observable technologies, defensive avionics, advanced onboard and offboard sensor fusion, and internal and external weapons. The JSF aircraft design has three variants: conventional takeoff and landing variant for the Air Force, aircraft carrier-suitable variant for the Navy, and short takeoff and vertical landing variant for the Marine Corps, the United Kingdom, and the Air Force. These aircraft are intended to replace aging fighter and attack aircraft currently in the inventory (see table 1). In 2004, DOD extended the JSF program schedule to address problems discovered during systems integration and the preliminary design review. Design efforts revealed significant airframe weight problems that affected the aircraft’s ability to meet key performance requirements. Software development and integration also posed a significant development challenge. 
Program officials delayed the critical design reviews, first flights of development aircraft, and the low-rate initial production decision to allow more time to mitigate design risk and gather more knowledge before continuing to make major investments. As a result, the initial operational capability date was delayed. DOD is in the process of reestablishing resource levels needed to deliver capabilities, given current and expected future conditions. The new business case will be presented to the Office of the Secretary of Defense (OSD) decision makers this spring. A key to successful product development is the formulation of a business case that matches requirements with resources—proven technologies, sufficient engineering capabilities, time, and funding—when undertaking a new product development. First, the user’s needs must be accurately defined, alternative approaches to satisfying these needs properly analyzed, and quantities needed for the chosen system must be well understood. The developed product must be producible at a cost that matches the users’ expectations and budgetary resources. Finally, the developer must have the resources to design and deliver the product with the features that the customer wants and to deliver it when it is needed. If the financial, material, and intellectual resources to develop the product are not available, development does not go forward. If the business case measures up, the organization commits to the development of the product, including the financial investment. This calls for a realistic assessment of risks and costs; doing otherwise undermines the intent of the business case and invites failure. Program managers in organizations employing best practices are incentivized to identify risk early, be intolerant of unknowns, and be conservative in their estimates. Ultimately, preserving the business case strengthens the ability of managers to say no to pressures to accept high risks or unknowns. 
A key objective of the JSF acquisition program is to develop and produce fighter aircraft with greater capabilities and lower acquisition and ownership costs than previous fighter aircraft and to deliver the aircraft in time to replace DOD’s aging fleet. However, since the program began in 1996, several program decisions have resulted in increased program costs, reduced procurement quantities, and delayed delivery dates—making the original business case unexecutable. Continued program uncertainties about the aircraft redesign, software development, flight test program, and procurement quantities make it difficult to estimate the total amount of resources needed. Given the uncertainties, the program needs more time to gain knowledge before committing to a new, more accurate business case. The current pause to replan JSF development and production provides the program this opportunity. Finally, frequent changes in JSF program management, if continued, will compromise efforts to execute the business case agreements. Several significant changes to the JSF acquisition program have made DOD’s original business case unexecutable. Purchase quantities have been reduced by more than 500 aircraft, total program costs have increased by about $12 billion, and delivery of the aircraft has been delayed by about 2 years (see table 2 and app. IV for more details). These changes have effectively reduced DOD’s buying power for its investment, as it now plans to buy fewer aircraft with a greater financial investment. The JSF acquisition program’s estimated development and procurement costs have increased. In addition, the number of aircraft it plans to deliver has been reduced. As a result, unit costs for the JSF aircraft have increased substantially, thereby reducing the program’s buying power. 
The most significant quantity reduction occurred after system development began in 2001, when the program reduced the number of aircraft it plans to procure from 2,852 to 2,443, or by 14 percent. The Navy—concerned that it could not afford the number of tactical aircraft it planned to purchase—reduced the number of JSF aircraft for joint Navy and Marine Corps operations from 1,089 to 680 by reducing the number of backup aircraft needed. However, the Navy has not indicated to the developer the exact mix of the carrier and short takeoff and vertical landing variants it intends to purchase. The cost estimate to fully develop the JSF has increased by over 80 percent. DOD expected that by using a joint development program for the three variants instead of three separate programs, JSF development costs could be cut by about 40 percent. However, cost increases have nearly eroded all of the estimated savings. Development costs were originally estimated at $24.8 billion. By the 2001 system development decision, these costs had increased by $9.6 billion largely because of a 36-month schedule extension to allow more time to mature the mission systems and a more mature cost estimate. By 2004, costs increased an additional $10.4 billion to $44.8 billion. The program office cited several reasons, including efforts to achieve greater international commonality, optimize engine interchangeability, refine the estimating methodology, and extend the schedule for unexpected design work. Almost half of this increase, $4.9 billion, was a result of an approximately 18-month delay for unexpected design work caused by increased aircraft weight that degraded the aircraft’s key performance capabilities. Figure 1 compares the original and latest development cost estimates. Current estimates for the program acquisition unit cost are about $100 million, and the total estimated cost to own an aircraft over its life cycle is $240 million—an increase of 23 percent and 11 percent, respectively. 
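The cost and quantity figures cited in this section reconcile with simple arithmetic. The following worked check is illustrative, using the dollar amounts (in billions) and aircraft counts reported above:

```python
# Worked check of the development cost and procurement figures cited above.
original_dev = 24.8              # 1996 development cost estimate, $ billions
dev_2001 = original_dev + 9.6    # increase by the 2001 development decision
dev_2004 = dev_2001 + 10.4       # further increase reported by 2004
assert abs(dev_2004 - 44.8) < 1e-6   # matches the reported $44.8 billion

growth = (dev_2004 - original_dev) / original_dev * 100
print(f"development cost growth: {growth:.1f} percent")   # about 80.6

# Quantity reduction after the 2001 system development decision.
quantity_cut = (2852 - 2443) / 2852 * 100
print(f"procurement quantity reduction: {quantity_cut:.0f} percent")  # about 14
```

The roughly 80.6 percent growth is the basis for the "over 80 percent" figure, and the 409-aircraft cut corresponds to the reported 14 percent reduction.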
In 1996, the program established unit flyaway cost goals for each variant, expecting the variants to have a high degree of commonality and to be built on a common production line. However, commonality among the variants has decreased, and the cost to produce the aircraft has increased (see table 3). The unit flyaway cost for the conventional takeoff and landing variant has increased by 42 percent; the cost for the short takeoff and vertical landing variant has increased by a range of 37 to 55 percent; and the cost for the carrier variant has increased by a range of 29 to 43 percent. According to program data, a large part of the cost increase since the start of development can be attributed to labor costs for building the airframe and to the costs for producing the complex mission systems. With reduced quantities and increased program costs, the JSF program is now buying fewer aircraft at a higher cost, thereby reducing the program’s buying power. How effectively DOD manages its JSF funds will determine whether it receives a good return on its investment. A sound and executable business case is needed to effectively do this. Our reviews over the past 20 years have consistently found that DOD’s weapon system acquisitions take much longer and cost more than originally planned, causing disruptions and increasing pressures to make unplanned trade-offs to accommodate the resulting budget needs. The timely delivery of the JSF to replace aging legacy aircraft was cited as a critical need by the warfighter at the program start. When the program was initiated, in 1996, it planned to deliver initial operational capabilities to the warfighter in 2010. However, largely because of technical challenges, the program has delayed the delivery of operational aircraft, and current estimates put delivery at 2012 to 2013. Because of these delays, the services may have to operate legacy aircraft longer than expected. 
These challenges have also delayed interim milestones such as the start of system development, design reviews, and production decisions. Figure 2 illustrates changes to the overall program schedule since it began in 1996 through 2004. The full impact on costs, schedules, and aircraft performance brought about by recent design changes and aggressive software development and flight test programs adds risks that may not be fully understood for some time. Continuing uncertainties about total quantities and types of the three JSF variants that the services and the international partners expect to purchase in the future also make it difficult to accurately estimate costs and schedules. In December 2003, DOD estimated program costs based on a notional idea of a restructured program. The cost estimates not only lacked detail but were based on a different aircraft design, development schedule, and procurement plan than what is now being considered. Over the past year, DOD has been working to restructure the JSF program to accommodate changes in the aircraft’s design; until this restructuring is completed, it will be difficult to accurately estimate program costs. The need for design changes largely resulted from the increased weight of the short takeoff and vertical landing variant and the impact it was having on key performance parameters. The other JSF variants’ designs were affected as well. The program plans to have a more comprehensive cost estimate in the spring of 2005. However, a detailed assessment has not been conducted to determine the exact impact that the restructured program will have on meeting performance specifications. Until the detailed design efforts are complete—after the critical design review in February 2006—the program will have difficulty assessing the impact of the design changes on performance. 
While the program office anticipates that recent design changes will allow the aircraft to meet key performance parameters, preliminary program data indicate that the design is still not meeting several speed, maneuverability, and radar cross section specifications. In addition, program officials noted that they will not know with certainty if the weight problems have been resolved until after the plane is manufactured and weighed in mid-2007. Program officials recognize that JSF’s development schedule is aggressive and are examining ways to reduce program requirements while keeping costs and schedules constant. Design and software teams have found greater complexity and less efficiency as they develop the 17 million lines of software needed for the system. Program analysis also indicated that some aircraft capabilities will have to be deferred to stay within cost and schedule constraints. As a result, the program office is working with the warfighters to determine what capabilities could be deferred to later in the development program or to follow on development efforts while still meeting the warfighter’s basic needs. Many of these capabilities are related to the software-intensive mission systems suite. They are also examining the content and schedule of the planned 7-year, 10,000-hour flight test program. According to the program office, the test program was already considered aggressive, and recent program changes have only increased the risks of completing it on time. Continued uncertainty about the number and mix of variants the services plan to purchase also affects JSF’s acquisition plans. While the Air Force has announced its intention to acquire the short takeoff and vertical landing variant, it has yet to announce when or how many it expects to buy or how this purchase will affect the quantity of the conventional takeoff and landing variant it plans to buy. 
DOD’s 2003 acquisition report states that the annual total quantity and mix of JSF variants and their related procurement costs for Navy and Marine Corps JSF purchases remains to be determined. Foreign partners have expressed intent to buy about 700 aircraft between 2012 and 2015, but no formal agreements have been signed at this time. The upcoming 2005 Quadrennial Defense Review—an examination of U.S. defense needs conducted every 4 years—could also affect the procurement quantities and schedule. Since the JSF program began, a little over 8 years ago, the program has had five program managers—a new program manager assigned about every 2 years. The development program is estimated to last another 9 years, and it is likely that the program manager currently involved in decisions about key program elements such as design, cost, and schedule will not be responsible for seeing JSF through its completion. In other words, plans accepted now will likely become the responsibility of future program managers. Leading commercial firms limit product development cycle times, thereby increasing the possibility that program managers will remain on programs until they are complete. Holding one program manager accountable for the content of the program when key decisions are made encourages that person to raise issues and problems early and realistically estimate the resources needed to deliver the program. This puts the manager in a good position to deliver a high-quality product on time and within budget. We note that the law governing the defense acquisition workforce recognizes the need for long-term assignments in the performance of the program manager function. Specifically, the assignment period for program managers is required to be at least until completion of the major milestone that occurs closest in time to the date on which the manager has served in the position for 4 years. 
The JSF program does not have an evolutionary, knowledge-based acquisition strategy that fully follows the intent of DOD’s acquisition policy. This type of strategy is necessary for having an executable business case in the future. The current strategy includes plans to make large production commitments well before system development and testing have been completed, significantly increasing the risk of further delays and cost increases due to design changes and manufacturing inefficiencies. It is also dependent on an aggressive test aircraft delivery schedule and an optimistic funding profile that assumes an unprecedented $225 billion over the next 22 years, or an average of $10 billion a year. DOD plans to bear the financial risk of concurrently developing and initially producing the JSF on a cost reimbursement basis with the prime contractor, an uncommon practice for such a large number of units, until the design and manufacturing processes are mature. Program officials currently have an opportunity to change the acquisition strategy. DOD policy and best practices call for programs to use an acquisition strategy that reflects an evolutionary, knowledge-based approach—that is, one that ensures appropriate technology, design, and manufacturing knowledge are captured at key milestones before committing to increased investments. Our past work has shown that when programs demonstrate a high level of knowledge before making significant commitments, they are able to deliver products within identified resources. In recent years, DOD has revised its acquisition policy to support an evolutionary, knowledge-based approach for acquiring major weapon systems based on best practices. JSF’s acquisition strategy does not fully follow the intent of this policy. Instead, it strives to achieve the ultimate JSF capability within a single product development increment. 
While the acquisition strategy calls for delivering a small number of aircraft with limited capabilities, the program has committed to deliver the full capability by the end of system development and demonstration in 2013 within an established cost and schedule, contrary to an evolutionary approach. The JSF program bypassed early opportunities to trade or defer to later increments those features and capabilities that could not be readily met. The planned approach will not capture adequate knowledge about technologies, design, and manufacturing processes for investment decisions at key investment junctures. Figure 3 shows a comparison of an evolutionary, knowledge-based process based on best practices and JSF’s more concurrent approach. Successful commercial companies use an evolutionary acquisition approach where new products are developed in increments based on available resources. Companies have found that trying to capture the knowledge required to stabilize the design of a product that requires significant amounts of new content is an unmanageable task if the goal is to reduce cycle times and get the product to the customer as quickly as possible. With an evolutionary acquisition approach, design elements that are not currently achievable are planned for and managed as increments in future generations of the product, and each increment is managed as a separate knowledge-based acquisition, with separate milestones, costs, and schedules. Programs that attain the right knowledge at the right time reduce the risk of incurring design, development, and manufacturing problems that result in cost and schedule overruns. Our past work has shown that to ensure successful program outcomes, a high level of demonstrated knowledge must be attained at three key junctures for each increment in the program. 
At knowledge point 1, the customer’s needs should match the developer’s available resources—mature technologies, engineering knowledge, time, and funding—before system development starts. This is indicated by a demonstration that technologies needed to meet essential product requirements work in their intended environment and the producer has completed a preliminary design of the product that shows that the design is feasible. At knowledge point 2, the product’s design is stable and has demonstrated that it is capable of meeting performance requirements before transitioning from system integration to system demonstration. This is best indicated by a prototype demonstration of the design and release of 90 percent of the engineering drawings to manufacturing organizations. At knowledge point 3, the product must be producible within cost, schedule, and quality targets and demonstrated to be reliable and work as intended before production begins. This is indicated by a demonstration of an integrated product in its intended environment and by bringing critical manufacturing processes under statistical control. The start of the JSF system development was approved in 2001—well before a match was made between the customer’s requirements and the resources needed to meet those requirements. Many of the technologies needed for the product’s full capabilities were demonstrated only in a lab environment or ground testing and not in the form, fit, or functionality needed for the intended product design. Also, while the program had a proposed technical solution to meet the warfighter’s requirements, it did not deliver a preliminary design based on sound systems engineering principles. At the JSF preliminary design review, held about 1½ years after development started, significant design issues surfaced, potentially affecting the critical performance capabilities of the aircraft. The program has worked to find solutions to design problems, but at a substantial cost. 
The detailed design work has fallen behind schedule, delaying the critical design reviews by 16 to 22 months. Table 4 compares the product knowledge available at the JSF system development start and the knowledge expected to be available to support future decision points based on the current acquisition plan. Knowing that a product’s design is stable before system demonstration reduces the risk of costly design changes occurring during the manufacturing of production representative prototypes—when investments in acquisitions become even more significant. The JSF program expects to have all critical drawings and a small number of other drawings completed by the planned February 2006 critical design review—the milestone at which design stability is determined. However, these drawings represent only about 35 percent of the total drawings needed to complete the JSF design. While program officials believe that having 35 percent of the total drawings will allow them to track JSF’s design stability, we have found that programs that moved forward with less than 90 percent of the total drawings at the start of the product demonstration phase were challenged to stabilize the design at the same time they were trying to build and test the product. This overlap frequently results in costly design changes and parts shortages during manufacturing, which, in turn, result in labor inefficiencies, schedule delays, and quality problems. The F/A-22 and PAC-3 missile are prime examples of programs that failed to complete 90 percent of their drawings by the critical design review and suffered substantial cost increases and schedule delays. Using prototypes to demonstrate the design is a best practice that provides additional evidence of design stability. JSF will not have this type of demonstration before the critical design review.
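The knowledge-point criteria discussed in this report can be expressed as simple pass/fail checks. The sketch below is illustrative; the function and parameter names are ours, and only the thresholds (such as 90 percent of drawings released) come from the best-practice standards cited above.

```python
# The three knowledge points described in this report, expressed as simple
# pass/fail checks. Function and parameter names are ours; only the
# thresholds (e.g., 90 percent of drawings released) come from the
# best-practice standards cited above.

def knowledge_point_1(tech_demonstrated_in_intended_env, preliminary_design_feasible):
    """KP1: technologies mature and preliminary design feasible before
    system development starts."""
    return tech_demonstrated_in_intended_env and preliminary_design_feasible

def knowledge_point_2(drawings_released_pct, prototype_demonstrated):
    """KP2: design stable before moving from integration to demonstration."""
    return drawings_released_pct >= 90 and prototype_demonstrated

def knowledge_point_3(integrated_product_demonstrated, processes_in_statistical_control):
    """KP3: product producible and demonstrated to work before production."""
    return integrated_product_demonstrated and processes_in_statistical_control

# JSF status at the planned February 2006 critical design review, per this
# report: about 35 percent of total drawings and no prototype demonstration.
print(knowledge_point_2(drawings_released_pct=35, prototype_demonstrated=False))  # False
```

Under these criteria, the program's planned design review would not satisfy knowledge point 2, which is the core of the report's concern about design stability.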
Prototype testing allows the design to be demonstrated before making costly investments in materials, manufacturing equipment, and personnel to begin building production representative prototypes for the system demonstration phase. The JSF program is building an early prototype of the conventional takeoff and landing variant and plans to use this prototype to validate performance predictions, manufacturing processes, and reliability and maintainability models. According to the current schedule, however, the first demonstrations will occur after the critical design review, after most of the design drawings have been released, and after manufacturing has begun for many of the remaining test aircraft. Any significant design problems found during the prototype demonstrations would likely require more time and money for redesign efforts and retrofitting of test aircraft already in the manufacturing process. In addition to lacking mature technologies and design stability, the JSF program will lack critical production knowledge when it plans to enter low-rate initial production in 2007. Between 2007 and 2013, when the program is scheduled to move to full-rate production, it expects to buy nearly 500 JSF aircraft—20 percent of its planned total buys—at a cost of roughly $50 billion. Under the program’s preliminary plan, it expects to increase low-rate production from 5 aircraft a year to 143 aircraft a year, significantly increasing the financial investment after production begins. Between 2007 and 2009, the program plans to increase low-rate production spending from about $100 million a month to over $500 million a month, and before development has ended and an integrated aircraft has undergone operational evaluations, DOD expects to spend nearly $1 billion a month. To achieve its production rate, the program will invest significantly in tooling, facilities, and personnel. 
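The production figures in the preceding paragraph imply some straightforward arithmetic, shown below as a rough check; all inputs are the report's rounded figures, so the outputs are orders of magnitude rather than precise values.

```python
# Rough check of the low-rate initial production figures cited in this report.
# All inputs are the report's rounded numbers; outputs are orders of magnitude.
planned_lri_aircraft = 500      # aircraft to be bought before full-rate production
planned_lri_cost = 50e9         # roughly $50 billion

avg_unit_cost = planned_lri_cost / planned_lri_aircraft
print(f"${avg_unit_cost / 1e6:.0f} million average per aircraft")    # $100 million

# Ramping from 5 to 143 aircraft a year is nearly a 29-fold rate increase.
rate_increase = 143 / 5
print(f"about {rate_increase:.0f}x production-rate increase")        # about 29x

# At the peak low-rate spending the report cites, nearly $1 billion a month:
monthly_peak = 1e9
print(f"${monthly_peak * 12 / 1e9:.0f} billion a year at that rate")
```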
According to contractor officials, an additional $1.2 billion in tooling alone would be needed to ramp up the production rate to 143 aircraft a year. Over half of this increase would be needed by 2009—more than 2 years before operational flight testing begins. Despite this substantial investment, the key event to support the decision to enter low-rate production in 2007 is the JSF’s first flight. Significant commitments will thus be made to JSF production before requisite knowledge is available. This is a much lower standard than called for by best practices. The following are examples of technology, design, and production knowledge that will not have been achieved at the time JSF enters low-rate initial production. Technology: According to information provided by the program office, only one of JSF’s eight critical technologies is expected to be demonstrated in an operational environment by the 2007 low-rate production decision. The remaining seven technologies, which include the complex mission systems and prognostics and health maintenance systems, are not expected to be mature prior to entering production. (See app. III for the program office’s projected time frames for demonstrating the eight critical technologies.) Design: Low levels of design knowledge will continue beyond the production decision. Only about 40 percent of the 17 million lines of code needed for the system’s software will have been released. The complex software needed to integrate the advanced mission systems is not scheduled for release until about 2010—3 years after JSF is scheduled to enter production. In addition, most structural fatigue testing and radar cross section testing of full-up test articles—needed to verify the stability of the aircraft’s structural design—are not planned to be completed until 2010. Production: The program will not demonstrate that critical manufacturing processes are in statistical control when it enters production.
At that time, only one test aircraft will be completed and delivered. According to the contractor, manufacturing processes will not be under statistical control until after all of the system development and demonstration aircraft have been built. Also, flight testing of a fully configured and integrated JSF (with critical mission systems and prognostics technologies) is not scheduled until 2011. Operational testing to evaluate the effectiveness and suitability of the integrated system will continue until the full-rate production decision in 2013. The JSF, like many past DOD weapons programs, is very susceptible to discovering costly problems late in development when the more complex software and advanced capabilities are tested. In the case of the JSF, several hundred aircraft costing several billion dollars may already be on order or delivered, making any changes that result from testing costly to incorporate. Figure 4 shows the proposed low-rate initial production plan and how it overlaps with development and test activities. If the JSF program cannot meet aggressive delivery schedules for test aircraft, flight testing will be delayed. Flight testing provides key knowledge about JSF performance needed to make investment decisions for production. The JSF program is attempting to develop three different aircraft, for three different services. All want to fly at supersonic speeds, shoot air-to-air missiles, and drop bombs on a target, but they all have vastly different operational concepts. While each of the variants may look similar externally, subtle design differences provide many needed capabilities that are unique to each service. As a result, the program will attempt to simultaneously design, build, and test three distinct aircraft designs. This difficult task is further complicated by plans to manufacture and deliver, in a 5-year period, 15 flight test aircraft and 8 ground test articles.
When compared with schedules of other programs with fewer variables, JSF’s schedule is aggressive. For example, the F/A-22 program took almost 8 years to manufacture and deliver nine flight test aircraft and two ground test articles of a single aircraft design. While the first aircraft had only been in assembly for about 8 months, it was already behind schedule as of January 2005. According to the Defense Contract Management Agency, based on the manufacturing status of the center fuselage, wing, forward fuselage, and software development, the first flight, scheduled for August 2006, could be delayed from 2 to 6 months. Late engineering releases to the manufacturing floor have resulted in parts shortages and manufacturing inefficiencies. According to contractor data, as of January 2005, it had taken about 50 percent more labor hours than planned to complete manufacturing efforts. To execute its current acquisition strategy, the JSF program must obtain on average over $10 billion annually in acquisition funds over the next 2 decades. Regardless of likely increases in program costs, the sizable continued investment in JSF—estimated at roughly $225 billion over 22 years—must be viewed within the context of the fiscal imbalance facing the nation within the next 10 years. The JSF program will have to compete with many other large defense programs, such as the Army’s Future Combat System and the Missile Defense Agency’s ballistic missile defense system, for funding during this same time frame. There are also important competing priorities external to DOD’s budget. Fully funding specific programs or activities will undoubtedly create shortfalls in others. Funding challenges will be even greater if the program fails to translate current cost estimates into actual costs. For example, we estimate that another 1-year delay in JSF development would cost $4 billion to $5 billion based on current and expected development spending rates. 
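The funding arithmetic cited above can be verified directly; the inputs are the report's own estimates, and the averaging is our simplification.

```python
# Check of the average annual funding requirement cited in this report.
total_remaining = 225e9   # estimated remaining JSF acquisition investment
years = 22
avg_per_year = total_remaining / years
print(f"${avg_per_year / 1e9:.1f} billion per year on average")   # $10.2 billion

# The report estimates a 1-year development slip would cost $4-5 billion,
# roughly the annual development spending rate during that period.
slip_low, slip_high = 4e9, 5e9
print(f"1-year slip: ${slip_low / 1e9:.0f} to ${slip_high / 1e9:.0f} billion")
```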
A 10 percent increase in production costs would amount to $20 billion. The JSF program’s latest planned funding profile for development and procurement—as of December 2003—is shown in figure 5. The program’s acquisition strategy is to concurrently develop, test, and produce the JSF aircraft, creating a risky approach. Because of this risk, the program office plans to place initial production orders on a cost reimbursement basis. According to program officials, a cost reimbursable contract is necessary during the initial production phase because of the uncertainties inherent in concurrent development and production programs that prevent the pricing of initial production orders on a fixed-price basis. Cost reimbursement contracts provide for payment of allowable incurred costs, to the extent prescribed in the contract. They are used when uncertainties involved in contract performance do not permit costs to be estimated with sufficient accuracy to use any type of fixed-price contract. Cost reimbursement contracts require only the contractor’s “best efforts,” thus placing a greater cost risk on the buyer—in this case, DOD. In contrast, a fixed-price contract provides for a pre-established price and places more risk and responsibility for costs and resulting profit or loss on the contractor and provides more incentive for efficient and economical performance. However, to negotiate a fixed-price contract requires certainty about the item to be purchased, which in the case of the JSF will not be possible until late in the development program. The program plans to transition to a fixed-price contract once the air vehicle has a mature design, has been demonstrated in flight test, and is producible at established cost targets. According to program officials, this transition will occur sometime before full-rate production begins in 2013.
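The difference in cost risk between the two contract types can be illustrated with a toy payment model. This is a deliberately simplified sketch, not how actual JSF contract pricing works; fee handling under real cost-reimbursement contracts is more complex, and the dollar amounts below are hypothetical.

```python
# Toy illustration (not actual JSF contract pricing) of who bears cost growth
# under the two contract types discussed above. Dollar amounts are hypothetical
# and fee handling is greatly simplified.

def government_pays_cost_reimbursement(actual_cost, fee):
    """Buyer reimburses allowable incurred costs plus a fee, so cost
    growth lands on the buyer (here, DOD)."""
    return actual_cost + fee

def government_pays_fixed_price(agreed_price):
    """Buyer pays the pre-established price regardless of actual cost."""
    return agreed_price

def contractor_profit_fixed_price(agreed_price, actual_cost):
    """Under fixed price, cost growth erodes contractor profit instead."""
    return agreed_price - actual_cost

estimate, actual = 100e6, 120e6   # hypothetical unit estimate and actual cost
fee = 8e6

print(government_pays_cost_reimbursement(actual, fee) / 1e6)        # 128.0 (DOD absorbs the overrun)
print(government_pays_fixed_price(estimate + fee) / 1e6)            # 108.0 (price fixed up front)
print(contractor_profit_fixed_price(estimate + fee, actual) / 1e6)  # -12.0 (contractor takes a loss)
```

The sketch shows why a fixed price requires certainty about the item being bought: neither party can sensibly agree on `agreed_price` while the design is still changing.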
The program office believes the combination of the early concept development work, the block development approach, and what it characterizes as the relatively small numbers of aircraft in the initial production buys allow decisions to be made earlier than normal with an acceptable level of risk. The JSF program is at a crossroads. DOD has not been able to deliver on its initial promises, and the sizable investment DOD plans to make over the next few years greatly raises the stakes to meet future promises. Given the many uncertainties surrounding JSF’s development, program officials need more time to gain knowledge before committing to a business case. JSF’s failure to adequately match requirements and resources has already resulted in increases in cost, schedule, and performance estimates, and a reduction in DOD’s buying power. The new business case must also be accompanied by an acquisition strategy that adopts an evolutionary approach to product development—one that enables knowledge-based investment decisions to maximize remaining program dollars. While the warfighter may not receive the ultimate capability initially, an evolutionary approach provides a useful product sooner and in sufficient quantities to start replacing the rapidly aging legacy fighter and attack force. The decisions DOD makes now and over the next 2 years will greatly influence the efficiency of its remaining funding—over 90 percent of the $245 billion estimated total program costs. Chief among these are the investments needed to increase production to 143 aircraft a year, increasing production expenditures from $100 million a month to $1 billion a month by 2013. While delays are never welcomed, time taken by DOD now to gain more knowledge and reduce risk before increasing its investment may well save time and money later in development and production. Now is the time to get the strategy right for delivering on the remainder of the investment. 
With an evolutionary, knowledge-based plan in place, DOD managers will be in a better position to succeed in delivering needed capabilities to the warfighter within budgeted resources. Given that DOD has invested only about 10 percent of the estimated cost to develop and produce the JSF aircraft, and that significant investments are planned in the next few years that can lock the program into a higher-risk acquisition, we recommend the Secretary of Defense take the following two actions to increase the likelihood of having a successful program outcome by delivering capabilities to the warfighter when needed and within available resources: (1) Establish an executable program consistent with best practices and DOD policy regarding evolutionary acquisitions. DOD officials should define an affordable first increment, with its own business case that clearly defines the warfighter’s most immediate needs and accurately identifies the resources required to deliver on this needed capability. The business case should be established with a high degree of confidence based on known constraints about technology, engineering knowledge, time, and money. For those warfighter needs that cannot be accommodated within this first increment, the program should outline a strategy to meet these needs through subsequent increments, each dependent on having sufficient product knowledge to start system development and demonstration. Each increment should be managed as a distinct acquisition with its own business case for supporting the investment. (2) Develop and implement a knowledge-based acquisition approach, as called for by best practices and DOD’s acquisition policy, an approach that ensures attainment and use of demonstrated product knowledge before making future investments for each product increment.
Before increasing the investment in production resources (tooling, materials, and personnel) greater than investments already in place to support the manufacturing of development test aircraft, the Secretary should ensure knowledge consistent with best practices is captured. This should help minimize the number of low-rate initial production aircraft DOD procures on a cost reimbursement basis, reducing the potential financial risk to the government. The Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics) provided us with written comments on a draft of this report. The comments appear in appendix I. DOD partially concurred with our recommendation that the Secretary establish an executable program that includes an affordable first increment with its own business case that clearly defines the warfighter’s most immediate needs and accurately identifies the resources required to deliver on this capability. DOD stated that the JSF program acquisition strategy is based on an appropriate balance of technical, cost, and schedule risk considerations to achieve program objectives. Warfighter representatives are involved in determining the content for each block capability, and technology maturity is factored into the decision plan that has been endorsed by DOD leadership. DOD stated its JSF management practices achieve the objectives of the GAO recommendation. We believe DOD’s acquisition strategy will not provide the full benefits of an evolutionary approach as suggested by DOD’s policy and best practices. DOD has not structured the JSF development program into increments managed as separate acquisitions with their own cost, schedule, and decision milestones, making the likelihood of successful program outcomes low. The JSF strategy resembles other past major acquisition programs that have attempted to achieve the ultimate capability in a single development increment.
DOD has allowed technology development to spill over into product development, weakening any foundation for program cost or schedule estimates. This has led to poor outcomes for other programs, such as the F/A-22 and Comanche, where lengthy and costly development efforts resulted in either program cancellation or a significant reduction in the number of systems to be acquired, a real loss in DOD buying power. Without a true evolutionary approach supported by a business case for each increment, it will be difficult for the JSF program to meet product requirements within current estimates of time and money. DOD also partially concurred with our recommendation to develop and implement a knowledge-based acquisition approach, which ensures attainment and use of demonstrated product knowledge before making future investments for each product increment. The department agrees that a knowledge-based approach is critical to making prudent acquisition decisions and stated that its current JSF acquisition strategy incorporates this type of approach. The department admits it has accepted some concurrency between development and production to reduce schedule and cost, but it will consider the production readiness of the JSF design at the low-rate and full-rate production decision milestones. It states that the new program plan includes clear entry and exit criteria for critical milestones to ensure technologies are mature and required incremental objectives are achieved before obligating funds. DOD stated that it conducts regular program reviews, and the Defense Acquisition Board will review program readiness prior to making any milestone decision. The frequent rotation of program leadership ensures ongoing cooperative oversight of emerging challenges and program decisions, and ensures accountability for the implementation of those decisions. 
Finally, DOD states that the acquisition strategy is consistent with acquisition directives and ensures the department commits resources only after determining that specific developmental or knowledge-based criteria are achieved. We believe the JSF’s acquisition strategy will not capture the right knowledge at the right time for informed decisions on future investments—over $200 billion. The program does not have the practices in place to capture knowledge at key junctures. DOD will not have captured knowledge before production starts that ensures the design is mature, reliable, and works or that manufacturing processes are in control—keys to successful outcomes in the production phase. Further, the large investments planned in production capability for the JSF over the next few years are vulnerable to costly changes as the aircraft is still being designed and tested. DOD has historically developed new weapon systems in a highly concurrent environment that usually forces acquisition programs to manage technology, design, and manufacturing risk at the same time. While DOD believes it can manage the risk of concurrent development and production by holding regular program reviews and with entrance and exit criteria for decisions, DOD’s own experience has shown this approach to be risky and often not totally effective. This has been DOD’s traditional approach to weapons acquisition, the same approach that has led to programs costing significantly more than planned and taking much longer to develop. This environment has made it difficult to make informed decisions because appropriate knowledge has not been available at key decision points. If decisions are tied to the availability of critical knowledge, program managers can be held accountable for the timely capture of that knowledge instead of less precise or ill-defined criteria included in risk reduction plans.
DOD’s practice of frequently changing program managers also decreases accountability because commitments made today will likely not be carried through by the same managers who made the commitments. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; and the Director of the Office of Management and Budget. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or Michael Hazard at (937) 258-7917. Other staff making key contributions to this report were Marvin Bonner, Matthew Drerup, Matthew Lea, David Schilling, Karen Sloan, and Adam Vodraska. To determine the status of the Joint Strike Fighter (JSF) business case for delivering new capabilities to the warfighter, we compared the original program estimates with current estimates. For development, we used the program estimates that justified the program when it started in 1996. This was the point at which JSF transitioned from a technology development environment to an acquisition program environment, with the commitment to deliver a family of strike aircraft that meets the needs of the Air Force, Navy, and Marine Corps. At that time, total production, acquisition, and ownership costs had not been estimated. However, the program had estimated the unit flyaway costs for each variant. The total production, acquisition, and ownership estimates were first established to support the decision to enter the system development and demonstration phase in 2001. We used these estimates as the baseline for these costs. We identified changes in costs, quantities, and schedules as well as the causes for the changes. We also identified program conditions that may affect these estimates in the future.
To accomplish this, we reviewed management plans, cost reports, progress briefings, program baselines, risk reports, and independent program assessments. We also interviewed officials from the Department of Defense’s (DOD) acquisition program management office and prime contractor. To evaluate whether the current acquisition plan follows an evolutionary, knowledge-based approach to meeting business case goals in the future, we applied GAO’s methodology for assessing risks in major weapon systems. This methodology is derived from best practices and experiences of leading commercial firms and successful defense acquisition programs. We reviewed Office of the Secretary of Defense (OSD), program office, and prime contractor processes and management actions. We compared the program’s plans and results to date against best practice standards in capturing product knowledge in terms of technology, design, and production maturity information and in applying knowledge to support major program investment decisions. We reviewed management plans, acquisition strategies, test plans, risk assessments, and program status briefings. We identified gaps in product knowledge, reasons for these gaps, and the risks associated with moving forward with inadequate knowledge at future decision points. We also reviewed DOD’s acquisition policy to determine whether JSF’s approach met its intent. In performing our work, we obtained information and interviewed officials from the JSF Joint Program Office, Arlington, Virginia; Lockheed Martin Aeronautical Systems, Fort Worth, Texas; Defense Contract Management Agency, Fort Worth, Texas; Institute for Defense Analyses, Alexandria, Virginia; and offices of the Director, Operational Test and Evaluation, and Acquisition, Technology and Logistics, which are part of the Office of Secretary of Defense in Washington, D.C. 
Includes integration of propulsion, vehicle management system, and other subsystems as they affect aircraft stability, control, and flying qualities (especially short takeoff and vertical landing). Aircraft improvements are to reduce pilot workload and increase flight safety.

Involves the ability to detect and isolate the cause of aircraft problems and then predict when maintenance activity will have to occur on systems with pending failures. Life-cycle cost savings are dependent on prognostics and health management through improved sortie generation rate, reduced logistics and manpower requirements, and more efficient inventory control.

Involves designing an integrated support concept that includes an aircraft with supportable stealth characteristics and improved logistics and maintenance functions. Life-cycle cost savings are expected from improved logistics and maintenance functions. Life-cycle cost savings are expected from low observable maintenance techniques and streamlined logistics and inventory systems.

Includes areas of electrical power, electrical wiring, environmental control systems, fire protection, fuel systems, hydraulics, landing gear systems, mechanisms and secondary power. Important for reducing aircraft weight, decreasing maintenance cost, and improving reliability.

Includes the ability to use commercial-based processors in an open architecture design to provide processing capability for radar, information management, communications, etc. Use of commercial processors reduces development and production costs, and an open architecture design reduces future development and upgrade costs.

Includes advanced integration with communication, navigation, and identification functions and electronic warfare functions through improved apertures, antennas, modules, radomes, etc. Important for reducing avionics cost and weight, and decreasing maintenance cost through improved reliability.

Involves decreasing pilot workload by providing information for targeting, situational awareness, and survivability through fusion of radar, electronic warfare, and communication, navigation, and identification data. Improvements are achieved through highly integrated concept of shared and managed resources, which reduce production costs, aircraft weight, and volume requirements, in addition to providing improved reliability.

Involves lean, automated, highly efficient aircraft fabrication and assembly techniques. Manufacturing costs should be less through improved flow time, lower manpower requirements, and reduced tooling cost.
Under the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, GAO is required to review the Joint Strike Fighter (JSF) program annually for the next 5 years. This is the first GAO report, and it (1) analyzes the JSF program's business case for delivering new capabilities to the warfighter and (2) determines whether the JSF program's acquisition strategy follows an evolutionary, knowledge-based approach. Also, the act requires GAO to certify whether we had access to sufficient information to make informed judgments on the matters contained in our report. GAO found that the original business case for the JSF program has proven to be unexecutable. DOD now plans to buy 535 fewer aircraft than originally planned. Because of increases in total program costs and program acquisition unit costs, DOD's buying power has been reduced: it is now buying fewer JSFs at a higher investment than originally planned. The first delivery of initial operational capabilities to the warfighter has been delayed by 2 years so far. The program's current acquisition strategy does not fully follow the intent of DOD's evolutionary, knowledge-based acquisition policy that is based on best practices. An evolutionary, knowledge-based strategy will be necessary to successfully execute a new business case in the future. Instead, the program plans to concurrently develop the JSF technologies, integrate and demonstrate the expected product design, and produce deliverable fighters, which is a risky approach. Finally, as a result of a lengthy program replanning effort that had been in process during most of 2004, GAO did not have access to the cost estimate expected to be contained in the JSF's Selected Acquisition Report, to be delivered to Congress in the spring of 2005. At the time of GAO's review, JSF program officials were still collecting the necessary information to develop and complete the estimate.
Therefore, GAO's review was limited to the estimated program costs contained in the December 31, 2003, Selected Acquisition Report.
Mr. Chairman and Members of the Subcommittee: I am pleased to be here today to assist the Subcommittee in its review of the Commodity Futures Trading Commission’s (CFTC) strategic plan. Hearings such as this one are an important part of assuring that the intent of the Government Performance and Results Act of 1993 (GPRA or Results Act) is met, and we commend you, Mr. Chairman, for holding this hearing. The consultative process provides an important opportunity for Congress and the executive branch to collectively ensure that agency missions are focused, goals are results-oriented, and strategies and funding expectations are appropriate. As you know, the Results Act required executive agencies to complete their initial strategic plans by September 30, 1997, and CFTC met this requirement. My testimony today discusses our review of CFTC’s strategic plan. We specifically determined whether the plan contained each of the six components required by the Results Act and assessed each component’s strengths and weaknesses. We also reviewed the extent to which CFTC consulted with stakeholders, including the other federal financial market regulators. Finally, we identified challenges that CFTC faces in addressing the requirements of the Results Act. CFTC’s strategic plan reflects a concerted effort by the agency to address the requirements of the Results Act. Although the plan could be strengthened in some areas, it compares favorably with the plans of other federal financial regulators that we have reviewed. On the basis of our review, we found that CFTC’s plan contained all of the components required by the Results Act but that some of the components could be strengthened. We also found that the plan could be improved by additional stakeholder input, including interagency coordination. 
Finally, due to the complex set of factors that determine regulatory outcomes, measuring program impacts presents challenges to CFTC in addressing the requirements of the Results Act, as it does for regulatory agencies in general. However, the use of program evaluations to derive results-oriented goals and to measure the extent to which those goals are achieved is one key to the success of the process. Notwithstanding the need for improvements, we recognize that CFTC’s strategic plan is a dynamic document that the agency intends to refine. My comments apply to the strategic plan that CFTC formally submitted to Congress and the Office of Management and Budget (OMB) in September 1997. In general, our assessment of CFTC’s plan was based on knowledge of the agency’s operations and programs; past and ongoing reviews of CFTC; results of work on other agencies’ strategic plans and the Results Act; discussions with CFTC, OMB, and Subcommittee staff; and other information available at the time of our assessment. The criteria we used to determine whether CFTC’s plan complied with the requirements of the Results Act were the Results Act itself and OMB guidance on preparing strategic plans (OMB Circular A-11, Part 2). To assess CFTC’s consultation with stakeholders and to identify challenges in implementing the Results Act, we relied on the results of our previous work and on discussions with CFTC and OMB officials. CFTC, an independent agency created by Congress in 1974, administers the Commodity Exchange Act, as amended. The principal purposes of the act are to protect the public interest in the proper functioning of the market’s price discovery and risk-shifting functions. In administering the act, CFTC is responsible for fostering the economic utility of the futures market by encouraging its efficiency, monitoring its integrity, and protecting market participants from abusive trade practices and fraud. 
Improving management in the federal sector will not be easy, but the Results Act can assist in accomplishing this task. The Results Act requires executive agencies to prepare multiyear strategic plans, annual performance plans, and annual performance reports. First, the Act requires agencies to develop a strategic plan covering the period of 1997 through 2002. As indicated in the Results Act and OMB guidance, each plan is to include six major components: (1) a comprehensive statement of the agency’s mission, (2) the agency’s long-term goals and objectives for all major functions and operations, (3) a description of the approaches (or strategies) for achieving the goals and the various resources needed, (4) an identification of key factors, external to the agency and beyond its control, that could significantly affect its achievement of the strategic goals, (5) a description of the relationship between the long-term strategic goals and annual performance goals, and (6) a description of how program evaluations were used to establish or revise strategic goals and a schedule for future evaluations. In developing their strategic plans, agencies are to consult with Congress and solicit the views of stakeholders. Second, the Results Act requires executive agencies to develop annual performance plans covering each program activity set forth in their budgets. The first annual performance plans, covering fiscal year 1999, are to be provided to Congress after the President’s budget is submitted to Congress in early 1998. An annual performance plan is to contain the agency’s annual goals, measures to gauge performance toward meeting its goals, and resources needed to meet its goals. And third, the Results Act requires executive agencies to prepare annual reports on program performance for the previous fiscal year. The performance reports are to be issued by March 31 each year, with the first (for fiscal year 1999) to be issued by March 31, 2000. 
In each report, the agency is to compare its performance against its goals, summarize the findings of program evaluations completed during the year, and describe the actions needed to address any unmet goals. Based on our review, we found that CFTC’s strategic plan contains all of the six major components required by the Results Act. The plan defines the agency’s mission, establishes goals, lists activities to be performed to achieve the goals, identifies key factors affecting the achievement of the goals, discusses the relation between the goals of the strategic and annual performance plans, and addresses methods for evaluating the agency’s programs. However, we identified several areas in which CFTC could improve the plan as it is revised and updated. Consistent with the OMB guidance, the strategic plan contains a brief mission statement that broadly defines CFTC’s basic purposes: to protect market users and the public from abusive practices and to foster open, competitive, and financially sound futures and option markets. In addition, the accompanying background of the mission statement defines the agency’s core responsibilities and discusses the agency’s enabling legislation. Consistent with the OMB guidance, the strategic plan describes CFTC’s goals and general objectives, providing staff with direction for fulfilling the agency’s mission. The agency’s three goals are to (1) protect the economic functions of the commodity futures and options markets, (2) protect market users and the public, and (3) foster open, competitive, and financially sound markets. The plan further defines each goal in terms of a number of outcome objectives. For example, under goal two, the plan includes the outcome objectives of promoting compliance with and deterring violations of federal commodities laws as well as requiring commodities professionals to meet high standards. 
The OMB guidance notes that a strategic plan’s general goals and objectives should be stated in a manner that allows for future assessment of whether the goals and objectives are being achieved. Although the general goals and outcome objectives support the agency’s mission, most could benefit by being restated in a way that facilitates future assessment of whether they have been achieved. Examples of objectives that could be restated include overseeing markets used for price discovery and risk shifting as well as promoting markets free of trade practice abuse. Consistent with the OMB guidance, the strategic plan lists key activities that staff are to perform to accomplish the outcome objectives and, in turn, general goals. For example, an outcome objective of goal three is to facilitate the continued development of an effective, flexible regulatory environment. The specific activities to be performed for this objective include providing regulatory relief, as appropriate, to foster the development of innovative transactions and participating in the President’s Working Group on Financial Markets to coordinate efforts among U.S. financial regulators. The plan also discusses actions for communicating accountability to CFTC managers and staff. These actions include instituting a performance management system to create a more effective communication tool for managers and staff and using the annual performance plan to better communicate specific goals and performance levels to staff. The OMB guidance notes that a strategic plan should briefly describe the resources needed to achieve its goals and objectives, for example, in terms of operational processes, staff skills, and technologies, as well as human, capital, and other resources. 
The guidance further notes that a plan should include schedules for initiating and completing significant actions as well as outline the process for communicating goals and objectives throughout the agency and for assigning accountability to managers and staff for achieving objectives. Although CFTC’s plan lists specific activities to be performed to achieve its goals and objectives, it could be made more informative by discussing the resources needed to perform the activities and by providing schedules for initiating and completing significant actions. Similarly, the plan’s discussion of communicating accountability could be expanded to address how CFTC will assign accountability to managers and staff for achieving objectives. Finally, the strategic plan mentions that the annual plan establishes indicators and targets with the goal of ensuring that day-to-day activities are appropriately defined and measured. According to the OMB guidance, a strategic plan should briefly outline the type, nature, and scope of the annual performance goals and the relevance and use of these goals in helping determine whether the strategic plan’s goals and objectives are being achieved. The linkage between the two plans is important because a strategic plan’s goals and objectives establish the framework for developing the annual performance plan. Moreover, annual performance goals indicate the planned progress in that particular year toward achieving the strategic plan’s goals and objectives. While CFTC’s strategic plan discusses performance measures, it does not include performance goals that could be used to indicate the planned progress made each year toward achieving the general goals and objectives. Moreover, its measures focus on activities that are generally not stated in a manner that allows for future assessments and that may not always measure the intended outcomes. 
Examples of such measures include “potential violators deterred,” “informed market users,” and “high level of compliance fostered.” CFTC could strengthen its plan by discussing performance goals and developing more results-oriented performance measures against which actual performance can be compared. As discussed below, regulatory agencies, such as CFTC, face barriers in developing such measures. Consistent with the OMB guidance, the strategic plan discusses some external challenges that could alter CFTC’s ability to meet its goals and objectives, and it also discusses the strategies for meeting such challenges. The external challenges include the growing use of over-the-counter derivatives; structural changes in the financial services industry, including the convergence of the securities, futures, insurance, and banking industries; and globalization of financial markets. Strategies to address such challenges include fostering strong relationships with foreign authorities and responding to structural changes to ensure a level playing field as the futures, insurance, securities, and banking industries become more integrated. The plan also identifies internal challenges, including diminishing resources, recruiting and retaining qualified staff, and remaining abreast of technology. Strategies to address such challenges include reviewing resource requirements for operations and programs to ensure sound fiscal management, setting standards for staff recruitment, and implementing the agency’s data processing plan. According to OMB guidance, a strategic plan should not only discuss key external factors but also indicate their link to particular goals and describe how the factors could affect the achievement of the goals. While the plan discusses external factors and strategies for addressing them, the link between particular factors and goals is not clear. CFTC could strengthen its plan by describing how the external factors are linked with particular goals and how a particular goal could be affected by the external factors. 
Also, the plan might benefit from a discussion of external factors that represent significant challenges for the financial industry, such as those relating to the “year 2000” computer dating problem and those relating to proposals for revising the Commodity Exchange Act that could affect CFTC’s jurisdiction and that of other federal financial market regulators. The strategic plan specifies that CFTC will use methods and processes that are already in place to evaluate how well it is implementing its strategic and annual performance plans for the first 3 years. According to the plan, these processes are to provide information on, among other things, program accomplishments, staff activities, and CFTC’s financial condition and resource usage. However, the plan also explains that the reporting process related to program accomplishments will be evaluated to determine how it may be used for reporting on program progress toward meeting the goals, outcome objectives, and activities in the strategic plan as well as for setting overall priorities and allocating resources consistent with those priorities. Similarly, reviews and evaluations are described for the systems related to staff activities and resource usage. As such, we note that CFTC’s evaluations are to be of its existing measurement and monitoring systems and that CFTC does not appear to be planning any evaluations of the manner and extent to which its programs achieve their objectives. The OMB guidance notes that a strategic plan should briefly describe program evaluations used to prepare the plan and provide a schedule for future evaluations outlining the methodology, scope, and issues to be addressed. CFTC’s plan does not mention whether any evaluations were used to prepare the plan; however, CFTC officials told us that no evaluations were used. As CFTC revises and updates its plan, the plan could be made more useful by including the results of program evaluations used to prepare the plan. 
It could also be made more informative by discussing the timing and scope of future program evaluations as well as the particular issues to be addressed. In developing their strategic plans, agencies are to consult with Congress and solicit the views of stakeholders—those potentially affected by or interested in the plan. Agencies have discretion in determining how this consultation is conducted. The OMB guidance notes that some general goals and objectives will relate to cross-agency functions, programs, or activities. In such cases, it instructs agencies to ensure that appropriate and timely consultation occurs with other agencies during the development of strategic plans with cross-cutting goals and objectives. CFTC’s strategic plan identifies numerous stakeholders, stating that they are valuable resources that must be tapped to provide critical feedback on the agency’s goals and priorities. The stakeholders identified in the plan include futures exchanges, the National Futures Association, market users, and other federal departments and agencies. The plan also discusses CFTC’s working relationships with other organizations and jurisdictions. For example, it notes that CFTC staff work through various intergovernmental partnerships to consult on issues of importance to CFTC and other federal financial regulators, including federal securities and bank regulators. According to CFTC officials, stakeholders were contacted and asked to provide feedback on the draft plan, and the draft plan was provided to the other federal financial market regulators for comment. Nonetheless, CFTC officials told us that, as of October 16, 1997, only two parties other than Congress, OMB, and, at your request, GAO had provided the agency feedback on the plan. CFTC officials told us that they plan to use the same approach in developing future plans as they did in developing the current plan. 
CFTC’s lack of success with this approach suggests that the agency should consider alternative approaches. In enacting the Results Act, Congress realized that the transition to results-oriented management would not be easy. Moving to a results orientation could be especially difficult for CFTC and other regulatory agencies. In a June 1997 report, we analyzed a set of barriers facing certain regulatory agencies in their efforts to implement the Results Act. These barriers included the following: (1) problems collecting performance data, (2) complexity of interactions and lack of federal control over outcomes, and (3) results realized only over long time frames. To some extent, each of these barriers is applicable to CFTC. As implementation of the Results Act proceeds, CFTC, like other regulatory agencies, is likely to continue encountering barriers to establishing results-oriented goals and measures and, as a result, to evaluating program impact. Although developing performance measures and evaluating program impact are difficult, it is important that CFTC and other regulatory agencies continue their efforts toward that end. Any new methods or research approaches developed by one agency could also be useful to others. We look forward to continuing to work with the Congress and CFTC to ensure that the requirements of the Results Act are met. Mr. Chairman, this concludes my prepared statement. I will be pleased to respond to any questions that you or Members of the Subcommittee may have. 
Pursuant to a congressional request, GAO assessed the Commodity Futures Trading Commission's (CFTC) strategic plan for compliance with the Government Performance and Results Act. GAO noted that: (1) CFTC's strategic plan contained all of the major components required by the Results Act; (2) there are several areas in which CFTC could improve its plan; (3) the plan defines goals and objectives that supported CFTC's mission, but most of these could benefit by being restated in a way that would facilitate future assessment; (4) the plan identifies activities for achieving CFTC's goals and objectives, but could be more informative by including the resources needed for the activities, schedules for completing key actions, and ways for assigning accountability to managers and staff; (5) the plan's discussion of the relationship between goals in the annual and strategic plans could be strengthened by including more results-oriented performance measures that could be used to reflect progress made toward achieving its goals; (6) the plan identifies some key external factors that could affect the agency's ability to achieve its goals, but the plan could be improved by describing how such factors are linked to particular goals and how a particular goal can be affected by a specific factor; (7) the plan indicates that CFTC will use its existing processes to evaluate its programs, but the plan could be expanded to include information on the timing and scope of future evaluations; (8) the draft plan was made available to stakeholders late in the process and reflects limited consultation with stakeholders during plan development; (9) the plan does not discuss how CFTC will incorporate stakeholders' views in the development of future plans; and (10) although developing performance measures and measuring program impacts present challenges to CFTC and to other regulatory agencies in addressing the requirements of the Results Act, it is important that CFTC and these agencies continue their efforts toward that end.
In the late 1980s, changes in the national security environment resulted in a Defense infrastructure with more bases than the Department of Defense (DOD) needed. To enable DOD to close unneeded bases and realign others, Congress enacted base closure and realignment (BRAC) legislation that instituted base closure rounds in 1988, 1991, 1993, and 1995. For the 1991, 1993, and 1995 rounds, special BRAC Commissions were established to recommend specific base closures and realignments to the President, who, in turn, sent the Commissions' recommendations and his approval to Congress. A special commission was also established for the 1988 round that made recommendations to the Committees on Armed Services of the Senate and House of Representatives. For the 1988 round, legislation required DOD to complete its closure and realignment actions by September 30, 1995. For the 1991, 1993, and 1995 rounds, legislation required DOD to complete all closures and realignments within 6 years from the date the President forwarded the recommended actions to the Congress. BRAC has afforded DOD the opportunity to reduce its infrastructure and free funds for high priority programs such as weapons modernization and force readiness. As the closure authority for the last round expires in fiscal year 2001, DOD has reported reducing its domestic infrastructure by about 20 percent and saving billions of dollars that would otherwise have been spent supporting unneeded infrastructure. In essence, reported savings include both distinct savings that actually occur during the budget year or years a BRAC decision is implemented and cost avoidances during future years--costs that DOD would have incurred if BRAC actions had not taken place. Some of the savings are one-time, such as canceled military construction projects. The vast majority of BRAC savings represent a permanent and recurring avoidance of spending that would otherwise occur, such as for personnel. 
Over time, the value of the recurring savings is the largest and most important portion of overall BRAC savings. DOD reports its BRAC cost and savings estimates to the Congress on a routine basis as part of its annual budget requests. In preparing the estimates, DOD guidance to the military services and defense agencies states that the estimates are to be based on the best projection of what savings will actually accrue from approved realignments and closures. In this regard, prior year estimated savings are required to be updated to reflect actual savings when available. The Congress recognized that an up-front investment was necessary to achieve BRAC savings and established two accounts to fund certain implementation costs. These costs included (1) relocating personnel and equipment from closing to gaining bases, (2) constructing new facilities at gaining bases to accommodate organizations transferred from closing bases, and (3) remedying environmental problems on closing bases. DOD, in its annual budget request, provides the Congress with estimated cost data relative to the implementation of each BRAC round. For the most part, these estimated costs are routinely updated as they are recorded on an ongoing basis in DOD's financial accounting systems. Since we last reported on this issue in December 1998, DOD has increased its net savings estimate for the four BRAC rounds. DOD now estimates a net savings of about $15.5 billion through fiscal year 2001, an increase of $1.3 billion from the previously reported $14.2 billion. DOD data suggest that cumulative savings began to surpass cumulative costs in fiscal year 1998. The increase in net savings is attributable to a combination of lower estimated costs and greater estimated savings, as reported in DOD's fiscal year 2001 budget request and documentation. 
Overall, DOD has reduced its cost estimates from fiscal year 1999 to fiscal year 2001 for implementing BRAC by about $723 million and increased its savings estimates by about $610 million, resulting in a net savings increase of $1.3 billion. Table 1 summarizes the cumulative cost and savings estimates through fiscal year 2001 for the four BRAC rounds as reflected in DOD's fiscal years 1999 and 2001 BRAC budget requests and documentation, along with associated changes in the various costs and savings categories. In addition to the estimates shown in table 1, DOD now reports annual estimated recurring savings of $6.1 billion beyond fiscal year 2001, an increase from approximately $5.6 billion that DOD reported in fiscal year 1999. As shown in table 1, the cost estimates for implementing the four BRAC rounds have decreased by about $723 million, from $22.9 billion to $22.2 billion, with the largest portion of the decrease, about $359 million, attributable to lower reported environmental restoration costs through fiscal year 2001. Our analysis of the data shows that most, or about $313 million, of the environmental cost reduction occurred in the Navy BRAC account. Some of this can be attributed to shifting planned actions to future years. Further, estimated revenues generated from actions—such as land sales, property leases, and other reimbursements—have increased by $180 million, to $300 million, thereby increasing the offset to BRAC program cost estimates. According to the Air Force, its increased revenues resulted from the reporting of reimbursements received from the city of Chicago, Illinois, for the cost of moving an Air National Guard unit from O'Hare International Airport to Scott Air Force Base, Illinois, and from increased proceeds from land sales and property leases. In addition to reductions in estimated costs, DOD is reporting over $610 million in additional estimated savings through 2001 in its closure accounts. 
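The $1.3 billion net savings increase follows directly from the two component changes described above. A minimal arithmetic sketch, using the rounded figures reported in DOD's budget documentation (in millions of dollars):

```python
# Rounded estimates from DOD's fiscal year 1999 and 2001 BRAC budget
# documentation, in millions of dollars.
cost_decrease = 723      # implementation cost estimates fell ($22.9B -> $22.2B)
savings_increase = 610   # estimated savings through fiscal year 2001 rose

# A lower cost estimate and a higher savings estimate both raise net savings,
# so the two changes add together.
net_savings_change = cost_decrease + savings_increase
print(net_savings_change)  # 1333, i.e., roughly the $1.3 billion increase
# ($14.2 billion -> $15.5 billion in net savings through fiscal year 2001)
```

The same addition explains why the reported net savings figure moved from $14.2 billion to $15.5 billion between the two budget requests.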
Our analysis shows that more than half, or $381 million, of the $610 million increase in savings shown in table 1 is attributable to Air Force operation and maintenance. Air Force officials told us that the savings increase was attributable to actions at two bases—McClellan Air Force Base, California, and Kelly Air Force Base, Texas. While the Air Force did not provide an estimate for savings at these two bases in its fiscal year 1999 budget request because of uncertainties regarding the performance of the bases' workloads, it reported a $381 million savings estimate in its fiscal year 2001 budget request. Further, an additional $101 million in increased savings is due primarily to inflationary adjustments in the estimated post-implementation savings for the 1988, 1991, and 1993 rounds through fiscal year 2001. Post-implementation savings for the 1995 round do not begin accruing until fiscal year 2002. In addition to the revisions made to the cost and savings estimates through fiscal year 2001, DOD has also revised its annual recurring savings estimate for fiscal years 2002 and beyond. DOD is now projecting annual recurring savings of $6.1 billion for the four BRAC rounds, an increase of approximately $500 million from the $5.6 billion DOD reported in fiscal year 1999. Our analysis shows that the increase is attributable equally to an increase in the BRAC 1995 round savings estimate and to a reported increase in prior rounds' recurring savings caused by using an inflation factor to convert them into current year dollars. Our prior work, along with work by others including the Congressional Budget Office, the DOD Inspector General, and the Army Audit Agency, has shown that BRAC savings are real and substantial, and are related to cost reductions in key operational areas as a result of BRAC actions. At the same time, limitations have existed in DOD's efforts to track actual costs and savings over time, which limits the precision of its net savings estimate. 
Audits of BRAC financial records have shown that BRAC has enabled DOD to save billions of dollars, primarily through the (1) overall elimination or reduction of base support costs at specific installations, (2) elimination or reduction of military and civilian personnel costs, and (3) cancellation of military construction and family housing projects at closed or realigned bases. Our prior work as well as work of others has shown that eliminating or reducing base support costs at closed or realigned bases is a major contributor to generating BRAC savings. Savings are realized through a number of actions, such as terminating physical security, fire protection, utilities, property maintenance, accounting, payroll, and a variety of other services that have associated costs linked specifically to base operations. For example, as stated in an April 1996 report, our analysis of the operation and maintenance costs at eight closing installations from the 1988 and 1991 rounds indicated that base support costs had been reduced and that annual recurring savings would be substantial—about $213 million—after initial costs were recouped. DOD Inspector General and Army Audit Agency reports have also shown base support reductions at closing and realigning facilities as real and substantial, although not precise. The DOD Inspector General, in affirming savings for a sample of bases in the 1993 BRAC round, consistently found that the services had significantly reduced their operating budgets because of the closure process. The elimination or reduction of military and civilian personnel at closed or realigned bases is also a major contributor to generating savings. In an April 1998 report, DOD estimated that about 39,800 military personnel and about 71,000 civilian positions had been eliminated by BRAC, resulting in an overall recurring savings of about $5.8 billion annually. 
While we were not able to precisely reconcile these estimated reductions with actual BRAC-related end strength reductions in the services, we reported that the large number of personnel reductions was a significant contributor to the substantial savings achieved through BRAC. DOD Inspector General and Army Audit Agency reports have validated personnel savings at various BRAC locations, although the savings estimates were not well documented in many cases. In other cases, the personnel reductions were greater than estimated. For example, in a review of nine 1995 BRAC bases, the Army Audit Agency found that, in contrast to no savings being identified for the elimination of civilian personnel authorizations at tenant activities providing support to BRAC bases, over $13 million in net recurring savings had accrued. Additionally, the cancellation of planned military construction of facilities and family housing at closed or realigned bases contributes to the savings generated from BRAC. Prior DOD Inspector General and Army Audit Agency reports have affirmed savings attributable to such cancellations. For example, in a May 1998 report, the DOD Inspector General reported that, after a review of a Navy-reported savings of about $205 million from cancelled military construction projects in the 1993 round, the savings were actually $336 million, or $131 million more than reported. Finally, as we reported in 1998, DOD, as part of its budgeting process, has subtracted projected BRAC savings from the expected future cost of each service's funding plans in the Future Years Defense Program. While our work has consistently shown that savings from BRAC actions are expected to be substantial, we have also noted the cost and savings estimates are imprecise. This relates to the development of initial estimates and efforts to track changes in these estimates over time. 
While cost estimates are routinely updated and tracked in financial accounting systems, they are based on DOD obligations and not actual outlays, thereby adding a degree of imprecision to the actual costs and the basis for savings projections. Also, as we have previously reported, a fundamental limitation in DOD's ability to identify and track savings from BRAC closures and realignments is that DOD's accounting systems, like all accounting systems, are not oriented to identifying and tracking savings. Savings estimates are developed by the services at the time they are developing their initial BRAC implementation budgets and are reported in DOD's BRAC budget justifications. Because the accounting systems do not track savings, updating these estimates would require a separate tracking method or system. Our prior work has shown that the savings estimates have been infrequently updated and, unlike for estimated costs, no method or system has been established to track savings on a routine basis. Over time, this contributes to imprecision as the execution of closures or realignments may vary from the original plans. Further, because arguments can be made as to what costs or savings can be definitely attributed to BRAC, such as environmental restoration costs, the precision of the estimates comes into question. Nevertheless, we and others have consistently expressed the view that these factors are not significant enough to outweigh the fact that substantial savings are being generated from the closure process. In reports issued in November and December 1998, we concluded that, while closure and realignment savings for the four BRAC rounds would be substantial after initial costs were recouped, the estimates were imprecise. 
In particular, we noted that savings estimates were not being routinely updated and that federal economic assistance costs of over $1 billion that had been provided to communities and individuals impacted by BRAC were not included in DOD's reported costs. Those economic assistance costs now exceed $1.2 billion. While the inclusion of these costs as attributable to BRAC has the effect of delaying the point at which savings surpass costs, it does not negate the fact that the savings are substantial. A July 1998 Congressional Budget Office report also indicated substantial BRAC savings, even though there was imprecision in DOD's cost and savings estimates. In its comments on cost estimates, the Congressional Budget Office noted that not all BRAC-related costs are included in the estimates. As we had also pointed out, the Budget Office cited federal economic assistance costs as not being included in the estimates. Further, the Budget Office pointed out that operating units sometimes had borne unexpected costs when services at DOD facilities were temporarily impacted by BRAC actions. As to savings, the Congressional Budget Office stated its belief that DOD's estimate of $5.6 billion in annual recurring savings at that time was reasonable, given that the Budget Office's own estimate was about $5 billion annually. DOD Inspector General reports also pointed out substantial BRAC savings, despite imprecision in cost and savings estimates. In its May 1998 report on more than 70 bases closed or realigned during the 1993 BRAC round, the Inspector General found that, for the 6-year implementation period for carrying out the BRAC Commission's recommendations, the savings would overtake the costs sooner than expected. 
While DOD's original budget estimate indicated costs of about $8.3 billion and annual recurring savings of $7.4 billion during the implementation period, the Inspector General concluded that costs potentially could be reduced to $6.8 billion and that savings could reach $9.2 billion, a net savings of $2.4 billion. The Inspector General's report indicated that the greater savings were due to such factors as reduced obligations that were not adjusted to reflect actual disbursements, canceled military construction projects, and a lower increase in overhead costs at bases receiving work from closing bases. On the other hand, an Inspector General's review of 23 bases closed during the 1995 BRAC round noted that savings during the implementation period were overstated by $33.2 million, or 1.4 percent, and costs were overstated by $28.8 million, or 4.5 percent, of initial budget estimates. Also, the Army Audit Agency, in a July 1997 report on BRAC costs and savings, concluded that savings would be substantial after full implementation for ten 1995 BRAC round sites it had examined but that estimates were not exact. For example, the Agency reported that annual recurring savings beyond the implementation period, although substantial, were 16 percent less than the major commands' estimates. The difficulty in precisely identifying savings is further complicated if one considers the specific actions being undertaken under the BRAC process. For example, while environmental restoration costs are a valid BRAC expenditure, DOD reported that the vast majority of its BRAC environmental restoration costs would have been incurred whether or not an installation was impacted by BRAC. DOD acknowledges, however, that environmental costs under the BRAC process may have been accelerated in the shorter term. Others suggest that in some instances BRAC-related environmental cleanup may be done more stringently than would have been the case had the installation remained open. 
However, the marginal difference is not easily quantified and depends largely on the end use of the closed installation. To the extent that much of the environmental cost is not considered an additional cost to DOD, this has the effect of increasing net savings, especially considering that DOD estimates $7 billion in BRAC-related environmental costs through fiscal year 2001. DOD also expects to spend $3.4 billion in environmental costs beyond fiscal year 2001. This is a $1 billion increase over the $2.4 billion environmental cost estimate DOD reported in fiscal year 1999. According to DOD officials, this increase is attributable primarily to the inclusion of cleanup costs for unexploded ordnance, the refinement of cleanup requirements and DOD's cost estimates, and the use of more stringent cleanup standards due to changes in the end use of closed installations. While the $3.4 billion in environmental costs is not reflected in DOD's $6.1 billion annual recurring savings estimate, these costs are spread over many years and should have limited impact on cumulative long-term savings. A similar case can be made for new military construction at receiving bases under the BRAC process. While significant funds have been expended on new military construction (an estimated $6.7 billion through fiscal year 2001), the military did benefit from the improvement in its facilities infrastructure. While this is somewhat difficult to quantify precisely, it appears that some portion of the cost would have been incurred under DOD's facilities capital improvement initiatives. If so considered, this would also have the effect of increasing net BRAC savings. In commenting on a draft of this report on July 25, 2001, the Deputy Under Secretary of Defense for Installations agreed with our findings. This official also provided technical clarifications, which we have incorporated as appropriate. 
To determine the extent to which cost and savings estimates have changed over time, we compared the data contained in DOD's fiscal year 2001 BRAC budget request and documentation with similar data in the fiscal year 1999 budget request and documentation, which were the latest documents available since we last reported on this issue in December 1998. We noted revisions in the data and identified where major changes had occurred in the various cost and savings categories within the BRAC account. To the extent possible within time constraints, we discussed with officials of the Office of the Secretary of Defense and military services the rationale for those cases where the changes were significant, but we did not independently verify the validity of DOD's reported cost and savings data. We are continuing to examine the basis for the changes in DOD's cost and savings estimates and will discuss the issue in greater detail in an overall status report on BRAC that we expect to issue in early 2002. To assess the validity of the net savings estimates, we relied primarily on our prior BRAC reports and reviewed Congressional Budget Office, DOD, DOD Office of Inspector General, and service agency audit reports. As part of our ongoing broader review of BRAC issues, we are also examining the extent to which the military services have updated their cost and savings estimates since we last reported on this issue in December 1998; we will discuss that issue in more detail in the same status report. In assessing the accuracy of the cost and savings data, we reviewed the component elements that DOD considered in formulating its overall BRAC savings estimates. Because DOD did not include in its estimates federal expenditures to provide economic assistance to communities and individuals affected by BRAC, we collected these expenditure data from DOD's Office of Economic Adjustment and considered them in our analysis of the estimated BRAC savings. 
We conducted our review in June and July 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Director, Office of Management and Budget. We will also make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Key contributors to this report were Mark Little, James Reifsnyder, Michael Kennedy, and Tom Mahalek.
Through four rounds of base closures and realignments between 1988 and 1995, the Department of Defense (DOD) expected to reduce its domestic infrastructure and provide needed dollars for high priority programs, such as weapons modernization. Although DOD projects it will realize significant recurring savings from the closures and realignments, Congress continues to raise questions about how much, if any, money has been saved through the base closure process. Two GAO reports issued in late 1998 concluded that net savings from the four closure rounds were substantial but that the cost and savings estimates used to calculate the net savings were imprecise. This report reviews (1) the basis for DOD's recent increase in net savings projected to be realized from the closure process and (2) GAO's previous observations on the basis for savings from base closure and realignment actions and the precision of the cost and savings estimates. DOD's fiscal year 2001 budget request and documentation show that it now expects net savings of about $15.5 billion through fiscal year 2001 and about $6.1 billion in annual recurring savings thereafter, an increase from the $14.2 billion and about $5.6 billion, respectively, DOD reported in fiscal year 1999. GAO's analysis of the data showed that the net savings increase through fiscal year 2001 was due primarily to an overall reduction of about $723 million in reported costs and an increase of about $610 million in expected savings resulting from the closures. The net savings for the four rounds of base closures and realignments are substantial and are related to decreased funding requirements in specific operational areas. Reviews by the Congressional Budget Office, the DOD Inspector General, and the Army Audit Agency have affirmed that net savings are substantial after initial investment costs are recouped. 
However, those same reviews also showed that the estimates are imprecise and should be viewed as a rough approximation of the likely savings.
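The arithmetic behind the revised net-savings figure in this summary can be reconciled with a short illustrative script (a verification sketch only; the dollar amounts are the rounded figures reported above, so the reconciliation is necessarily approximate):

```python
# Illustrative check of the change in DOD's reported BRAC net savings.
# All amounts are in billions of dollars, rounded as reported above.
fy1999_net_savings = 14.2   # net savings through FY 2001, per FY 1999 budget data
fy2001_net_savings = 15.5   # net savings through FY 2001, per FY 2001 budget data

cost_reduction = 0.723      # overall reduction in reported costs
savings_increase = 0.610    # increase in expected savings

explained_change = cost_reduction + savings_increase
reported_change = fy2001_net_savings - fy1999_net_savings

# The two explained components sum to about $1.333 billion, which accounts
# for the roughly $1.3 billion increase within the rounding of the
# reported figures.
print(round(explained_change, 3))
print(round(reported_change, 1))
```

As the comments note, the small residual between the two figures reflects rounding in the published estimates rather than an unexplained change.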
To determine what experts and the available research indicate about the types of reentry programs and substance abuse programs that are effective or cost beneficial for juvenile offenders, we reviewed relevant literature, studies, and federal resources for juvenile justice programs, and interviewed federal officials and 26 juvenile justice experts. Specifically, to identify the types of programs to review, we conducted a literature search for studies and articles, including evaluations of juvenile reentry and juvenile substance abuse programs in the United States, that were published from May 30, 1999, through May 30, 2009. We chose this time frame, the past 10 years, because it provided us with an overview of the available research assessing the effectiveness of reentry and substance abuse programs, including unpublished and ongoing studies. We also consulted with OJJDP officials who coordinate research on juvenile justice programs and Department of Health and Human Services officials who oversee substance abuse and adolescent programs to obtain their recommendations for repositories—online databases that contain information on effective programs—and research studies and relevant Web sites for identifying types of reentry and substance abuse programs. Using these recommendations, information from relevant literature, and categories of program types used by OJJDP's Model Programs Guide, we identified five types of juvenile justice programs that are used to address reentry issues and five types of programs that are used to address substance abuse issues for juvenile offenders. With respect to substance abuse, we focused on programs that involved relapse prevention treatment for juvenile offenders with substance abuse histories. 
After consulting with experts and reviewing the literature, we excluded juvenile alcohol abuse programs and substance abuse programs for the general juvenile population, as well as for at-risk juveniles who are prone to, but have not yet developed, substance abuse problems. For instance, we excluded after school or recreation programs, conflict resolution programs, and school or classroom programs. While all of these programs may have a substance abuse component, this component is not designed to address juvenile offenders' actual substance abuse problems. After identifying the types of programs to be reviewed, we looked at online databases, academic research, and professional organizations to select subject matter experts—researchers and practitioners—to obtain their views on the types of programs that have been shown to be effective or cost beneficial and the basis they used for making such determinations. We specifically identified researchers who focus on juvenile reentry issues or substance abuse issues and practitioners who operate programs that address these issues. We chose 26 experts to interview as a result of this process. Specifically, we selected 13 individuals with expertise related to juvenile reentry programs, 7 individuals with expertise related to juvenile substance abuse programs, and 6 individuals with both juvenile reentry and substance abuse program expertise. We selected these experts based on several criteria, including their employment histories related to juvenile reentry and substance abuse programs and the number of years they spent studying, evaluating, or managing programs addressing juvenile reentry or substance abuse issues. We evaluated their experience by reviewing the studies the researchers had completed and determining the experience the practitioners had managing the types of juvenile reentry and substance abuse programs selected for our review. See appendix I for the list of experts we interviewed. 
We asked these experts to provide their views about the effectiveness of program types (e.g., drug courts), rather than about the effectiveness of individual intervention programs (e.g., a specific drug court program implemented in one county). While the Model Programs Guide, like other online repositories, contains information about the effectiveness of individual intervention programs, it does not provide information about the effectiveness of program types. As a result, we were interested in obtaining the experts' consolidated views of the effectiveness of program types. We also asked the experts to identify other program types—in addition to those that we explicitly asked about—that they considered to be effective or cost beneficial, but no additional program types were mentioned. In addition, we asked the experts to identify factors that in their view could help programs to achieve intended outcomes, such as reducing participants' recidivism, which are summarized in appendix II. While the results of these interviews cannot be generalized to reflect the views of all experts knowledgeable about juvenile reentry or substance abuse programs, we believe the interviews provided us with a good overview of the available research and valuable information about what program types are considered to be effective by subject matter experts. In addition, while we did not assess the methodological rigor of the studies and evaluations in our review, we corroborated expert testimony by reviewing and summarizing the studies or evaluations that experts cited as the basis for their opinions. We also provided the experts with a summary of their opinions to review in order to ensure that we correctly captured their views. 
To identify the extent to which OJJDP has efforts under way to disseminate information about effective juvenile justice programs and assess the extent to which OJJDP ensures the utility of the information provided, we reviewed documentation, such as OJJDP's annual reports outlining information dissemination efforts, OJJDP publications, and a contract related to disseminating training information on effective programs. We interviewed knowledgeable OJJDP officials, such as the Training Coordinator and communications policy personnel, about OJJDP's efforts to disseminate information about effective programs. We selected two of OJJDP's efforts through which it disseminates information about effective programs—the Model Programs Guide and the National Training and Technical Assistance Center (NTTAC), which provides training and support to the juvenile justice field in identifying and implementing effective programs—because they provide information about effective programs across the range of issue areas in which OJJDP is involved, including reentry and substance abuse programs. We then compared these efforts with guidance—articulated by the Office of Justice Programs (OJP), which oversees OJJDP, and in prior GAO reports—that stresses the importance of assessing whether the information disseminated is meeting the needs of its users. We also interviewed representatives from the two organizations that manage these two information dissemination efforts. Additionally, we asked the 26 juvenile reentry and substance abuse experts we interviewed about their views regarding OJJDP's information dissemination efforts and their opinions about the effectiveness of these efforts. Although their views cannot be generalized to the entire juvenile justice field, we believe that the experts provided us with a good overview of the utility of the information disseminated by OJJDP. 
We did not contact recipients of the information OJJDP disseminates for their views on the usefulness of the information provided because of the large volume of recipients and the resulting cost that would be incurred to obtain this input. To assess the extent to which OJJDP has plans in place for its research and evaluation efforts, we reviewed relevant laws related to the office’s role in supporting research and evaluations of juvenile justice programs. We also reviewed relevant DOJ and OJJDP documentation, such as annual reports and strategic plans that contain information on OJJDP’s research and evaluation goals and plans. We interviewed cognizant OJJDP officials about the office’s planning efforts related to research and evaluation. We also reviewed criteria found in standard practices for program management and our prior products that highlight the importance of developing plans to meet goals and help ensure that resources are used effectively, and then compared these criteria to OJJDP’s stated plans. Additionally, we analyzed OJJDP funding and staff data for fiscal years 2005 through 2009 to better understand the resources the office has had available to support its evaluation activities. We chose these years because they provide the most recent overview of OJJDP’s research and evaluation funding. We conducted this performance audit from April 2009 through December 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Within states’ juvenile justice systems, reentry aims to promote the effective reintegration of juvenile offenders back into communities upon release from residential facilities. 
Reentry is a process that incorporates a variety of programs to assist juvenile offenders in the transition from residential facilities to communities. In addition, reentry is intended to assist juvenile offenders in acquiring the life skills needed to succeed in the community and become law-abiding citizens and can incorporate the use of education, mental health, drug rehabilitation, or vocational training programs. While reentry itself begins after a juvenile is released back into the community, the reentry process, to help ensure a seamless transition, begins after sentencing, continues through incarceration, and extends into the period of release back into the community. According to OJJDP, juvenile justice practitioners and researchers believe that providing supervision and services to juvenile offenders returning to the community will reduce the high rate of recidivism among these juveniles. Several types of programs address juvenile reentry issues, as described in table 1. Substance abuse includes, but is not limited to, the use or abuse of illegal drugs (e.g., heroin), prescription drugs, and nonprescription drugs (e.g., over-the-counter medications available without a prescription, such as cough suppressants). Treatment of substance abuse may occur in a variety of different settings, such as in clinics on an outpatient basis or at a hospital. Treatment can also occur in short- and long-term residential facilities that range from secure environments, where juveniles' activities are physically restricted, to group homes, which are nonsecure settings where juveniles live and receive services in a homelike environment. According to the Center for Substance Abuse Treatment, juveniles with addictions to substances can be helped through programs that specifically target the factors associated with substance abuse—such as a family history of such abuse. 
For example, substance abuse intervention programs, such as cognitive behavioral therapy and family therapy, aim to change a juvenile’s behavior by focusing on improving a juvenile’s response to situations that contributed to prior substance abuse. Substance abuse intervention programs can be provided to juvenile offenders throughout the juvenile justice system: after sentencing, during incarceration, and after release back into the community. Whether treatment occurs while a juvenile is incarcerated or after the juvenile is released into the community, according to OJJDP, effective intervention programs can help addicted juveniles to overcome their substance abuse, lead crime-free lives, and become productive citizens. Table 2 describes types of programs—in addition to cognitive behavioral therapy and wraparound/case management, which are discussed in table 1—that address juvenile substance abuse issues. The Juvenile Justice and Delinquency Prevention Act (JJDPA) established OJJDP in 1974. As the only federal office charged exclusively with preventing and responding to juvenile delinquency and with helping states improve their juvenile justice systems, OJJDP supports its mission through a variety of activities. For example, OJJDP administers a wide variety of grants to states, territories, localities, and public and private organizations through formula, block, and discretionary grant programs; provides training and technical assistance; produces and distributes publications and other products containing information about juvenile justice topics; and funds research and evaluation efforts. In fiscal year 2009, the total appropriation for juvenile justice programs was about $374 million. See appendix III for more detailed information on OJJDP’s enacted appropriations for fiscal years 2007 through 2009. 
OJJDP, through its various grant programs, has provided funding to states and organizations to support juvenile reentry and substance abuse programs, although the JJDPA does not specifically require OJJDP to fund them. States generally have the authority to determine how formula and block grants are allocated and may use these funds to support a range of program areas, including programs specifically for reentry or substance abuse. For example, from fiscal years 2007 through 2008, OJJDP reported that states used approximately $7.1 million in applicable formula and block grant funds for programs that target reentry and $19 million in formula and block grant funds for programs that target substance abuse, representing approximately 1.8 percent and 4.5 percent, respectively, of such funding for those years. Additionally, from fiscal years 2007 through 2009, OJJDP awarded a total of approximately $33 million in discretionary grants through four juvenile reentry grant programs and three substance abuse grant programs. Specifically, in the area of reentry, OJJDP awarded a total of $25.4 million to 38 grantees under 4 programs, and in the area of substance abuse, OJJDP awarded a total of $7.6 million to 15 grantees under 3 programs. See appendix IV for more detail on funding for these reentry and substance abuse programs. Of the five reentry program types we reviewed, reentry experts reported that there is evidence from available research that cognitive behavioral therapy reduces recidivism. While experts cited a lack of evidence demonstrating that wraparound/case management, aftercare, and vocational/job training were effective in achieving results, such as a reduction in recidivism, they generally provided positive views on the potential results of these three types of programs, based on their own experience or knowledge of them. 
Similarly, of the five substance abuse program types we reviewed, juvenile substance abuse experts reported that there is evidence from available research that cognitive behavioral therapy and family therapy are effective at reducing recidivism and show successful results at reducing substance abuse. However, expert opinions regarding other substance abuse program types, such as drug courts, mentoring, and wraparound/case management, were mixed, with experts variously stating that these program types could be effective, that they were ineffective, or that there was not enough evidence to determine effectiveness. Furthermore, both reentry and substance abuse experts cited studies indicating that cognitive behavioral therapy and family therapy programs are cost beneficial; however, the experts cited limited evidence for determining the costs and benefits of the other programs we reviewed. Eleven of the 12 experts we interviewed who provided comments based on their knowledge and experience with cognitive behavioral therapy stated that evidence from available research shows that these programs can be effective at reducing recidivism. Cognitive behavioral therapy intervention programs are designed to identify and provide juveniles with the skills to change thoughts and behaviors that contribute to their problems. The underlying principle of these programs is that thoughts affect emotions, which then influence behaviors. These intervention programs combine two kinds of psychotherapy—cognitive therapy and behavioral therapy. The strategies of cognitive behavioral therapy have been used to, among other things, prevent the start of a problem behavior—such as violence and criminal activity—or stop the problem behavior from continuing. A juvenile offender can receive this type of intervention program after sentencing, throughout incarceration, or after returning to the community. 
For example, a cognitive behavioral therapy intervention program may provide individual and family services to treat a juvenile offender who has mental health and substance abuse issues. The treatment can occur during the juvenile’s transition from incarceration back into the community and help the juvenile lower the risk of recidivism, connect the family with appropriate community support, assist the juvenile in abstaining from drugs, and improve the mental health of the juvenile. Based on their assessment of the available research, these 11 experts stated that cognitive behavioral therapy programs have been shown to be effective. Experts identified two meta-analyses of cognitive behavioral therapy programs that demonstrated effectiveness. One such study concluded that effective cognitive behavioral therapy programs are characterized by the low proportion of juveniles who dropped out of the program, as well as the close monitoring of the quality of the treatment and adequate training for the providers. In addition, this same study also found that 12 months after treatment, the likelihood of a juvenile who received cognitive behavioral therapy not recidivating was about one and a half times greater than for a juvenile who did not receive the therapy. This study also reported that the effects of cognitive behavioral therapy were greater for offenders who had a higher risk of recidivism than those with a lower risk. Specifically, the best results, in terms of recidivism reductions, occurred when high-risk offenders received more intensive treatment that targeted criminal thinking patterns. A second study also reported that among therapeutic interventions, such as skill building, cognitive behavioral therapy was most effective at reducing recidivism. 
The 12th expert stated that the particular cognitive behavioral therapy intervention program he was using—aggression replacement training®— had not been evaluated at his particular program site, so he could not draw conclusions as to its effectiveness. Despite having generally positive views on the results of wraparound/case management, aftercare, and vocational/job training programs based on their experience or knowledge of these programs, reentry experts reported a lack of evaluations that show conclusive evidence about the effectiveness of these programs. Specifically, of the nine experts who provided comments on wraparound/case management programs, eight offered positive opinions about these programs. For example, two of these experts commented that wraparound/case management can be successful at reducing recidivism, depending on the quality and availability of services provided to juveniles. However, two of these eight experts also stated that there was a lack of evaluations demonstrating the effectiveness of wraparound/case management programs. One of these experts pointed us to a study on a specific wraparound/case management intervention program, Wraparound Milwaukee, that showed potentially promising results related to a reduction in recidivism rates for juvenile offenders. However, another expert cautioned that initial evaluations of wraparound/case management programs did not conclusively demonstrate the effectiveness of wraparound/case management programs. Finally, the ninth expert stated that in her experience, wraparound/case management interventions are not effective because, for example, juveniles are placed into these interventions based on the availability of program staff and resources rather than program services being tailored to the individual needs of each juvenile. 
In addition, 7 of the 15 experts who commented about aftercare programs opined that aftercare interventions are important reentry programs, in part, because they link a juvenile with the community and provide regular contact with a caseworker. However, 6 other experts said there was inconclusive evidence to determine whether these programs can be effective in achieving results. Three of these experts based their opinions on an evaluation of the Intensive Aftercare Program that showed inconclusive results about program effectiveness. Specifically, the study found no evidence that the program had its intended impact of reducing recidivism among juveniles who were released back into the community under supervision in the three states that piloted the program. However, the evaluation did find that the three states that implemented the Intensive Aftercare Program model successfully incorporated most of its core features, which prepared juveniles to transition back into the community. For instance, these states created new Intensive Aftercare Programs—specific treatment programs that, among other things, prepared juveniles for increased responsibility in the community, facilitated interaction with the community, and worked with the juveniles' schools and families. The state programs had a large percentage of juveniles involved in various treatment services. Despite the inconclusive results of the study, 1 expert credited the aftercare program model with addressing the issue of juveniles interacting with multiple probation officers throughout the entire reentry process because aftercare programs, in general, assign one probation officer to a juvenile as a consistent point of contact. The remaining 2 of 15 experts opined that aftercare intervention programs had not been shown to be effective at achieving desired results because, for example, the treatment a juvenile receives depends on the services available in the community. 
With respect to vocational/job training programs, 10 of the 11 reentry experts who commented on these programs expressed positive opinions about the programs’ potential outcomes but noted that there had been limited research conducted to demonstrate their effectiveness. Specifically, experts noted that vocational/job training programs could be beneficial if they were applied to older juveniles and if they led to those juveniles getting jobs. The remaining expert said there is little evidence to demonstrate the effectiveness of these intervention programs. For a more detailed description of reentry experts’ opinions about these program types, see appendix V. All of the 13 substance abuse experts we interviewed stated that based on available research, cognitive behavioral therapy effectively reduces recidivism and has demonstrated success at reducing substance abuse. Experts cited six studies to support their opinions, two of which were the same studies cited by reentry experts that demonstrate that cognitive behavioral therapy is effective at reducing recidivism. Two of the substance abuse experts noted that few studies have been conducted to determine whether an intervention program is effective at specifically reducing substance abuse. However, 3 experts also noted that within the last decade, newly emerging research has shown promising results with respect to cognitive behavioral therapy program types in reducing substance abuse. For example, these experts pointed us to three studies that report that juveniles who participated in these programs showed reductions in marijuana use. Twelve of the 13 substance abuse experts we interviewed who provided comments based on their knowledge and experience stated that family therapy programs are effective at reducing recidivism or decreasing substance use. 
Family therapy uses trained therapists to treat juvenile offenders with substance abuse problems by including families of juveniles in the treatment and focusing on improving communication and interactions among family members and improving overall relationships between juveniles and their families. This type of therapy focuses on the family as it is the primary and sometimes only source for emotional support, moral guidance, and self-esteem for juveniles. Family habits, such as failing to set clear expectations for children’s behavior, poor monitoring and supervision, and severe and inconsistent discipline can often lead to juveniles engaging in delinquency and substance abuse, according to OJJDP. For example, family drug use often results in adolescent drug use. Based on their assessment of the available research, these 12 experts provided positive opinions about the effectiveness of family therapy, and 7 of these experts cited 9 studies that support their opinions. These studies demonstrated, for example, that multisystemic therapy—a family therapy intervention program that helps parents identify strengths and develop natural support systems (e.g., extended family, neighbors, friends, and church members)—is an effective intervention program for reducing recidivism and substance use because, for example, juveniles who participated in multisystemic therapy programs engaged in significantly less criminal activity than did nonparticipants. Specifically, multisystemic therapy participants had fewer average convictions per year for violent crimes than those juveniles who did not participate in the program. Additionally, analyses of drug tests demonstrated significantly higher rates of drug abstinence for program participants than for nonparticipants. 
One study also showed that participants in functional family therapy, another family therapy intervention program, had 50 percent reductions in substance use as compared to juveniles who did not participate in the program. Of the 10 experts who commented on drug courts, 5 stated that there is a lack of evidence to determine program effectiveness, while another expert stated that drug courts are ineffective types of programs because they expose first-time offenders to more serious drug users. The remaining 4 experts stated that drug courts can be effective if, for example, they are combined with other effective intervention programs, such as multisystemic therapy. Similarly, of the 8 experts who commented on mentoring, 1 stated that there are too few evaluations to determine effectiveness, while 4 stated that mentoring programs alone are ineffective or unsuccessful at achieving desired results and that mentoring intervention programs are more effective at preventing at-risk juveniles from engaging in delinquent behavior. However, 3 experts thought mentoring intervention programs could be effective if the programs adhere to certain factors that have been evaluated and shown to be effective, such as the mentor being properly trained. Finally, experts also had mixed views on the effect of wraparound/case management types of programs. Of the 11 experts who commented on these programs, 7 experts stated that wraparound/case management is effective or can be effective if, for example, wraparound/case management is combined with another intervention program that has been evaluated and shown to be effective, such as cognitive behavioral therapy. Conversely, 4 experts either stated that these programs are ineffective because, for example, the intervention programs lack follow-through as there are no consequences if a juvenile does not show up for treatment, or there is not sufficient evidence to determine effectiveness. 
For a more detailed description of substance abuse experts’ opinions about these program types, see appendix VI. While program evaluations establish whether a program is effective in producing its intended results or effects—such as a reduction in recidivism—cost-benefit analyses use program evaluation to determine whether the dollar value of a program’s benefits exceeds the costs to deliver the program. For example, if a program evaluation shows that an intervention program reduces the number of offenses committed by juveniles from three to one, a cost-benefit analysis would first determine a dollar value for each of the offenses. Then, the cost-benefit analysis would estimate whether the savings of going from three offenses to one offense is more or less costly than the amount of money required to deliver the intervention program, as compared to an alternative program the juvenile would have received. The intervention may not always be more expensive than the alternative. For example, if the alternative is incarceration, the intervention program may be less expensive—meaning that the intervention program can be cost beneficial even if it does not result in a reduction of offenses. By applying the same cost-benefit analysis techniques to evaluations of different program types, decision makers can make comparisons among alternatives and determine which program types offer the greatest benefits for the least cost. The results of a cost-benefit analysis are often represented as a net benefit, meaning total benefits minus total cost. Of the 26 reentry and substance abuse experts we interviewed, 19 provided information related to the cost benefits of the reentry and substance abuse program types in our review. These 19 experts identified five cost-benefit analyses of juvenile justice programs consisting of four meta-analyses and one systematic review. 
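The net-benefit arithmetic described above can be sketched in a few lines of Python; the dollar values below are hypothetical illustrations chosen for the sketch, not figures from the cited analyses.

```python
# Illustrative sketch of the net-benefit logic described above.
# All dollar values are hypothetical, not figures from the report.

def net_benefit(offenses_avoided, value_per_offense,
                program_cost, alternative_cost):
    """Net benefit = dollar value of avoided offenses
    minus the program's incremental cost relative to the
    alternative the juvenile would otherwise have received."""
    benefits = offenses_avoided * value_per_offense
    incremental_cost = program_cost - alternative_cost
    return benefits - incremental_cost

# A program that cuts offenses from three to one avoids two offenses.
# At a hypothetical $20,000 per offense, benefits are $40,000;
# net benefit is 40,000 - (15,000 - 5,000) = 30,000.
print(net_benefit(offenses_avoided=2, value_per_offense=20_000,
                  program_cost=15_000, alternative_cost=5_000))

# If the alternative (e.g., incarceration) costs more than the
# program, the incremental cost is negative, so the program can be
# cost beneficial even with no reduction in offenses:
# net benefit is 0 - (15,000 - 60,000) = 45,000.
print(net_benefit(offenses_avoided=0, value_per_offense=20_000,
                  program_cost=15_000, alternative_cost=60_000))
```

The second call illustrates the report’s point that an intervention can be cost beneficial without reducing offenses at all, so long as it is cheaper than the alternative.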
The studies demonstrate that various cognitive behavioral therapy and family therapy intervention programs are cost beneficial because they are effective at reducing crime and are expected to produce more benefits than costs compared to the alternative. For example, in one study, the authors reviewed several program interventions that fall into the family therapy program type, such as multisystemic therapy and multidimensional treatment foster care. The authors analyzed three program evaluations of multidimensional treatment foster care and found that this intervention can be expected to reduce crime outcomes by 22 percent. Based on this reduction in crime, the authors of the study predict that the intervention provides about $80,000 worth of benefits per participant. This dollar value reflects the savings per participant that result from a decrease in criminal activity, including savings to crime victims, police and sheriff’s office costs, and juvenile detention costs, among others. The four studies cited by the experts show mixed or inconclusive results for drug courts, vocational/job training, and mentoring program types. For example, one study found that juvenile drug courts are cost beneficial because they are expected to have a net benefit of $4,622 per program participant. The other studies could not determine drug courts’ cost-effectiveness because they either did not include program evaluations of drug court programs or they found mixed results in the program evaluations analyzed and therefore could not determine the net benefits. In addition, two studies found that there are too few evaluations of vocational/job training or mentoring in juvenile justice programs to calculate if the benefits of these program types outweigh the costs. The remaining program types in our review—wraparound/case management, aftercare, and reentry courts—were not analyzed in these studies. Table 3 presents a summary of these studies. 
In addition, seven experts also commented on reentry and substance abuse programs that were not included in the cited studies. For example, three experts opined that wraparound/case management programs may eventually be proven to be cost beneficial, based on preliminary research and evaluations. For instance, one expert cited an unpublished study of a wraparound program pilot project that showed that recidivism of program participants was low, and that program costs were approximately 60 percent of the costs of incarcerating juveniles. Additionally, although experts did not cite cost-benefit analyses of aftercare program types, four reentry experts stated that such programs could be cost beneficial if the intervention program being delivered is effective because the cost of incarceration is so high. Three experts we interviewed stressed that even though some intervention programs that have been shown to be effective are expensive, if they reduce recidivism, they might be cost beneficial because of the high cost of incarcerating juveniles. Consistent with the JJDPA, OJJDP has several efforts under way to disseminate information about effective juvenile justice programs. Two of these efforts—NTTAC and the Model Programs Guide—provide information about effective programs for a range of juvenile justice issues, including reentry and substance abuse issues. Consistent with federal guidelines for ensuring the utility of information, OJJDP has established mechanisms to ensure that the information provided through its training and technical assistance efforts meets the needs of the juvenile justice field. However, OJJDP could better ensure the usefulness of the information it disseminates through the Model Programs Guide by having a mechanism in place to solicit regular feedback specifically related to the guide from the juvenile justice field. 
According to the JJDPA, OJJDP is authorized, but is not required, to provide information about juvenile justice issues and programs and to provide training and technical assistance to help the juvenile justice field implement and replicate such programs. In accordance with this authority and its mission to support states and communities in their efforts to develop and implement effective juvenile justice programs, OJJDP disseminates information related to these programs through a range of efforts, from those designed to meet the needs of the juvenile justice field as a whole to those that focus on effective programs in a specific issue area, such as gang prevention or girls’ delinquency. OJJDP distributes the broadest range of information on juvenile justice topics through the Juvenile Justice Clearinghouse (Clearinghouse). Through its services, the Clearinghouse offers, among other things, the latest research findings and statistics, publications on juvenile justice issues and programs, announcements of funding opportunities, and other resources prepared by a variety of researchers in juvenile justice. As part of its efforts, the Clearinghouse responds to requests for information about effective programs by directing users to OJJDP efforts that develop and disseminate information about effective programs, such as NTTAC and the Model Programs Guide. Thus, we focused on NTTAC and the Model Programs Guide because they provide information about effective programs across the range of issue areas in which OJJDP is involved, including reentry and substance abuse programs. OJJDP also disseminates information about effective programs in specific issue areas through various centers, such as the National Youth Gang Center and the Underage Drinking Enforcement Center. For a more detailed discussion of these centers and other information dissemination efforts that focus on specific issues, see appendix VII. 
NTTAC was established in 1995, in part to provide information about effective juvenile justice programs—such as programs that address issues related to reentry and substance abuse—through its training and technical assistance efforts. According to OJJDP, NTTAC works to promote the use of effective programs in the field through training and technical assistance programs. Additionally, NTTAC develops training materials and resources, and customizes the information included in its curricula in an effort to best meet the needs of its training and technical assistance recipients. In terms of its efforts specifically related to program effectiveness, NTTAC provides training and technical assistance for members of the juvenile justice field on how to develop and sustain effective programs, and to help the field understand programs that are effective for various juvenile populations, such as juveniles with mental health issues or female offenders. The Model Programs Guide is an online database that contains summary information about approximately 200 juvenile justice programs, from prevention programs to reentry programs. It is designed to help practitioners and communities identify and implement prevention and intervention programs that have been evaluated and have been shown to be effective. Programs in the Model Programs Guide may focus on a range of issues, including delinquency, violence, youth gang involvement, substance abuse, or academic issues, and can include, but are not limited to, delinquency prevention, community service, drug courts, or family therapy. To be included in the Model Programs Guide, programs are reviewed and rated along several dimensions, including such factors as whether an evaluation of the program established a causal association between the treatment and the outcome. Users can search the Model Programs Guide to find programs that meet their specific needs. 
For example, users can look for a program that has been shown to be effective for juveniles with substance abuse problems who are first-time offenders, or they can search for a program that has been shown to be effective for juveniles involved in gang activities who are reentering the community. In accordance with federal guidelines from OJP and prior GAO work, OJJDP has mechanisms in place to regularly conduct evaluations and is currently conducting a needs assessment to ensure the usefulness of the information provided by its training and technical assistance efforts. However, OJJDP could better ensure the utility of the information provided by the Model Programs Guide by establishing a mechanism to solicit regular feedback from the juvenile justice field. We have previously reported on the importance of regularly soliciting feedback to assess user needs and satisfaction. Specifically, we have reported that without feedback, an agency lacks valuable information from its users and is hindered in its ability to make improvements to information products that are relevant to users. Additionally, OJP has published Information Quality Guidelines for its bureaus, including OJJDP, that highlight the importance of ensuring the utility of information to be disseminated to the public by continuously monitoring information needs, among other things. OJJDP has mechanisms in place to regularly assess the usefulness of the information disseminated by NTTAC to ensure that it meets the needs of the juvenile justice field. Specifically, OJJDP has established an evaluation process for NTTAC that is designed to collect the data necessary to regularly assess the outcome and impact of the training and technical assistance NTTAC provides to improve the quality of the information it disseminates. Officials at NTTAC explained that after every training or technical assistance event, all participants are given an evaluation form to complete. 
This form is intended to capture feedback from participants about the quality of the event, as well as feedback about the referrals and resources NTTAC provides. Other evaluation forms are also available on NTTAC’s Web site so that users can provide feedback about NTTAC’s services, as well as feedback about the utility of the Web site. NTTAC then follows up with a sample of these respondents for more in-depth feedback. According to NTTAC officials, NTTAC analyzes the data collected from these forms and then provides them to OJJDP. These officials stated that OJJDP receives this information on at least a quarterly basis, and uses the information to make changes to existing curricula and guide future curriculum development, among other things. In accordance with OJP guidelines and prior GAO work that highlights the importance of assessing user needs, these evaluation efforts allow OJJDP to regularly monitor the usefulness of the information it disseminates in order to develop or modify its information products. In addition, OJJDP is conducting a needs assessment to solicit additional information about the utility of the information it disseminates through NTTAC’s training and technical assistance efforts. NTTAC is administering the needs assessment and, according to NTTAC officials, it is designed to determine the training and technical assistance that would be most helpful to the field. Specifically, the needs assessment is soliciting feedback from members of the juvenile justice field about OJJDP’s existing efforts. It is also requesting information regarding issues of interest to the field, any current training or technical assistance needs, and the specific challenges that the juvenile justice field is facing in its work. OJJDP officials stated that they intend to use the results of the needs assessment to influence the development of training and technical assistance activities and curricula and the content of national conferences and workshops. 
OJJDP’s efforts to conduct evaluations and a needs assessment are consistent with comments we received from our expert interviews. We asked all 26 of the juvenile reentry and substance abuse experts we interviewed to comment on OJJDP’s overall efforts to disseminate information about effective programs to the juvenile justice field. Thirteen experts provided responses, and while they did not comment specifically on NTTAC or the Model Programs Guide, they commented generally on the utility of the information OJJDP provides about effective programs. Ten of the 13 experts had negative opinions about the usefulness of the information OJJDP disseminates to members of the juvenile justice field. For example, 1 expert stated that practitioners often do not have the time to read research data disseminated by OJJDP, which prevents them from being able to effectively use it in their work. The expert added that it would be more useful if OJJDP disseminated information that was practical and could be applied in the field. In addition, 2 of these 10 experts suggested that it would be helpful for OJJDP to obtain feedback from members of the juvenile justice field about what types of information they would find useful. Thus, OJJDP’s needs assessment should help to address this concern. The remaining 3 experts who commented on OJJDP’s information dissemination efforts had generally positive opinions, stating that the information is useful to researchers and practitioners. With respect to the Model Programs Guide, although OJJDP has ad hoc mechanisms in place to solicit feedback about the information it provides, it does not solicit this feedback on a regular basis or use feedback to help ensure that the information disseminated by the Model Programs Guide is useful to the field, in accordance with federal guidelines. For example, the Model Programs Guide’s Program Director gives several presentations about the guide each year at juvenile justice conferences. 
Officials who operate the Model Programs Guide stated that following these presentations, they request verbal feedback from participants. Officials also stated that they regularly receive unsolicited feedback through the e-mail address that is listed on the Model Programs Guide’s Web site, which they respond to on a case-by-case basis. Additionally, officials said that they collect feedback about the Model Programs Guide through an annual e-mail survey that is sent to the program points of contact listed on the guide to obtain updated program information. Although these efforts to solicit feedback about the Model Programs Guide provide OJJDP with some information from its users, according to OJJDP officials, because the guide does not have a systematic feedback mechanism, information received cannot be analyzed on an aggregate level in order to regularly assess how the juvenile justice field views the utility of the information provided by the Model Programs Guide. Further, while the annual e-mail survey can help OJJDP confirm that the program information featured in the Model Programs Guide is accurate, it does not indicate whether the guide is useful to the field as a whole, since OJJDP sends the survey only to the portion of the juvenile justice field whose programs are already published in the guide; the comments it receives therefore do not necessarily reflect the opinions of the broader juvenile justice field. OJJDP officials agreed that they had not established a systematic mechanism to obtain feedback from the field regarding the usefulness of the Model Programs Guide and recognized that such a mechanism would be useful to have in place. Officials also stated that NTTAC’s needs assessment might be used as a model to build in more consistent mechanisms for feedback for the office’s broader efforts. 
Because NTTAC uses evaluations and is taking steps to conduct a needs assessment to monitor the information needs of the juvenile justice field, OJJDP is in a better position to help ensure that the information provided by NTTAC is useful to the juvenile justice field. Although there is a cost associated with gathering and analyzing feedback data, establishing a cost-effective mechanism to regularly solicit feedback about the Model Programs Guide should provide OJJDP with the information necessary to assess whether the information provided by this tool is useful to the juvenile justice field. OJJDP has articulated research and evaluation goals to support its mission of promoting effective programs and improving the juvenile justice system. According to OJJDP, one of its three main goals is to promote improvements in juvenile justice and facilitate the most effective allocation of resources by conducting research to understand how the juvenile justice system works in serving children and families. Under the JJDPA, OJJDP is required to publish an annual program plan that describes planned activities that are under accounts authorized for research and evaluation activities and that demonstrate promising initiatives, among other things. This plan is required to be published annually in the Federal Register for public comment, and is to describe the activities the Administrator intends to carry out under parts D and E, the appropriations accounts that in general are available for research and the development of new programs and initiatives, respectively. Specifically, according to the JJDPA, the Administrator must take into account the public comments received during the 45-day comment period and develop and publish a final plan before December 31 of each fiscal year, describing the particular activities that the Administrator intends to carry out under parts D and E. 
While OJJDP has not published an annual program plan since 2002, it issued a proposed plan in the Federal Register to solicit public comment in December 2009. OJJDP aims to publish the final version once public comments are incorporated, in accordance with the JJDPA’s requirements. Although the annual program plan is required to describe the particular activities the Administrator intends to carry out under parts D and E of the JJDPA, the proposed program plan includes the office’s priorities with respect to all discretionary funding, including its research and evaluation efforts. According to the Acting Administrator, this will, in part, provide complete transparency for all such funding. According to OJJDP, the development and publication of the annual program plan is a first step that will lead to a comprehensive evaluation plan as the annual program plan outlines the agency’s overall research and evaluation goals. Additionally, the Office of Management and Budget’s (OMB) fiscal year 2006 Program Assessment Rating Tool found that juvenile justice programs would benefit from evaluations of their effectiveness but noted that such evaluations are difficult and expensive to do. As a result, OMB recommended that OJP develop a comprehensive evaluation plan for juvenile justice programs to obtain better information about the programs’ impacts. Although OMB’s recommendation was directed at OJP, OJP and OJJDP officials stated that because OJJDP is the office within OJP required to conduct juvenile justice evaluations, it is that office’s responsibility to develop this evaluation plan. In addition to the above requirement and recommendation, federal guidelines stipulate the importance of developing a plan to achieve agency goals. As established in the standard practices for program management, specific goals of an agency must be conceptualized and defined in a plan. 
Specifically, this plan is to contain a description or road map of how the goals and objectives are to be achieved, including identifying the needed resources and target milestones or time frames for achieving desired results. We have also reported on the importance of planning research and evaluation efforts, in part to ensure that goals are met and resources are used effectively. OJJDP’s Research Coordinator stated that such a road map or plan for conducting research and evaluation would help better target the agency’s research and evaluation efforts toward achieving its goals. However, from 2006 to 2009, OJJDP had not developed such a plan, primarily because of resource constraints. According to this official, in lieu of having a comprehensive evaluation plan in place to guide its research and evaluation efforts, the office’s efforts are influenced by a number of factors, including whether Congress directs the agency to conduct research in a particular area or whether ideas are generated internally by staff or externally by members of the juvenile justice field. For example, OJJDP staff responsible for the mentoring area may generate ideas about how available research funds could be used, such as by evaluating a particular type of mentoring program. In addition, the office may receive recommendations from the Federal Advisory Council on Juvenile Justice or feedback from others in the juvenile justice field. While these factors have influenced OJJDP’s research and evaluation efforts, they have not provided a framework for helping the office meet its research and evaluation goals. Therefore, once the program plan is finalized, OJJDP intends to develop a comprehensive evaluation plan in accordance with OMB recommendations to provide direction and priorities for its research and evaluation efforts. 
According to the Acting Administrator, OJJDP intends to use this comprehensive evaluation plan to better align and target available discretionary funds toward achieving its research and evaluation goals. In addition to having a road map to help ensure it meets its goals, it is important for OJJDP to have a comprehensive plan that lays out how the office will evaluate its juvenile justice programs. Such a plan would help to ensure that its limited resources are being used effectively. This is important because OJJDP does not currently receive dedicated funding for research and instead must make trade-off decisions to balance funding to implement programs with funding to evaluate which programs are effective. The office has not received dedicated research funding since fiscal year 2005 when it received $10 million for its part D appropriations account—the appropriations account specifically available for research and evaluation efforts. Without part D funding, OJJDP has relied on funds it has set aside from its other appropriation accounts to fund its research and evaluation activities. Specifically, as shown in table 4, OJJDP is authorized by the appropriations act to set aside up to 10 percent of certain appropriations accounts for its research and evaluation efforts. In fiscal year 2008, the last year for which set-aside funding data are available, the appropriations act authorized OJJDP to set aside over $23 million for research and evaluation. However, according to OJJDP, the office set aside approximately $11 million. OJJDP officials stated that this was, in part, because the JJDPA requires and the agency wants to ensure that sufficient funds are available to the states for grant programs. In addition, officials explained that some of OJJDP’s accounts are transferred to other program offices, such as the Office of Community Oriented Policing Services, so research funds are not deducted from those accounts. 
Of the over $11 million that OJJDP did set aside, officials reported that the office used nearly $8 million (or 70 percent) for research and evaluation. Table 4 shows the amounts authorized to be set aside by the annual appropriations act, as well as the amounts actually set aside and used by OJJDP. Additionally, all of the set-asides from these four accounts must be used for research, evaluation, and statistics activities designed to benefit the juvenile justice issues that the accounts specify. For example, set-aside funds from the youth mentoring grant appropriation account must be used to research or evaluate mentoring programs. For other accounts, OJJDP can elect to fund research and evaluation efforts in a number of different areas. For example, under Juvenile Accountability Block Grants, OJJDP provides funds to states and units of local government to strengthen the juvenile justice system. The states can use these funds for 17 different purpose areas, including establishing programs to help the successful reentry of juvenile offenders from state and local custody in the community or for hiring staff or developing training programs for detention and corrections. Consequently, there are limits on the amount of funds OJJDP can divert to research and evaluation and on its discretion over how to use some of these funds. In fiscal year 2008, the appropriations act allowed OJJDP to set aside more than $23 million that could be dedicated to research and evaluation efforts on numerous eligible programs. Because OJJDP has to decide how to split set-aside funds between supporting state and local program implementation and program evaluation, in accordance with federal guidelines, a comprehensive evaluation plan that in part identifies its funding resources could help OJJDP make this determination. According to OJJDP, the office has spent several years considering developing a plan to provide a road map for how it would meet its research and evaluation goals. 
However, officials stated that it has been difficult to complete a comprehensive evaluation plan to fulfill OMB’s Program Assessment Rating Tool recommendation because they have not had the resources available—that is, funding and staffing—to develop the plan. Specifically, because funds have not been appropriated for part D since fiscal year 2005, OJJDP has not had a dedicated source of funding that could be used to develop a comprehensive evaluation plan or to fund the research identified by such a plan. Additionally, in 2003, OJJDP reorganized its divisions and, as part of this, dissolved its research division, as well as the training and information dissemination units. According to OJJDP, the intention of the former Administrator who implemented this reorganization was to better integrate these functions throughout the agency. OJJDP officials stated that those staff who were dedicated to research and evaluation work were reassigned to other divisions. Although some of these staff retained the research projects they had at the time, they also assumed new grant management duties. Also, over the past 8 fiscal years, OJJDP’s overall authorized staffing level has decreased from 95 to 76. Specifically, staff dedicated to research and evaluation decreased from 10 in fiscal year 2002 to 3.5 in fiscal year 2009. According to OJJDP officials, this reduction in research and evaluation staff has strained the staffing resources that could be used for developing a comprehensive evaluation plan. Although OJJDP cited funding and staffing constraints, the Acting Administrator has made developing a comprehensive evaluation plan a priority, and the office is committed to moving forward with the plan. Following through with its planning efforts will help OJJDP to meet its research and evaluation goals and better ensure that its resources are being used effectively, as stipulated by federal guidelines. 
As the juvenile justice field—including states and local communities—works to implement programs to lower juvenile recidivism rates and address juvenile substance abuse, it is important that the field has information about which programs have been shown to be effective through program evaluations. The importance of OJJDP’s goal to research and evaluate programs to reduce juvenile delinquency underscores the need for a comprehensive plan to evaluate juvenile justice programs, one that identifies resources to be committed to its research and evaluation efforts and outlines the details of how OJJDP will accomplish its research and evaluation goals. OJJDP’s efforts to publish a fiscal year 2010 program plan in December are positive steps in developing the comprehensive evaluation plan that officials have said they are committed to developing. Having such a plan will provide OJJDP with a road map to help ensure that it meets its research and evaluation goals, uses its limited resources effectively, and contributes to identifying effective programs to help support states and localities. With respect to OJJDP’s efforts to disseminate information about effective programs, NTTAC’s efforts to regularly assess the needs for the information it is disseminating through training and technical assistance are important to helping OJJDP assess the utility of its efforts and make appropriate improvements. We also recognize that OJJDP’s efforts to conduct a needs assessment could help provide important information to NTTAC that can be used in conjunction with its evaluation efforts. Consistent with federal guidelines from OJP and prior GAO reports, assessing the utility of the information disseminated through OJJDP’s Model Programs Guide is also critical to ensuring that such information meets the needs of the juvenile justice field so the field can better implement effective programs. 
Having a mechanism in place to regularly solicit feedback from the field about the usefulness of the Model Programs Guide would better position OJJDP to assess whether the information it is disseminating through the guide on effective programs regularly meets the needs of its users. To help ensure that OJJDP’s Model Programs Guide is regularly meeting user needs and providing the most helpful information on effective programs, consistent with federal guidelines, we recommend the Administrator of OJJDP develop a cost-effective mechanism for regularly soliciting and incorporating feedback from the juvenile justice field on the usefulness of the information provided in its Model Programs Guide. We provided a copy of this report to the Attorney General for review and comment. On December 3, 2009, OJP provided written comments, which are reprinted in appendix VIII. OJP stated that it agreed with our recommendation and intends to develop a mechanism for regularly soliciting and incorporating feedback from the juvenile justice field on the usefulness of the information provided in its Model Programs Guide by March 31, 2010. OJP also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Attorney General, selected congressional committees, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Eileen Larence at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IX. 
For the purposes of our review, we selected a total of 26 experts to interview—13 of whom had expertise related to juvenile reentry programs, 7 of whom had expertise related to juvenile substance abuse programs, and 6 of whom had both juvenile reentry and substance abuse program expertise. See table 5 for a list of these experts. Of the 26 reentry and substance abuse experts we interviewed, 22 experts—as well as research we reviewed—identified several factors that can help programs achieve intended outcomes, that is, be effective. The following factors, while not an exhaustive list of items for programs to consider when implementing juvenile justice intervention programs, were the most frequently cited by the experts we interviewed: maintaining fidelity to the program; selecting, training, and retaining qualified providers; conducting needs-based assessments to provide individualized treatment; and improving juvenile program participation by engaging and motivating juvenile and family involvement. While incorporating these factors into reentry or substance abuse programs does not guarantee that any particular intervention program will be successful, existing programs that have been evaluated and found to be effective have generally included these factors in their designs or implementation. According to 17 of the 22 experts, maintaining fidelity to the program as it was intended to be implemented can help programs achieve their intended objectives. This factor focuses on ensuring that core program services or intervention components are delivered as they were designed, that is, with fidelity. For example, for cognitive behavioral therapy, this would mean delivering core intervention components, such as cognitive and social skills training, to each participant exactly as they were designed. According to one expert’s research, the degree to which an intervention program is delivered with fidelity is closely related to its effects on recidivism. 
Another expert concurred, stating that the more closely core program services or intervention components are implemented as they were designed, the more the intervention program will reduce recidivism rates. For example, one expert emphasized the importance of maintaining fidelity to the program when replicating the model within a specific community. Similarly, another expert explained that some therapists tend to substitute their own preferred treatment techniques instead of using those prescribed by the intervention program, which can affect how effective a program is at reducing recidivism. This is particularly true if the intervention program being delivered is a program that has been evaluated and found to be effective. Furthermore, one of these experts stated that the specific model chosen has less of an effect on intended outcomes than the manner in which it is delivered. As another expert explained it, a weaker intervention—one that has not been evaluated and proven to be effective—may result in decreased recidivism rates, for example, if it is implemented as designed, while an effective intervention program that is implemented poorly may have little or no effect on intended outcomes. According to 19 of the 22 experts, selecting, training, and retaining qualified providers can help intervention programs achieve intended outcomes. For example, the quality of the services delivered through cognitive behavioral therapy depends, in part, on the provider’s ability and on whether the provider has been trained on the specific therapies and components of the intervention program. Three of these experts noted that if providers are not appropriately trained in the therapy or intervention being implemented, they may not provide the program as it was intended, or as one of them noted, may substitute their own preferred treatment techniques for those prescribed by the intervention program. 
As a result, the providers’ failure to deliver the intervention program as it was designed reduces the ability of the program to achieve intended outcomes. Furthermore, many intervention programs use providers who have certain educational or clinical experience, such as having a background in mental health or being a licensed practitioner for the specific therapy being implemented. One of the 19 experts we interviewed also mentioned the importance of gaining the support of the juvenile justice community, as well as agencies’ program management, in the selection and training of providers. According to 18 of the 22 experts, by assessing a juvenile’s specific treatment needs, program providers can better design intervention programs that will be targeted to a juvenile’s individual situation. For example, 4 experts noted that this can help intervention programs achieve intended outcomes because individualized treatment is more likely to affect participants’ individual outcomes since it takes into account differences such as age, gender, culture, environment, and problem severity. One expert noted that individualized treatment ensures that juveniles do not receive unnecessary treatment, which in some instances may produce harmful results. According to this expert, providing some programs, such as cognitive behavioral therapy, to juveniles who do not have substance abuse problems may lead to harmful results because these juveniles are exposed to others who have more serious addictions. Four experts also noted that using needs-based assessments to develop individualized treatment plans can be more cost beneficial than using standard treatment plans. Specifically, as one of these experts noted, this is because individualized treatment plans can help ensure that costly interventions are not provided to juveniles who do not need extensive services. 
In addition, 5 experts stated that conducting a risk-based assessment is important to determining which juveniles are at higher risk of reoffending in order to focus programming efforts on them. One of these experts cited a study showing that targeting the specific treatment needs of offenders is correlated with recidivism outcomes; that is, treatment that targets offenders’ specific needs is generally related to lower recidivism. According to 16 of the 22 experts, engaging and motivating juvenile and family involvement can help to improve a juvenile’s program participation, thereby helping intervention programs to achieve intended outcomes. For example, 1 expert noted that successful programs rely on staff members to gain the trust of juvenile offenders. These programs also recognize that juveniles may experience program fatigue because they are participating in numerous programs and that motivation may become an issue. In addition, this expert noted that after being released into the community, juveniles and their families may not be motivated to participate in intervention programs. Additionally, research has shown that encouraging families to participate in the juvenile’s treatment program can reduce family risk factors for delinquency. Eleven experts also mentioned that motivating juvenile offenders and their families to participate can assist juveniles in successfully completing an intervention program. Two of these experts noted that by involving family members in treatment, some issues that may contribute to juvenile dropout rates, such as a history of traumatic stress and family members who also abuse substances, can be addressed within an intervention program. 
To prevent and respond to juvenile delinquency and help states improve their juvenile justice systems, the Office of Juvenile Justice and Delinquency Prevention (OJJDP) administers a wide variety of grants to states, territories, localities, and public and private organizations through formula, block, and discretionary grant programs. The office also provides training and technical assistance, produces and distributes publications and other products containing information about juvenile justice topics, and funds research and evaluation efforts. Table 6 shows funding by fiscal year from 2007 through 2009 for the appropriation accounts for juvenile justice programs. From fiscal years 2007 through 2009, OJJDP allocated approximately $33 million through discretionary grants to four juvenile reentry grant programs and three juvenile substance abuse programs. See table 7 for a description of these reentry and substance abuse programs and the amounts OJJDP awarded to grantees. Of the 19 reentry experts we interviewed, 9 had specific experience or knowledge related to wraparound/case management and 8 experts had positive comments about the effectiveness of this program. In general, wraparound/case management interventions involve making an array of individualized services and support networks available to juveniles, rather than requiring them to enroll in treatment programs that may not address individual needs. According to OJJDP, the goal of wraparound/case management programs is to keep delinquent juveniles at home and out of institutions whenever possible. 
The basic elements that constitute a wraparound program include, among other things, (1) a collaborative, community-based interagency team responsible for designing, implementing, and overseeing the intervention program in a given jurisdiction; (2) care coordinators who are responsible for helping juveniles create customized treatment programs, among other things; (3) juvenile and family teams consisting of family members and community members who work together to ensure the juvenile’s needs are met at home, at school, and in the community; and (4) a plan of care developed and updated by all members of the juvenile and family teams that identifies the juvenile’s strengths and weaknesses, targets specific goals such as improved performance in school, and outlines how to achieve them. Of these nine experts, eight provided positive opinions of the results of wraparound/case management intervention programs. For example, an expert commented about how in one specific wraparound intervention program, a single case manager is assigned to a juvenile and is responsible for determining the services the juvenile is to receive based on his or her specific needs, instead of enrolling the juvenile into a treatment program that may not be as beneficial for the juvenile. Two of these eight experts noted that there was a lack of evaluations demonstrating effectiveness of these intervention programs but pointed us to a study on a specific wraparound/case management intervention program, Wraparound Milwaukee, that showed potentially promising results related to a reduction in recidivism rates for juvenile offenders. However, these experts stressed that this study alone did not conclusively demonstrate the effectiveness of wraparound/case management programs. 
The ninth expert stated that in her experience, wraparound/case management interventions are not effective because, for example, a juvenile is placed into this intervention program based on the availability of program staff and resources rather than on program services being tailored to the individual needs of the juvenile. Additionally, three of the nine experts cautioned that these intervention programs are difficult to implement because of such issues as a lack of quality services or low retention of juveniles and their families in the intervention being provided. Specifically, one of these experts noted that the quality of wraparound services can vary depending on a community’s resources. In addition, another expert emphasized the importance of obtaining buy-in from diverse service providers who may be used to working on their own, such as within the welfare, foster care, and public school systems. Of the 19 reentry experts we interviewed, 15 had specific experience or knowledge related to aftercare programs and 6 cited a lack of conclusive evidence of effectiveness of the program type. Aftercare intervention programs are intended to prepare juvenile offenders to return to the community during the reentry process by focusing on the delivery of services and supervision that start while juveniles are incarcerated and continue after they return to their communities. Specifically, aftercare programs collaborate with the community and marshal its resources to help ensure that juvenile offenders receive services that address their individual needs, such as treatment for a substance abuse problem. These intervention programs focus on changing individual behavior, thereby preventing further delinquency. For example, an aftercare program might incorporate the use of techniques from an intervention therapy, such as motivational enhancement therapy, to engage juvenile offenders in treatment and increase their commitment to change. 
Of these 15 experts, 7 offered positive opinions regarding aftercare intervention programs, based on their own experience or knowledge of the intervention programs. For example, 1 expert noted that if aftercare included intervention programs that were proven to be effective, used assessment tools that identified the individual needs of the juvenile, and implemented the therapies as they were designed, then the aftercare intervention program should be effective at reducing recidivism rates. Although these experts could not provide examples of studies that had been conducted to show evidence of the effectiveness of the intervention programs, all 7 of them agreed based on their own experience or knowledge that aftercare interventions are important reentry programs, in part, because they link the juvenile with his or her community and provide regular contact with a caseworker. Additionally, 3 of these 7 experts stated that aftercare could be effective depending on the intervention programs used and if they were delivered as intended. For example, they said that if aftercare includes intervention programs that have proven to be effective, such as cognitive behavioral therapy, and identifies the individual needs of the juvenile, the programs can reduce recidivism. However, 6 of the 15 reentry experts said there was inconclusive evidence to determine whether these programs can be effective in achieving results. Three of these experts based their opinions on an evaluation of the Intensive Aftercare Program that showed inconclusive results about program effectiveness. Specifically, the study found no evidence that the program had its intended impact of reducing recidivism among juveniles who were released back into the community under supervision in the three states that piloted the program. 
However, the evaluation did find that the three states that implemented the Intensive Aftercare Program model did successfully incorporate most of its core features, which prepared juveniles to transition back into the community. For instance, these states created new Intensive Aftercare Program–specific treatment programs that, among other things, prepared juveniles for increased responsibility in the community, facilitated interaction with the community, and worked with the juveniles’ schools and families. The state programs had a large percentage of juveniles involved in various treatment services. Despite the inconclusive results of the study, one expert credited aftercare programs with addressing the issue of juveniles having to deal with different probation officers throughout the reentry process because, in general, aftercare programs assign one probation officer to a juvenile as a consistent point of contact. The evaluation also stated that in order for the general aftercare model to be effective, it must not only provide supervision and services after a juvenile’s release into the community, but also focus on preparing a juvenile for release. The remaining 2 experts opined that aftercare intervention programs had not been shown to be effective at achieving desired results because, for example, the treatment a juvenile receives depends on what services are actually available in the community. Of the 19 reentry experts we interviewed, 11 had specific experience or knowledge related to vocational or job training programs and indicated potential positive outcomes for these programs. According to OJJDP, providing juveniles with employment opportunities during reentry is a common strategy used to try to reduce future criminal behavior. Vocational or job training intervention programs are intended to improve juveniles’ social and educational functioning by, for example, increasing earnings, raising self-esteem, and instilling a positive work ethic. 
Juveniles can participate in vocational/job training intervention programs while they are incarcerated and after they return to the community. Of the 11 reentry experts, 10 had positive comments based on their experience or knowledge of the program type. Specifically, they said that vocational/job training programs were potentially beneficial, in part, if they were applied to older juveniles and if they led to those juveniles getting jobs. The remaining expert said that there is little evidence to demonstrate the effectiveness of these intervention programs. Reentry courts are specialized courts that manage the return of juvenile offenders to the community after they are released from residential facilities. The court manages reentry by using its authority to direct resources to support the offender’s return to the community and promote positive behavior, among other things. For example, a reentry court would oversee a juvenile’s release into the community by assigning a judge to meet with the juvenile once a month. The judge would actively engage the supervising authority, such as a parole officer, in assessing the juvenile’s progress. The judge would also oversee sanctions for violations, as well as rewards, like early release from parole, for successful achievement of goals, such as successfully completing a cognitive behavioral therapy intervention program. Of the 19 reentry experts we interviewed, 2 provided comments related to reentry courts and had differing opinions on their effectiveness. One had a negative impression of the courts, stating that the reentry courts do not provide more to a juvenile than a probation officer would. The other commented that he considers concepts encompassed in reentry courts, such as intensity of supervision, to be a best practice when it comes to reentry programs. However, neither was aware of any evaluations of these types of courts. 
Of the 13 substance abuse experts we interviewed, 10 had specific experience or knowledge related to drug courts that resulted in mixed views of the effectiveness of this program type. Juvenile drug courts are specialized courts established within and supervised by juvenile courts to provide intervention programs, such as cognitive behavioral therapy or family therapy, for substance-abusing juveniles and their families. Juvenile offenders assigned to drug courts are identified by a juvenile court as having problems with alcohol or drugs. The drug court judge maintains close oversight of each case through frequent—often weekly—status hearings with the individuals involved. The judge both leads and works as a member of a team that can comprise representatives from juvenile justice, social services, school and vocational training programs, law enforcement, probation, the prosecution, and the defense. Together, the team determines how best to address the substance abuse and related problems of the juvenile and his or her family. Specifically, of these 10 experts, 5 experts described drug courts as having insufficient evidence to determine program effectiveness. For example, 2 experts mentioned that while some studies show drug courts reducing substance abuse while juveniles were under court supervision, the results did not last after juveniles were no longer being supervised by the courts. Another expert stated that since drug courts tend to be used for juveniles who have their first or second contact with the juvenile justice system, they are ineffective at achieving desired results because they expose these first-time offenders to peers who have more serious substance abuse addictions and therefore might influence them to continue to abuse substances. 
By contrast, the remaining 4 experts stated that drug courts can be effective at achieving desired results such as reducing substance abuse if, for example, the juvenile is sent to a community where there are intervention programs offered that have been evaluated and have been shown to be effective, such as cognitive behavioral therapy or family therapy intervention programs. One expert cited a study to support the opinion that drug courts supplemented with multisystemic therapy resulted in a decrease in substance abuse by juvenile offenders. Of the 13 substance abuse experts we interviewed, 8 had specific experience or knowledge related to mentoring intervention programs that resulted in mixed views of their effectiveness. Mentoring programs consist of a relationship between two or more people over a prolonged period of time, where an older, more experienced individual provides support and guidance to a juvenile. The goal of mentoring is for the juvenile to develop positive adult contact, thereby reducing risk factors, such as exposure to juveniles who use substances, while increasing positive factors, such as encouragement for abstaining from substance use. In the substance abuse field, juveniles in need of sobriety are teamed with older sponsors to serve as positive role models in helping them become sober. Of these eight experts, four stated that mentoring programs are ineffective or unsuccessful at achieving desired results, such as reducing substance abuse, and that these intervention programs are more effective at preventing at-risk juveniles from engaging in delinquent behavior. Also, one expert stated that there have been too few evaluations conducted on mentoring programs to make a general statement about the relative benefits of mentoring. 
Conversely, three experts stated that mentoring programs are effective or can be effective if, for example, mentors are trained or if mentoring is combined with another intervention program that has been evaluated and has been shown to be effective, such as multisystemic therapy. Of the 13 substance abuse experts we interviewed, 11 had specific experience or knowledge related to wraparound/case management intervention programs that resulted in mixed views of the program type. Of these 11 experts, 7 stated that wraparound/case management is effective or can be effective if, for example, it is combined with another intervention program that has been evaluated and has been shown to be effective, such as multisystemic therapy. Although these experts had limited evidence demonstrating the effectiveness of wraparound/case management, 2 experts cited two studies that show potentially promising results related to a reduction in recidivism. For example, one study showed that juveniles in wraparound/case management receive a number of individualized services, such as mental health treatment for those juveniles who struggle with emotional issues. However, this study stressed that it is difficult to evaluate wraparound/case management in a controlled way since treatment plans are individualized for each juvenile. The other 4 experts stated either that wraparound/case management intervention programs are ineffective because, for example, the programs lack follow-through, with no consequences if a juvenile does not show up for treatment, or that there is not yet sufficient evidence to determine their effectiveness. In addition to the National Training and Technical Assistance Center and the Model Programs Guide, OJJDP disseminates information about effective programs through a variety of other efforts. 
Specifically, the office has developed mechanisms to disseminate information related to effective programs in specific issue areas, such as youth gang activity, disproportionate minority contact, and girls’ delinquency, as described in table 8. In addition to the contact named above, Mary Catherine Hult, Assistant Director; David Alexander; Elizabeth Blair; Ben Bolitzer; Carissa Bryant; Katherine Davis; Sean DeBlieck; Allyson Goldstein; Rebecca Guerrero; Jared Hermalin; Dawn Locke; Lisa Shibata; Janet Temko; and Delia Zee made key contributions to this report.
State juvenile justice systems face critical problems when it comes to juvenile delinquency issues such as reentry--when offenders return home from incarceration--and substance abuse. GAO was asked to review juvenile reentry and substance abuse program research and efforts by the Department of Justice's (DOJ) Office of Juvenile Justice and Delinquency Prevention (OJJDP) to provide information on effective programs (i.e., whether a program achieves its intended goal) and cost-beneficial programs (i.e., whether the benefits of programs exceed their costs). This report addresses (1) expert opinion and available research on these types of reentry and substance abuse programs, (2) the extent to which OJJDP assesses its efforts to disseminate information on effective programs, and (3) OJJDP's plans to accomplish its research and evaluation goals. GAO, among other things, reviewed academic literature and OJJDP's dissemination efforts and research goals. GAO also interviewed OJJDP officials and a nonprobability sample of 26 juvenile justice experts selected based on their experience with juvenile reentry and substance abuse issues. The majority of the juvenile justice reentry and substance abuse experts GAO interviewed cited evidence showing that cognitive behavioral therapy--programs that help individuals change their beliefs in order to change their behavior--and family therapy--programs that treat juveniles by focusing on improving communication with family members--are effective and cost beneficial when addressing reentry and substance abuse issues. For example, two juvenile reentry experts cited studies showing that 1 year after participating in a cognitive behavioral therapy program, participants were less likely to commit another offense than nonparticipants. 
Additionally, experts cited a study that reported that a family therapy program provides about $80,000 in savings per participant when accounting for savings from a decline in crime, such as the cost the police would have incurred. Most experts indicated that there was limited evidence on the effectiveness and cost benefits of reentry programs, such as aftercare--programs that assist juvenile offenders in returning to their communities during the reentry process--and substance abuse programs, such as drug courts--specialized courts that provide programs for substance-abusing juveniles and their families. GAO reviewed two OJJDP efforts that provide information on effective programs across the range of juvenile justice issues, the National Training and Technical Assistance Center (NTTAC) and the Model Programs Guide. OJJDP has mechanisms in place to regularly assess the utility of the information provided by NTTAC, but does not have such a mechanism for the guide. OJJDP ensures the utility of NTTAC's information through evaluations in accordance with federal guidelines that highlight the importance of regularly soliciting feedback from users. However, OJJDP could better ensure the utility of the information disseminated by the Model Programs Guide by having a mechanism in place to solicit regular feedback from members of the juvenile justice field--for example, program practitioners--that is specifically related to the guide. OJJDP has articulated research and evaluation goals to support its mission of improving the juvenile justice system and is developing plans to assist in meeting these goals. OJJDP is required under the Juvenile Justice and Delinquency Prevention Act, as amended, to publish an annual program plan that describes planned activities under accounts authorized for research and evaluation activities, among other things. 
Additionally, the Office of Management and Budget (OMB) recommended that OJJDP develop a comprehensive evaluation plan for juvenile justice programs. While OJJDP has not published an annual program plan since 2002, in December 2009 it issued a proposed plan for public comment and aims to publish the final program plan once public comments are incorporated. Additionally, although the office has considered developing a comprehensive evaluation plan to address the OMB recommendation, it had not previously done so because of a lack of resources. However, OJJDP is committed to developing a comprehensive evaluation plan once the program plan is finalized.
The U.S. bank regulatory structure is composed of several agencies at both the federal and state levels. The specific regulatory structure for a depository institution is determined by the type of charter the institution chooses. Depository institution charter types include commercial banks; S&Ls and savings banks; ILCs, also known as industrial banks; and credit unions. These charters can be obtained at the state and federal level, except for ILC charters, which are available only at the state level. State regulators help regulate the institutions they charter, but every institution that offers federal deposit insurance has a primary federal regulator (see table 1). To achieve their safety and soundness goals, bank regulators establish capital requirements, conduct onsite examinations and off-site monitoring to assess a bank’s financial condition, and monitor compliance with banking laws. Regulators also issue regulations, take enforcement actions, and close banks they determine to be insolvent. The BHC Act, as amended, contains a comprehensive framework for the supervision of bank holding companies and their nonbank subsidiaries. Bank holding companies are companies that own or control a bank, as defined in the BHC Act. Generally, any company that acquires control of an insured bank or bank holding company is required to register with the Federal Reserve as a bank holding company. The BHC Act defines “control” of an insured bank to include ownership or control of blocks of stock, the ability to elect a majority of the board of directors, or other management prerogatives. Regulation under the BHC Act entails, among other things, consolidated supervision of the holding company by the Federal Reserve and, as previously discussed, restricts the activities of the holding company and its affiliates to those that are closely related to banking or, for qualified financial holding companies, activities that are financial in nature. 
In 1999, the Gramm-Leach-Bliley Act (GLBA) provided that a bank holding company may elect to become a financial holding company that can engage in a broader range of activities that the Federal Reserve determines to be financial in nature or incidental to such financial activity. For example, financial holding companies can engage in securities underwriting and dealing but would be prohibited from selling unrelated products. The Home Owners’ Loan Act (HOLA), as amended, sets forth the regulatory framework for S&L holding companies. S&Ls are often part of holding company structures. Like bank holding companies, S&L holding companies are subject to restrictions on the activities they conduct. HOLA permits S&L holding companies to conduct activities that the Federal Reserve Board has determined to be closely related to banking and activities permissible for financial holding companies. With the abolishment of OTS, the Federal Reserve is now the regulator for these holding companies. The Dodd-Frank Act made significant changes to the regulatory framework for S&L holding companies. The Dodd-Frank Act amends HOLA and the BHC Act to create similar requirements for both bank holding companies and S&L holding companies. For example, the Dodd-Frank Act amended both the BHC Act and HOLA to provide that the Federal Reserve Board has authority to impose capital requirements on depository institution holding companies by regulation or order, including bank holding companies and S&L holding companies. Before GLBA, commercial companies could own a single S&L without becoming subject to the activities restrictions that apply to S&L holding companies, and a number of commercial firms—such as General Electric; Macy’s, Inc.; and Nordstrom, Inc.—acquired S&Ls. 
While GLBA prohibited commercial activities for all S&L holding companies, it “grandfathered” the companies that already owned an S&L subsidiary—that is, it allowed these companies to keep the existing S&L and engage in commercial activities. The Dodd-Frank Act generally does not restrict the activities of grandfathered unitary S&L holding companies, but it amends HOLA to authorize the Federal Reserve to determine whether to require grandfathered unitary S&L holding companies engaged in nonfinancial activities to form intermediate holding companies. A grandfathered unitary S&L holding company will be required to establish an intermediate holding company if the Federal Reserve determines that the establishment of the intermediate holding company is necessary to appropriately supervise activities determined to be financial activities or to ensure that supervision by the Federal Reserve does not extend to the grandfathered unitary S&L holding company’s nonfinancial activities. The intermediate holding company would be subject to regulation as an S&L holding company and would be required to conduct all or a portion of the firm’s financial activities. The grandfathered unitary S&L holding company would be required to serve as a source of strength—that is, to provide financial assistance in the event of financial distress—to its subsidiary intermediate holding company. The Federal Reserve can also require certain reports from and undertake limited examinations of grandfathered unitary S&L holding companies. In addition, the Dodd-Frank Act requires the Federal Reserve to require all bank holding companies and S&L holding companies to serve as a source of strength to their subsidiary depository institutions. 
The Federal Reserve regulations governing S&L holding companies state that an S&L holding company “shall serve as a source of financial and managerial strength to its subsidiary savings associations.” The Dodd-Frank Act defines the term “source of strength” as the ability of a company that directly or indirectly owns or controls an insured depository institution to provide financial assistance in the event of financial distress of the insured institution. If an insured depository institution is not the subsidiary of a bank holding company or an S&L holding company, the appropriate federal regulator for the insured depository institution will require any company that directly or indirectly controls the insured depository institution to serve as a source of financial strength to the insured depository institution. The Dodd-Frank Act also made significant changes in the capital requirements applicable to certain bank holding companies and S&L holding companies. Depository institution holding companies will be subject to minimum leverage and risk-based capital requirements on a consolidated basis. These capital requirements must not be lower than the leverage and risk-based capital requirements applicable to insured depository institutions as in effect on July 21, 2010. In general, the new capital requirements will apply to S&L holding companies beginning July 21, 2015. The Federal Reserve’s bank holding company supervision manual explains that the holding company structure can adversely affect the financial condition of a bank subsidiary by exposing the bank to various types of risk, including market, operational, and reputational risks. For example, a holding company or an affiliate with poor risk management procedures may take excessive investment risks and fail. The failure of a holding company or affiliate can impair an insured institution’s access to financial markets. 
Moreover, a poorly managed bank holding company can initiate adverse intercompany transactions with the insured depository institution or impose excessive dividends on it. Adverse intercompany transactions may include charging the insured depository institution above-market prices for products or services, such as information technology services, provided by an affiliate or requiring the insured institution to purchase poor quality loans at inflated prices from an affiliate, thus placing the insured institution at greater risk of loss. Market risk is the risk to a banking organization’s financial condition resulting from adverse movements in market prices due to such factors as changing interest rates. Operational risk is the potential that inadequate information systems, operations problems, breaches in internal controls, or fraud will result in unexpected losses. From a practical standpoint, insured depository institutions may be susceptible to operational risk when they are dependent on or share in the products or services of a holding company or its subsidiaries, such as information technology services or credit card account servicing. If these entities ceased their operations, the insured institution could be adversely impacted. Reputational risk is the potential that negative publicity regarding an institution’s or affiliate’s business practices, whether true or not, could cause a decline in the customer base, costly litigation, or revenue reductions. Operational or reputational risk that impacts the holding company can also affect affiliates throughout the corporate structure. The BHC Act has established a consolidated supervisory framework for assessing the risks to a depository institution that could arise because of its affiliation with other entities in a holding company structure. 
Consolidated supervision of a bank holding company includes the parent company and its subsidiaries and allows the regulator to understand the organization’s structure, activities, resources, and risks and to address financial, managerial, operational, or other deficiencies before they pose a danger to the bank holding company’s subsidiary depository institutions. According to Federal Reserve Board Supervisory Letter SR 08-9, the agency has established capital standards for bank holding companies, helping to ensure that they maintain adequate capital to support groupwide activities, do not become excessively leveraged, and are able to serve as a source of strength to their depository institution subsidiaries. The Federal Reserve may generally examine holding companies and their nonbank subsidiaries, subject to some limitations, to assess the nature of the operations and financial condition of the holding company and its subsidiaries, the financial and operational risks within the holding company that may pose a threat to the safety and soundness of any depository institution subsidiary, and the systems for monitoring and controlling such risks, among other things. As the new regulator for S&L holding companies, the Federal Reserve has indicated that it intends, to the greatest extent possible, taking into account any unique characteristics of S&L holding companies and the requirements of HOLA, to assess the condition, performance, and activities of S&L holding companies on a consolidated basis in a manner that is consistent with the Board’s established risk-based approach regarding bank holding company supervision. In contrast, FDIC and OCC do not have consolidated supervisory authority over the holding companies for the exempt banking institutions but do have full authority to apply to them the same federal regulatory safeguards that apply to all insured banks and S&Ls. 
For example, FDIC and OCC can impose conditions and examine agreements, dependencies, and transactions between exempted depository institutions and their holding companies (including affiliated entities) in order to better ensure the safety and soundness of those institutions. Furthermore, FDIC can terminate an exempted entity’s deposit insurance, enter into agreements during the acquisition of an insured entity, and take enforcement measures. In addition, FDIC possesses authority under Section 10 of the FDI Act to examine the affairs of any affiliate of any depository institution as may be necessary to disclose fully (1) the relationship between such depository institution and any such affiliate and (2) the effect of such relationship on the depository institution. Section 2 of the BHC Act exempts companies owning certain types of financial institutions from regulation under the BHC Act because the institutions they own are not defined as “banks” in the BHC Act. Companies owning these institutions are not considered bank holding companies; are not required to comply with the BHC Act’s restrictions on activities; and, with one exception, are not subject to the Federal Reserve’s oversight. The statutory exemptions from the definition of “bank” were established by the Competitive Equality Banking Act of 1987 (CEBA), which also expanded the definition of “bank” in the BHC Act to include all FDIC-insured institutions. The CEBA exemptions include ILCs, limited-purpose credit card banks, trust banks, and S&Ls. One type of exempt institution, ILCs, began in the early 1900s as small, state-chartered loan companies that served the borrowing needs of industrial workers who were unable to obtain noncollateralized loans from commercial banks. The ILC industry experienced significant asset growth in the 2000s, and ILCs evolved from small, limited-purpose institutions to a diverse group of insured financial institutions with a variety of business models. 
S&Ls are exempt institutions, but S&L holding companies were subject to holding company supervision by OTS and are now supervised by the Federal Reserve. In addition, S&L holding companies are subject to restrictions on activities set out in HOLA. We also considered one type of institution that was exempted by the Bank Holding Company Act Amendments of 1970, municipal deposit banks. Table 2 identifies the federal regulators for certain types of exempt institutions. Financial institutions that are exempt from the BHC Act definition of bank make up a small percentage of the overall banking system—1,002 institutions (about 7 percent)—and include ILCs, limited-purpose credit card banks, municipal deposit banks, trust banks with insured deposits, and S&Ls. If S&Ls, which are different from the other types of exempt institutions in that they are regulated by the Federal Reserve at the holding company level, are excluded, the percentage drops to less than 1 percent, or 57 institutions. Determining whether the holding companies that own exempt institutions are commercial is difficult, given the lack of a standard definition and limited publicly available data on exempt institutions. The risk profiles for exempt institutions vary, reflecting differences in the institutions’ size, complexity, and level of banking and nonbanking activities. The assets of institutions exempt from the definition of bank in the BHC Act that we reviewed account for about 7 percent of the total assets in the U.S. banking system. S&Ls account for almost 7 percent of all FDIC-insured institutions, as of June 30, 2011. The 57 institutions among the other types of exempt institutions as of 2011 held less than 1 percent of the assets of FDIC-insured banks. The 57 non-S&L exempt institutions were ILCs (34), limited-purpose credit card banks (10), trust banks (3), and municipal deposit banks (10). These exempt institutions were generally small in terms of assets. 
For example, only 8 of the 57 exempt institutions had assets of more than $5 billion, and more than half of them had assets of less than $500 million. Appendix II contains additional information on these 57 exempt institutions, including their federal regulators and asset sizes. Aside from S&Ls, the largest category of exempt institutions is ILCs, which have been declining in number and size in recent years. Since 2006, the number of ILCs has declined from 58 to 34, and the assets of these institutions have dropped from $212.7 billion to $102.4 billion. Federal regulators and industry representatives attributed these declines to several factors, but most frequently to the federal moratoriums on deposit insurance for new ILCs. In particular, FDIC imposed a moratorium on deposit insurance for new ILCs in 2006, and no ILCs have been approved since then. During the 2007-2009 financial crisis, a number of the larger ILC holding companies applied and were approved to become bank holding companies, including American Express Company; Goldman Sachs Group, Inc.; Morgan Stanley; and GMAC Financial Services. Merrill Lynch & Co. also owned an ILC that became part of the Bank of America Corporation, a bank holding company, when it acquired Merrill Lynch in 2008. Subsequent to the FDIC moratoriums, the Dodd-Frank Act placed a 3-year moratorium on FDIC approval of deposit insurance applications received after November 23, 2009, for ILCs, credit card banks, and trust banks that were directly or indirectly owned or controlled by a commercial firm. In addition, the Dodd-Frank Act provides that until July 21, 2013, FDIC may not approve any change in control of an ILC, trust bank, or credit card bank that would place the institution under the control of a commercial firm. The combined assets of limited-purpose credit card banks, trust banks, and municipal deposit banks totaled $10.3 billion as of June 30, 2011. 
The assets of the 10 limited-purpose credit card banks, which issue only credit cards, totaled $8.5 billion and ranged from $3 million to $4.7 billion. These banks sell their credit receivables to the parent company, so their assets are typically small. Four limited-purpose credit card banks issue what are called private-label cards, while three issue general-purpose credit cards and two offer both types. (A credit card issuer is any person who issues a credit card or the agent of such person with respect to such card.) The 10 municipal deposit banks’ assets totaled $1.5 billion, and the three trust banks’ assets totaled about $318 million, as of June 2011. As shown in figure 1, ILCs, limited-purpose credit card banks, municipal deposit banks, and trust banks are geographically concentrated. For example, limited-purpose credit card banks are located in 10 states. ILCs are located in five states—California, Hawaii, Nevada, Minnesota, and Utah. All 10 municipal deposit banks are located in New York, and the 3 trust banks are located in Georgia, Maryland, and Massachusetts. In contrast, 945 S&Ls (including both federally and state-chartered S&Ls) were in operation as of June 30, 2011, and of these approximately 426 were owned by S&L holding companies, concentrated primarily in New England, the Northeast, and the Midwest. Determining whether holding companies that own ILCs, limited-purpose credit card banks, municipal deposit banks, and trust banks are commercial or noncommercial is challenging, for several reasons. For example, the lack of publicly available data on the holding companies’ revenue sources complicates efforts to determine the ownership type. Some holding companies that own exempt institutions are not public companies and thus are not required to submit filings that would make such information publicly available. 
In addition, regulators do not make the distinction between commercial and noncommercial ownership. FDIC officials told us that they focused on the activities and risks of the exempt institutions and their holding companies regardless of type. The Dodd-Frank Act sets forth a definition of “commercial”: companies are considered commercial if revenue from financial activities (as defined under Section 4(k) of the BHC Act) generates less than 15 percent of their annual gross revenue. Using this definition, a number of companies that are generally considered commercial would be considered noncommercial because their revenue from financial activities is 15 percent or more. For example, using this definition, the General Electric Company is classified as noncommercial because its financial services business segment accounted for more than 31 percent of its 2010 annual gross revenue. Working within these challenges and limitations, we were able to determine the status of the holding companies for 43 of the 57 ILCs, limited-purpose credit card banks, municipal deposit banks, and trust banks. Using the definition of commercial from the Dodd-Frank Act and publicly available financial data, we determined that 11 exempt institutions were owned by commercial companies and 32 by noncommercial companies (see table 3). The status of the holding companies of the remaining 14 institutions could not be determined because of the lack of sufficiently detailed, publicly available financial data about the companies or information from OCC or FDIC. According to information from OCC, one trust bank owned an affiliate as of May 7, 2011. However, under the Dodd-Frank Act definition of commercial, the affiliate is noncommercial. The risk profiles for exempt institutions vary, reflecting differences in the institutions’ size, complexity, and level of banking and nonbanking activities. 
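The 15 percent revenue test described above is a simple threshold calculation. The sketch below illustrates it under stated assumptions: the function name and inputs are our own, and the Dodd-Frank Act's full definition involves consolidated annual gross revenues and the Section 4(k) activity list, which are not modeled here.

```python
# Illustrative sketch (not statutory text): a company is "commercial" if
# revenue from financial activities is less than 15 percent of its annual
# gross revenue; otherwise it is "noncommercial" under this definition.

def classify_company(financial_revenue: float, gross_revenue: float) -> str:
    """Return 'commercial' or 'noncommercial' under the 15 percent test."""
    if gross_revenue <= 0:
        raise ValueError("gross revenue must be positive")
    share = financial_revenue / gross_revenue
    return "commercial" if share < 0.15 else "noncommercial"

# General Electric example from the report: financial services accounted for
# more than 31 percent of 2010 annual gross revenue, so GE is classified as
# noncommercial under this test (revenue figures here are stand-ins).
print(classify_company(financial_revenue=31.0, gross_revenue=100.0))
```

This also makes the report's counterintuitive point concrete: a firm usually thought of as commercial falls on the "noncommercial" side whenever its financial-activities share reaches 15 percent.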
While few of the exempt institutions are large depository institutions that pose significant systemic risk to the financial system, many engage in several types of banking and nonbanking activities that carry a variety of risks. These risks exist at the depository institution and holding company levels. ILCs. The Federal Reserve and Treasury view these institutions as full-service commercial banks and therefore view the risks they pose as similar to those of commercial banks, including credit risk. The FDIC concurs in this view and noted that many exempt institutions primarily accept brokered deposits, considered to be riskier than demand deposits because of concerns about liquidity risks. ILCs can provide a wide range of banking services and are able to make loans (including credit card loans) and investments, like commercial banks. Limited-purpose credit card banks. These exempt institutions are generally restricted to credit card lending activities and are not permitted to conduct many banking activities, such as mortgage or commercial lending. They are not permitted to accept demand deposits. The dominant risks for these banks are compliance, liquidity, reputational, and to some extent credit risk. Municipal deposit banks and trust banks. These exempt institutions’ banking activities are limited. The sole purpose of municipal deposit banks is to accept municipal deposits, and these banks do not make commercial or consumer loans. Similarly, the three trust banks that are exempt from the BHC Act function only in a fiduciary capacity and do not pose the same types of financial risks as commercial banks. Their risk profile is based on fiduciary responsibility and litigation risk. S&Ls. These exempt institutions offer a range of banking services that are similar to those provided by commercial banks, including offering a variety of banking products, accepting demand deposits, and making commercial, real estate, and residential mortgage loans. 
Because S&Ls are similar to commercial banks, they are exposed to credit, liquidity, operational, reputational, and compliance risks. However, as discussed, unlike the owners of other exempt institutions, S&L holding companies are subject to supervision and regulation at the holding company level by the Federal Reserve. In addition to the risks of their banking activities, commercial ownership of exempt institutions could pose additional risks. Federal Reserve, FDIC, and Treasury officials each acknowledged the risk that a commercial holding company may seek to operate an exempt financial institution for the holding company’s own benefit. For example, ILCs and limited-purpose credit card banks could be directed to engage in transactions that benefited the holding company’s affiliates but were detrimental to the financial institutions’ safety and soundness. To address adverse transactions between an insured institution and its affiliates, Congress restricted the ability of insured depository institutions, including exempt institutions, to enter into transactions with affiliates. Insured institutions are subject to both qualitative and quantitative limits on transactions with affiliates. For example, a bank may not engage in a transaction with an affiliate if the aggregate amount of the bank’s covered transactions with all affiliates would exceed 20 percent of the bank’s capital stock and surplus. In addition, an institution generally cannot purchase low-quality assets from an affiliate. Congress established collateral requirements for credit transactions provided to an affiliate, generally requiring that a credit transaction be secured by collateral having a market value of at least 100 percent of the transaction. All covered transactions between depository institutions and their affiliates must be on terms and conditions that are consistent with safe and sound banking practices. 
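The two quantitative limits described above (the 20 percent aggregate cap on covered transactions and the 100 percent collateral floor on credit transactions) can be sketched as a simple check. This is an illustration under stated assumptions, not a compliance tool: the function and its inputs are our own, and the underlying rules contain further conditions (for example, higher collateral ratios for some collateral types) that are not modeled.

```python
# Hedged sketch of the affiliate-transaction limits described in the text.
# Dollar amounts are in millions and purely illustrative.

def affiliate_limit_violations(capital_and_surplus, covered_total,
                               credit_transactions):
    """Return a list of limit violations.

    covered_total: aggregate covered transactions with all affiliates.
    credit_transactions: (amount, collateral_market_value) pairs.
    """
    problems = []
    # Aggregate covered transactions may not exceed 20 percent of the
    # bank's capital stock and surplus.
    if covered_total > 0.20 * capital_and_surplus:
        problems.append("aggregate covered transactions exceed 20 percent cap")
    # Each credit transaction generally must be secured by collateral worth
    # at least 100 percent of the transaction amount.
    for amount, collateral in credit_transactions:
        if collateral < amount:
            problems.append("credit transaction undercollateralized")
    return problems

# A hypothetical bank with $100 million of capital stock and surplus,
# $25 million of covered transactions, and one $5 million affiliate loan
# backed by only $4 million of collateral fails both checks.
print(affiliate_limit_violations(100.0, 25.0, [(5.0, 4.0)]))
```

Keeping both tests in one function mirrors how the report presents them: a quantitative aggregate cap plus a per-transaction collateral requirement.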
Additionally, covered transactions between institutions and their affiliates must occur on market terms, which must be at least as favorable to the institution as those prevailing at the time for comparable transactions with unaffiliated companies. While the regulators view the commercial ownership of exempt institutions as posing potential risks to the financial institution, representatives from exempt institutions countered that such ownership could be a source of strength. In particular, representatives of the 14 ILCs and 3 limited-purpose credit card banks we interviewed said that their holding companies currently could serve as a source of strength to their depository institutions. To assess whether these holding companies could be a source of strength to the financial institution, we analyzed the capitalization of holding companies for ILCs and credit card banks. On average, the holding companies of ILCs and credit card banks we analyzed had higher ratios of equity to total assets over the 5-year period than bank holding companies (see fig. 2). The higher ratio indicates that these holding companies had a stronger cushion against losses that might occur. The average equity-to-total-assets ratios for limited-purpose credit card banks remained above 20 percent over the period. In comparison, the average equity-to-total-assets ratio of bank holding companies with total assets of more than $500 million that were required to file financial data with the Federal Reserve remained below 10 percent during the same period. Federal Reserve officials acknowledged that commercial holding companies may be able to act as a source of strength for exempt institutions. However, they expressed three concerns. 
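The equity-to-total-assets comparison above reduces to a simple ratio: equity capital divided by total assets, where a higher ratio means a larger cushion against losses. The figures in the sketch below are made-up illustrations chosen to match the report's rough averages (above 20 percent versus below 10 percent), not data from figure 2.

```python
# Illustrative calculation of the capitalization measure used in the text.

def equity_to_assets(equity: float, total_assets: float) -> float:
    """Equity-to-total-assets ratio: the share of assets funded by equity."""
    return equity / total_assets

# Hypothetical balance sheets (amounts in billions):
credit_card_bank_hc = equity_to_assets(22.0, 100.0)  # 0.22, above 20 percent
bank_hc = equity_to_assets(9.0, 100.0)               # 0.09, below 10 percent

# The higher ratio means a larger loss-absorbing cushion per dollar of assets.
print(credit_card_bank_hc > bank_hc)  # True
```

Note that this is a leverage measure, not a risk-weighted capital ratio; the report's point is only that the exempt institutions' holding companies carried proportionally more equity than comparable bank holding companies.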
First, Federal Reserve officials noted that no federal regulator was assigned to look at the health of the entire holding company for an exempt institution, other than for S&Ls, creating a potential regulatory “blind spot.” The officials explained that a regulator should have the authority to look at the entire organization and not only at what affects the depository institutions. Second, holding companies of ILCs are not held to the same risk management and capital standards as bank holding companies, according to the officials. For example, through consolidated supervision, the Federal Reserve assesses a bank holding company’s risk management functions and their impact on the depository institution. Third, regulators cannot take enforcement actions to compel nonbank holding companies to serve as a source of strength for the exempt institution. Regulators could, however, ask holding companies to inject capital into exempt depository institutions and to enter into agreements with them requiring such capital injections when necessary. Under the Dodd-Frank Act, as described earlier, if an insured depository institution is not the subsidiary of a bank holding company or an S&L holding company, the appropriate federal regulator for the insured depository institution will require any company that directly or indirectly controls the insured depository institution to serve as a source of financial strength for the insured depository institution. Although FDIC and OCC can take enforcement action against holding companies that engaged in unsafe and unsound practices affecting the exempt institution, they do not have the same authority as the Federal Reserve to set and enforce minimum capital levels on holding companies. Federal regulation of exempt institutions differs across the banking regulators and is evolving. However, views on the adequacy of this regulation varied: FDIC, OCC, and the regulated institutions viewed it as adequate, while the Federal Reserve and Treasury viewed it as lacking. 
FDIC and OCC, which oversee ILCs, limited-purpose credit card banks, and trust banks, are focused primarily on the safety and soundness of the exempt institutions. To carry out its supervisory responsibilities, FDIC generally conducts annual full-scope examinations of ILCs and state-chartered limited-purpose credit card banks jointly with the state regulators and assigns each a CAMELS rating. FDIC also has statutory authority under 12 U.S.C. § 1820(b)(4) to examine any affiliate of a state nonmember bank (including an ILC) as necessary to determine the relationship of that affiliate to the bank and the effect of that relationship on the bank. OCC examines federally chartered limited-purpose credit card banks every 12 to 18 months. We reviewed a total of 18 examinations of exempt institutions with assets of $1 billion or more that FDIC and OCC conducted in 2010 and 2011. We chose examinations of the largest banks (by asset size) because these institutions represented a greater financial risk. Only one limited-purpose credit card bank that OCC supervised had assets of $1 billion or more, which was one of our criteria for choosing examinations to review. Therefore, we judgmentally selected six other examinations, choosing the most recent ones. For example, to ensure that affiliate agreements with third parties did not cause the bank’s assets to be placed at risk, one ILC’s management sought reimbursement from the affiliate. Although OCC officials told us that affiliate transactions were reviewed for limited-purpose credit card banks and were considered an important part of the onsite examination, our analysis of seven OCC examination reports showed that affiliate transactions were generally not discussed in detail. According to an OCC lead examiner for credit card banks, aspects of affiliate transactions were included as part of the review of audit, earnings, and management for each of the limited-purpose credit card banks. 
Because many of the limited-purpose credit card banks rely on the holding company to provide funding for the receivables on a daily basis, the examiners review the transactions to ensure that they are in compliance with the law. The OCC examiner told us that if the examiners had not found any problems with affiliate transactions, the transactions would not be discussed in the reports. However, one examination report noted that a limited-purpose credit card bank had poor documentation relating to its affiliate transactions and had paid an above-market rate to the holding company on a bank deposit. The oversight of S&Ls and their holding companies is evolving, with the significant changes likely occurring at the holding company level. As of July 21, 2011, the Federal Reserve assumed responsibility for supervising S&L holding companies in accordance with the Dodd-Frank Act. The Federal Reserve plans to apply certain elements of its consolidated supervisory program for bank holding companies to S&L holding companies. The consolidated supervision program, which applies primarily to large and regional bank holding companies, is aimed at assessing and understanding the bank holding company on a consolidated basis. In April 2011, the Federal Reserve issued a notice of intent to provide information to the S&L holding companies on how it plans to supervise them and to solicit feedback. The notice covered consolidated supervision, the holding company rating system, capital adequacy, and small noncomplex holding companies. In particular, the Federal Reserve stated that it intended to apply the same type of consolidated supervision to the S&L holding companies that it applied to bank holding companies and that this supervision could entail more rigorous reviews of internal control functions and consolidated liquidity compared to their previous consolidated supervision. 
The notice stated that the supervision may also include discovery reviews of specific activities as the Federal Reserve attempts to expand its understanding of certain types of activities. Federal Reserve officials said that the agency would issue a notice for rulemaking and request for comments once a supervisory rating system had been developed. Federal Reserve officials also told us that the agency had organized S&L holding companies into groups based on their size and nonbanking activities for supervision purposes. Large, complex holding companies—those with $50 billion or more in assets—will be assigned permanent onsite examination teams that will provide ongoing supervision. S&L holding companies with assets of between $10 billion and $50 billion will be assigned off-site examiners for monitoring that may not be continuous. For S&L holding companies with assets of less than $10 billion, the Federal Reserve will depend largely on the primary federal regulator—either OCC or FDIC—for the exempt S&Ls. S&L holding companies with less than $10 billion in assets generally consist only of the S&L and the holding company and thus require less supervision at the holding company level, according to an OCC official. Relying on the work of the primary federal regulator is similar to the Federal Reserve's approach to supervising small "shell" bank holding companies. The primary federal bank regulator, either FDIC or OCC, is responsible for examining the bank, and the Federal Reserve reviews the holding company information, including financial data such as the capital and liquidity levels and the quality of the risk management at the holding company level. While the Federal Reserve plans to use its consolidated supervisory program for S&L holding companies, it still must decide how it plans to supervise grandfathered unitary S&L holding companies that engage in commercial activities. 
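The asset-based grouping described above can be expressed as a simple classification rule. The sketch below is illustrative only: the function name and tier labels are ours, and only the dollar thresholds come from the text.

```python
def sl_holding_company_tier(assets_in_billions):
    """Classify an S&L holding company into a supervision tier by
    consolidated asset size, following the grouping described in the
    text ($50B+, $10B-$50B, under $10B). Labels are illustrative."""
    if assets_in_billions >= 50:
        return "permanent onsite examination team"
    if assets_in_billions >= 10:
        return "off-site examiner monitoring"
    return "rely primarily on OCC/FDIC as primary federal regulator"

# Illustrative calls with made-up asset sizes.
print(sl_holding_company_tier(75))  # -> permanent onsite examination team
print(sl_holding_company_tier(25))  # -> off-site examiner monitoring
print(sl_holding_company_tier(3))   # -> rely primarily on OCC/FDIC as primary federal regulator
```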
Federal Reserve officials acknowledged that the regulation and the supervision of grandfathered unitary S&L holding companies that engage in commercial activities presented unique supervisory challenges. They said they would look at these holding companies in a broader framework than OTS had used, because that approach had covered only the impact of the holding company on the S&L. These officials said the new framework being developed would allow them to supervise the holding company's financial activities but not its commercial activities. As noted earlier, the Dodd-Frank Act gave the Federal Reserve the authority to decide whether the grandfathered unitary S&L holding companies should establish intermediate holding companies for their financial activities. According to Federal Reserve officials, as of September 30, 2011, this decision had not been made for any of the grandfathered unitary S&L holding companies. Representatives from three grandfathered unitary S&L holding companies told us that in theory they supported the establishment of intermediate holding companies for their financial activities. But they added that their support would be contingent on the specifics of the intermediate holding company structure requirement. A holding company can be organized in various ways. All holding companies have a parent company, but the structure of the overall company may consist of a number of intermediate holding companies, which in turn may hold other subsidiaries within the company. For example, GE Money Bank is an S&L that is held directly by General Electric Consumer Finance, Inc., an intermediate holding company, and the parent holding company is General Electric Company. In contrast, the changes to the supervision of the exempt S&Ls themselves are likely to be less pronounced. 
OCC officials told us that they planned to supervise S&Ls in much the same way they supervised national banks and that their supervision would be the same for S&Ls owned by commercial and noncommercial holding companies. OCC will focus on the S&L—not its holding company—and use an approach that is similar to the bank supervision approach used by OCC bank examination staff. OCC has established mixed supervisory teams made up of both national bank and S&L examiners, with the goal of fostering learning and knowledge sharing on S&Ls throughout the organization. In addition, OCC officials told us that they planned to work with the Federal Reserve to coordinate supervision of S&Ls and their respective holding companies. OCC officials told us that, in particular, there would be greater coordination on midsize and large S&Ls, because some overlap may exist in how these institutions are regulated. Representatives from the exempt financial institutions and an academic told us that the current regulatory framework was sufficiently robust. They noted that through their examinations, federal and state regulators were able to review a wide variety of issues and minimize certain risks, such as conflicts of interest between holding companies and their exempt institution subsidiaries. Industry representatives also suggested that the low number of failures of exempt institutions during the last several years spoke to the robust oversight and strength of the holding company structures. According to our analysis of financial data, no limited-purpose credit card banks and two ILCs failed between 2007 and 2010, compared with hundreds of bank failures. OCC officials told us that they had sufficient authority to examine the affiliates of national banks and could adequately examine the activities of the affiliates that may affect the bank. 
OCC regulatory and supervisory practices are the same regardless of whether the institution is owned by a bank holding company, according to OCC officials. FDIC officials believe that they can adequately supervise exempt institutions but acknowledged the safety and soundness benefits of consolidated supervision. FDIC officials told us that they tried to ensure that institutions complied with all applicable laws and regulations and had sufficient capital. If the parent company runs into trouble, FDIC imposes certain controls through cease-and-desist orders or other enforcement measures in order to insulate the insured depository institution from the failings of its parent company. In 2005, we reported that consolidated supervision was a recognized method of supervising an insured institution, its holding company, and affiliates. We noted that while FDIC had developed an alternative approach that it claimed had mitigated losses to the bank insurance fund, it did not have some of the explicit authorities that other consolidated supervisors possess, and its oversight of nonbank holding companies may be disadvantaged by its lack of explicit authority to supervise these entities, including companies that own large and complex ILCs. In 2007, FDIC officials noted in testimony that the number, size, and types of commercial applicants had changed significantly, causing FDIC to carefully examine this new environment. FDIC officials further stated that these changes in ownership structures raised potential risks that deserved further study and represented important public policy issues that are most appropriately addressed by Congress. Federal Reserve and Treasury officials contend that the exemptions represent gaps in the current regulatory structure that pose risks to the financial system. Federal Reserve and Treasury officials said that while exempt institutions have access to federal deposit insurance, most are not subject to consolidated supervision. 
As discussed earlier, these officials believe that the lack of consolidated supervision of institutions that are federally insured represents a supervisory "blind spot" that should be removed. In particular, no federal regulator of the exempt institutions, excluding S&Ls that are part of holding companies, has the authority to broadly review the holding company and the other nonbank subsidiaries within the holding company structure. As a result, some of the potential activities within the holding company that may affect the exempt institution may be missed. Treasury's 2009 regulatory reform proposal attempted to address these concerns by recommending that the exemptions to the BHC Act be removed and that companies owning ILCs, credit card banks, and trust banks become bank holding companies subject to Federal Reserve consolidated supervision. The Dodd-Frank Act, enacted in 2010, included a 3-year moratorium on approving FDIC insurance for ILCs, credit card banks, and trust banks that are directly or indirectly owned or controlled by a commercial firm. The Dodd-Frank Act also established the Financial Stability Oversight Council (FSOC), which is charged with determining whether institutions are systemically important, among other responsibilities. If FSOC were to designate an exempt institution or its holding company as a systemically important nonbank financial firm, it would be regulated and supervised by the Federal Reserve. A Federal Reserve official stated that exempt institutions could be identified as systemically important nonbank financial firms. However, the official added that this designation would not address the unbalanced competition posed by ILCs or the other exempt institutions that were not designated as systemically significant. 
These ILC holding companies would still be able to lend and issue credit through their affiliates without receiving the same supervision and regulation as bank holding companies do, according to the Federal Reserve official. According to representatives from limited-purpose credit card banks and ILCs, commercial holding companies would most likely divest themselves of their exempt institutions if the BHC Act exemptions were removed. The BHC Act restricts bank holding companies' involvement in commercial activities, among other things. Almost all representatives from exempt institutions that are owned by commercial holding companies told us that divestment was the likely outcome. For example, representatives of all five limited-purpose credit card banks and five ILCs owned by commercial holding companies that we spoke with told us that the parent companies would most likely divest themselves of, sell, or liquidate the exempt institutions. Several representatives from exempt institutions owned by noncommercial holding companies that we spoke with also told us that divestment was likely, although they identified a wider range of potential outcomes than their commercially owned counterparts. For example, three representatives from noncommercial ILCs that we interviewed told us that the holding company could be converted to a bank holding company, the ILC charter could be restructured, or the current business model could be altered to comply with BHC Act requirements. Representatives from one of the noncommercial companies we spoke with stated that the holding company's ability to compete against larger, more diversified commercial banks would be reduced. Representatives from grandfathered S&Ls owned by commercial companies similarly told us their companies would likely divest themselves of the S&L if the exemptions were removed. 
Although the Federal Reserve now has the authority to require the grandfathered unitary S&L holding companies to establish intermediate holding companies, current law does not address this issue for the other institutions that are exempt from the BHC Act. We asked representatives from ILCs and credit card banks owned by both commercial and noncommercial holding companies about establishing an intermediate holding company as a potential strategy if the exemptions were removed. Approximately half of the representatives from ILCs and limited-purpose credit card banks whom we interviewed stated that they were either uncertain about or opposed to the idea of an intermediate holding company. Some of the representatives who held this opinion argued that an intermediate holding company structure would not improve the current regulatory environment or foster greater safety and soundness within the overall holding company. However, representatives from one limited-purpose credit card bank stated that an intermediate holding company could potentially be a compromise. But they added that the utility of such an option would depend on how the policy was implemented and which financial activities were required to be conducted within the intermediate holding company. Representatives of exempt institutions also told us that divesting the exempt institutions could have additional implications for the holding companies, their customers, and their employees. Changes in business models. Representatives from several ILCs and limited-purpose credit card banks we interviewed told us that their exempt institution was an integral part of the parent holding company's business model. Specifically, they stated that the exempt institutions were used to help extend credit or streamline customer finance operations, lower lending or internal costs, or increase customer loyalty. 
Furthermore, some representatives said that divesting their exempt institution would likely require changes in their business models or could reduce revenues for the holding company. For example, three ILCs that we spoke with indicated that divestment would result in a decrease in the parent holding companies' sales or revenue. Similarly, four of the five limited-purpose credit card banks we spoke with said that in order to continue offering credit without the BHC Act exemptions, they would likely have to use a third-party credit provider, such as one of the large banks that issue credit cards, and would lose interest and late fee income. Changes in customer relationships. Representatives from six ILCs and two credit card banks indicated that losing the financial institution could result in a significant loss of customers, damage customer relations for the parent company, or both. Officials from one ILC stated that if the holding company could no longer rely on the BHC Act exemption and divested itself of its ILC, its current customers would lose access to the revolving credit that the company issued through the ILC. Furthermore, they said that the ability to offer credit cards increased customer loyalty and provided an additional credit option for customers. Officials from a limited-purpose credit card bank reported to us that owning a financial institution allowed the holding company to retain control of the customer experience over the entire life cycle of the transaction, from marketing to customer service and collection. Increased costs of operations. Representatives from five ILCs and one credit card bank told us that losing the exemptions could increase costs. That is, if the parent companies divested themselves of their financial institutions, the parent companies' operating or internal costs could rise because of increased administrative costs—for example, from having to use third-party credit providers. 
Another group of representatives told us that the ILC charter allowed the institution to market its products nationally from the state of Utah, reducing operational costs. Job losses. Representatives from two exempt institutions told us that if the BHC Act exemptions were removed and the parent company divested itself of the exempt institution, job losses would be likely at both the financial institution and holding company levels. Additionally, representatives from municipal deposit banks told us that their holding companies would most likely decide to divest themselves of their municipal deposit banks if the exemptions to the BHC Act were removed. Representatives of the one trust bank we interviewed told us that its parent company would likely divest itself of the insured deposits—primarily certificates of deposit—if the exemption for trust banks were removed. According to the officials of the trust bank, the insured deposits and the depository institution were a small part of their overall business, and they would be able to carry out their trust functions without the insured deposits. They said that they primarily maintained the insured depository institution because it had been a part of the organization for historical reasons. Removing the exemptions to the BHC Act would likely have a limited impact on the overall credit market given the small portion of the credit market that exempt institutions represent. As shown in table 4, ILCs and limited-purpose credit card banks each accounted for less than 1 percent of the loans on the balance sheets of FDIC-insured institutions in 2010, while municipal deposit banks and trust banks each accounted for no loans. In addition, S&Ls that were subsidiaries of grandfathered unitary S&L holding companies (grandfathered S&Ls) accounted for about 2.9 percent of loans, and other S&Ls accounted for about 4.6 percent of loans. 
Given the small market share of each type of exempt institution, any actions they might take if the exemptions were removed—including exiting the market altogether in the case of some grandfathered S&Ls, ILCs, and limited-purpose credit card banks—would likely have little impact on the overall credit market, at least at the national level. However, exempt institutions could have larger market shares in some regions and smaller market shares in others. To the extent that the credit market is segmented by region, the effects of removing the exemptions would likely be larger in regions where exempt institutions are a larger share of the market and smaller in regions where exempt institutions are a smaller share of the market. While removing the exemptions would likely have a limited impact on the overall credit market, doing so could have a larger impact on segments of the market in which exempt institutions have larger market shares. These shares remain relatively small, however. For example, in 2010, ILCs accounted for about 1 percent of multifamily, commercial, and farm real estate loans and about 2 percent of non-credit-card consumer loans on the balance sheets of all FDIC-insured institutions, but they accounted for less than 1 percent of each of the five other types of loans we analyzed (construction and land development loans; residential mortgage loans; commercial, industrial, and agricultural production loans; credit card loans; and leases). Limited-purpose credit card banks, on the other hand, accounted for about 1 percent of credit card loans, but they accounted for less than 1 percent of construction and land development loans and almost none of any other type of loan. Grandfathered S&Ls accounted for no leases; less than 1 percent of commercial, industrial, and agricultural production loans; and for 2 to 5 percent of each other type of loan. 
Other S&Ls accounted for more than 9 percent of residential mortgages, less than 1 percent of credit card loans and leases, and for 1 to 5 percent of each other type of loan. Although the actions exempt institutions might take if the exemptions were removed may differ by the type of institution, the magnitude of the effects of these actions on credit markets—overall or in specific segments—is likely related to each type of exempt institution's share of the market. The overall credit market would likely remain unconcentrated even if exempt institutions exited the market and transferred their loans to other institutions. To assess the impact of removing the exemptions on concentration among FDIC-insured institutions, we calculated the Herfindahl-Hirschman Index (HHI), a key statistical indicator used to assess market concentration and the potential for firms to exercise market power. As shown in table 5, the HHI for the overall loan market for 2010 is well below 1,500, the threshold for moderate concentration, as are the HHIs for six of the seven specific loan markets we analyzed (credit card loans were the exception). As a result, firms in the overall loan market and in most market segments likely have little ability to exercise market power by raising prices, reducing the quantity of credit available, reducing innovation, or otherwise harming customers. However, the HHI for the market for credit card loans is close to the threshold for moderate concentration, suggesting that one or more firms making credit card loans may have a moderate amount of market power. Furthermore, our HHIs are for the United States as a whole, and HHIs for markets in specific states or metropolitan areas within the U.S. are likely to be different. If the exemptions were removed, some exempt institutions might exit the credit market and stop making loans. 
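The HHI referenced above is simply the sum of squared market shares, expressed in percentage points. A minimal sketch, using illustrative, made-up loan balances rather than the report's data:

```python
def hhi(balances):
    """Herfindahl-Hirschman Index: sum of squared market shares,
    with shares expressed in percentage points (0-100).
    An HHI below 1,500 indicates an unconcentrated market."""
    total = sum(balances)
    return sum((100.0 * b / total) ** 2 for b in balances)

# Illustrative example: four lenders holding $60B, $25B, $10B, $5B in loans.
print(round(hhi([60, 25, 10, 5]), 1))  # -> 4350.0
```

A single-firm market yields the maximum HHI of 10,000 (100 squared), which is why squared shares amplify the weight of the largest lenders.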
As previously discussed, representatives from some ILCs and limited-purpose credit card banks owned by commercial parent companies indicated that their parent companies would likely divest themselves of their exempt institutions if the exemptions were removed. To estimate the effects of such divestments, we estimated the change in the HHI for each loan market under alternative scenarios in which all grandfathered S&Ls, all ILCs, or all limited-purpose credit card banks ceased making loans and transferred the loans on their balance sheets to the firms remaining in the market. In the first scenario, we assumed that the exiting institutions' loans were distributed proportionally among all remaining firms. In the second scenario, we assumed that the exiting institutions' loans were acquired by the largest remaining firm. The estimated changes in the HHIs indicated that the overall loan market was unlikely to become concentrated in any of these scenarios. Even in the event that all grandfathered S&Ls, all ILCs, or all limited-purpose credit card banks exited the credit market, the remaining firms would still have little market power and thus little ability to increase loan prices or reduce the quantity of loans available. In every market we analyzed except credit card loans, estimated changes in the HHIs indicated that these markets were also unlikely to become concentrated in similar scenarios. However, our definition of the market excludes other providers of credit, including uninsured affiliates of FDIC-insured institutions, finance companies, credit unions, and other institutions that are not FDIC-insured. Our estimates may be either overstated or understated, depending on the number and sizes of the credit providers we excluded. Although available data suggest a degree of concentration in the credit card loan segment, the likely impact of removing the exemptions on this market varies across institution types. 
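The two exit scenarios can be sketched as follows. The `hhi` helper is redefined so the snippet stands alone, and the loan balances are again illustrative assumptions, not the report's data:

```python
def hhi(balances):
    """Sum of squared market shares in percentage points."""
    total = sum(balances)
    return sum((100.0 * b / total) ** 2 for b in balances)

def hhi_after_exit(balances, exiting_idx, mode="proportional"):
    """Recompute the HHI after the institutions at exiting_idx stop lending.

    mode="proportional": exiting loans are distributed among remaining
    firms in proportion to their existing balances (scenario 1).
    mode="largest": the largest remaining firm acquires all exiting
    loans (scenario 2).
    """
    exiting = set(exiting_idx)
    remaining = [b for i, b in enumerate(balances) if i not in exiting]
    exited = sum(b for i, b in enumerate(balances) if i in exiting)
    if mode == "proportional":
        rem_total = sum(remaining)
        new = [b + exited * b / rem_total for b in remaining]
    elif mode == "largest":
        new = sorted(remaining, reverse=True)
        new[0] += exited
    else:
        raise ValueError(mode)
    return hhi(new)

# Illustrative market of three lenders; the firm with balance 20 exits.
base = hhi([50, 30, 20])                                  # 3800.0
prop = hhi_after_exit([50, 30, 20], [2], "proportional")  # 5312.5
big = hhi_after_exit([50, 30, 20], [2], "largest")        # 5800.0
```

Note that under the proportional scenario the remaining firms' relative shares are unchanged, so the HHI rise comes purely from renormalizing over fewer firms, while the largest-acquirer scenario raises it further; the report flags changes of more than 100 points in markets near the 1,500 threshold.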
We found that the HHI for the market for credit card loans in 2010 was close to the threshold for moderate concentration. However, under current conditions, estimated changes in the HHI were small—less than 100—in scenarios in which all ILCs or all limited-purpose credit card banks ceased making loans and transferred their portfolios to other FDIC-insured institutions. Removing the exemptions for these institutions would likely not lead to significant increases in market power in the credit card loan market. In contrast, the HHI for the credit card loan market increased by more than 100 in scenarios in which grandfathered S&Ls ceased making credit card loans and transferred their portfolios to other FDIC-insured institutions. In these scenarios, the increase in concentration in the credit card loan market could be large enough to significantly increase market power for some of the remaining firms and might lead to price increases or reductions in availability of credit card loans. Once again, this definition of the market excludes other providers of credit, including uninsured affiliates of FDIC-insured institutions, finance companies, credit unions, and other institutions that are not FDIC-insured. Our estimates may be either overstated or understated, depending on the number and sizes of the credit providers we excluded. Some representatives of exempt institutions also expressed concern that removing the exemptions could increase concentration in the market for credit card loans and reduce the availability of credit in certain niche markets. Representatives from five limited-purpose credit card banks and several ILCs and their parent companies reported that if the exemptions were removed, the parent company would most likely divest itself of the credit card bank or ILC rather than convert to a bank holding company. 
Some stated that, as a result of divestment, their credit portfolios would most likely be acquired by large credit card issuers or banks; they argued that divestment of BHC Act-exempt institutions could potentially increase credit market concentration or restrict access to credit for some customers. Two exempt institutions said that credit to borrowers with limited access to general-purpose credit would be affected. Representatives from several ILCs and two credit card banks also told us that they made a significant proportion of loans in niche markets, including student loans, small business loans, and vehicle and equipment loans and leases to businesses and consumers involved in activities such as specialized retail sales, insurance, transportation services, and taxi cab operations. Representatives from three exempt institutions and their parent companies also indicated that they offered specific credit products that commercial banks did not offer and served customers that commercial banks typically did not serve. One large credit card issuer told us that it had developed cobranded credit card arrangements for certain businesses, such as a small custom machinery tool manufacturer, and designed programs to serve a particular demographic for a particular retailer. This credit card issuer told us that it had invested substantial resources in developing a user-friendly, secure, and reliable nationwide structure customized to a particular group in order to win cobranding relationships with retailers in niche markets. However, one academic we spoke with said that traditional banks and other lenders would likely not expand into niche consumer credit markets, because these institutions lacked the market expertise of such credit card banks and ILCs. The lack of data on activity in niche markets prevented us from measuring concentration and estimating potential changes to it in scenarios in which exempt institutions ceased to make loans. 
Representatives from some exempt institutions also expressed concerns about the availability of credit to certain niche markets if the exemptions were removed. Federal Reserve officials told us that they believed that credit would continue to be available to creditworthy customers, even if the exemptions were removed and some institutions no longer provided credit. When we discussed the issue of credit availability in niche markets with the Federal Reserve, an official explained that the agency generally used FDIC's Call Report data to analyze credit markets and that the reports did not include data on niche credit markets. Although Federal Reserve officials acknowledged that removing the exemptions for credit card banks and ILCs could affect the price and quantity of credit available in some niche markets in which those institutions operated, they expected that other financial institutions would step in and make credit available to qualified borrowers at prices determined by the market. The officials stated that they had not seen any data supporting the idea that exempt institutions offered better terms than commercial banks. Moreover, they stated that companies that currently owned exempt institutions could continue to provide credit to their customers through institutions without insured deposits, such as finance companies, which are not permitted to have insured deposits. One example of this type of nonbank finance company is the Ford Motor Credit Company, a wholly owned subsidiary of the Ford Motor Company that finances Ford automobiles and supports Ford dealers but does not accept FDIC-insured deposits. Representatives from three exempt institutions stated that if the exemptions were removed, they would see no additional improvement in safety and soundness. Other exempt institution representatives explained that they did not consider consolidated supervision a stronger model than the FDIC and state regulator model for exempt institutions. 
Some representatives from exempt institutions stated that, in addition to not improving safety and soundness, removing the exemptions would likely result in further credit market concentration. For example, representatives from a limited-purpose credit card bank noted that their share of the market would likely be absorbed by large credit card issuers, because their holding company would likely divest the institution if the exemptions were removed. OCC officials have not expressed concerns about the sufficiency of the current oversight of exempt institutions, and FDIC officials acknowledged the safety and soundness benefits of consolidated supervision. Federal Reserve and Treasury officials maintained that the safety and soundness of exempt institutions would be improved if the BHC Act exemptions were removed because exempt institutions—and their holding companies—would be subject to consolidated supervision. Consolidated supervision allows regulators to understand an organization's structure, activities, resources, and risks, and to address financial, managerial, operational, or other deficiencies before they pose a danger to subsidiary depository institutions. However, Federal Reserve officials acknowledged that consolidated supervision needed to be improved in light of the financial problems experienced by several bank holding companies during the 2007-2009 financial crisis but noted that they had learned many lessons from the crisis. For example, according to the Federal Reserve officials, regulated institutions, particularly large U.S. banking organizations, had complained to federal banking regulators, including the Federal Reserve, about unregulated entities taking over more of their business. Their concerns and influence contributed to a less than rigorous application of safety and soundness standards by federal regulators, which was one of the causes of the recent financial crisis. 
Representatives from a former ILC holding company that became a bank holding company agreed with the Federal Reserve and Treasury's view on the merits of subjecting exempt institutions to consolidated supervision, noting, for example, that their holding company was now required to implement more robust risk management systems than it had previously maintained. Federal Reserve officials also stated that financial system stability would improve if the exemptions from the BHC Act were removed. They noted that the risk posed by the exempt institutions should not be discounted based on the institutions' relative size and small number, as both could change in the future. For example, Federal Reserve officials told us that if the exemptions were not removed and the Dodd-Frank moratorium expired, the number and size of ILCs could grow to the much higher levels that they had reached prior to the financial crisis. Furthermore, Federal Reserve officials noted that maintaining these exemptions resulted in differing regulatory oversight, raising questions about whether the exemptions provide an unfair competitive advantage. For example, holding companies of exempt institutions (aside from S&L holding companies) are not subject to the same level of scrutiny as bank holding companies—despite enjoying the benefits of being FDIC insured. Federal Reserve officials also cited other potential competitive concerns introduced by maintaining the exemptions. For example, a large company that owns an exempt insured depository institution could direct that institution to unfairly deny credit to the parent company's competitors. Moreover, the parent company could encourage the affiliated exempt insured depository institution to offer loans to the company's customers on terms not offered to its competitors' customers. The impact of removing the exemptions and addressing risks posed by exempt institutions varies. 
For example, the Dodd-Frank Act requires the holding companies for S&Ls, which are by far the largest exempt group in number and size, to be supervised by the Federal Reserve. S&L holding companies will be subject to capital requirements and other regulatory requirements similar to those applicable to bank holding companies. In contrast, the other exempt institutions are few in number and smaller in size, but their holding companies are not subject to the Federal Reserve's supervision. In addition, the banking activities of the exempt institutions vary—for example, ILCs conduct activities similar to those of full-service commercial banks, while limited-purpose credit card banks conduct few banking activities—and these activities carry different risks. The moratorium on approving federal deposit insurance for ILCs, credit card banks, and trust banks is set to expire in 2013. Federal Reserve officials told us that they plan to continue to watch changes in the number or size of exempt institutions, as they have previously, consistent with their position that the exemptions represent gaps in the regulatory structure that may pose risks to the financial system. They also said they would bring any concerns about exempt institutions that may pose a risk to financial system stability to FSOC. Ultimately, the decision to remove the BHC Act exemptions is a policy decision that involves trade-offs among a number of competing considerations, including potentially increasing concentration in certain credit markets, decreasing consumer choice and the availability of credit in certain regions and credit markets, and addressing existing regulatory gaps and potential competitive impacts. We provided a draft of this report to the Federal Reserve, FDIC, OCC, the New York State Department of Financial Services, and Treasury for their review and comment. Treasury provided written comments that have been reprinted in appendix III.
Treasury agreed with our description of the agency's views on the exemption from consolidated Federal Reserve supervision for holding companies owning companies exempt from the BHC Act definition of bank. In addition, Treasury noted that it recommends that the appropriate federal agencies maintain continued oversight, to the extent legally permissible within their respective existing authorities, over all holding companies owning insured depository institutions. We also received technical comments from the New York State Department of Financial Services, FDIC, and OCC, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and to the relevant agencies. This report will also be available at no charge on our website at http://www.gao.gov. Should you or your staff have questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. This report examines (1) the number of certain institutions in the U.S. banking system that are exempt from the definition of bank in the Bank Holding Company Act (BHC Act) and identifies general characteristics of these institutions; (2) the federal regulatory system for the exempt financial institutions and the views of exempt entities; and (3) the potential implications of subjecting holding companies for the exempt institutions to the BHC Act relating to the types of activities in which such institutions and their holding companies may engage, the availability and allocation of credit, the stability of the financial system and the economy, and the safe and sound operations of such institutions.
To determine the extent to which certain financial institutions were exempt from the BHC Act, we requested data from the Federal Deposit Insurance Corporation (FDIC), the Board of Governors of the Federal Reserve System (Federal Reserve), the Office of the Comptroller of the Currency (OCC), and the Office of Thrift Supervision (OTS) relating to the number of exempt institutions, their geographic location, their asset size, and their parent holding companies. We also interviewed officials from the FDIC and OCC to obtain their understanding of the exemptions listed in the BHC Act. Once we established the types of institutions that were exempted from the BHC Act, we collected data from the FDIC, Federal Reserve, and OCC on these institutions from 2006 through 2010. The data included asset size, geographic location, and primary federal regulators. We also interviewed the state banking departments of California, Nevada, New York, and Utah to collect information on the industrial loan companies (ILC) and municipal deposit banks, which are exempt from the definition of bank under the BHC Act and are state-chartered institutions. We tested the reliability of the data provided to us by the federal banking regulators and determined them to be sufficiently reliable for our purposes. To do this, we interviewed the regulators on how they identified institutions that were exempt from the BHC Act and what process they used to identify the institutions, and then compared the lists from the federal banking regulators. As part of this comparison, we looked for any duplicates or inconsistencies between the regulators' lists. To determine whether ILCs, limited-purpose credit card banks, municipal deposit banks, and trust banks were owned by commercial holding companies, we reviewed information from the federal bank regulators on the holding companies of the exempt institutions and analyzed public information, if available, on the holding companies to identify the business segments of each company.
The types of publicly available information that we examined included Securities and Exchange Commission (SEC) filings and company annual reports. Using this information, we identified the annual gross revenue and the business segments that generated the revenue. Using the activities listed in Section 4(k) of the BHC Act, we compared the activities of each holding company listed in the public documents to identify the activities considered financial in nature and then determined the extent to which its 2010 annual gross revenue was produced by financial activities. In accordance with the Dodd-Frank Wall Street Reform and Consumer Protection Act, if 15 percent or more of a company's activities were financial, we classified it as noncommercial. Companies that derived less than 15 percent of their revenue from financial activities were classified as commercial. To describe the federal regulatory system for the exempt financial institutions, we reviewed 18 examinations of exempt institutions with assets of $1 billion or more that FDIC and OCC conducted in 2009 and 2010. We selected examinations for review based on the institutions' asset size, choosing larger institutions because of the potential risks they posed. The examinations we reviewed covered 11 ILCs and 7 limited-purpose credit card banks. Because only one OCC-supervised limited-purpose credit card bank met our criteria, we reviewed the most recent examinations of the OCC-supervised limited-purpose credit card banks. Our review of examinations did not include trust banks and municipal deposit banks because their asset sizes were much lower than $1 billion. We focused on the larger institutions because we determined that the regulators generally dedicated more resources to them, such as placing examiners onsite, and concluded that if certain supervisory practices were not applied to the larger institutions, they would not likely be implemented for the smaller institutions.
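The revenue-based classification described above can be sketched as follows; the function name and company figures are illustrative, not drawn from the report's data.

```python
# Classify holding companies as commercial or noncommercial based on the
# share of annual gross revenue derived from financial activities, using
# the 15 percent threshold described above. Figures are hypothetical.

def classify(financial_revenue, total_revenue):
    """Return 'noncommercial' if 15 percent or more of revenue is financial."""
    share = financial_revenue / total_revenue
    return "noncommercial" if share >= 0.15 else "commercial"

# Hypothetical holding companies: (financial revenue, total gross revenue)
companies = {
    "Holding Co A": (900, 1_000),   # 90 percent financial
    "Holding Co B": (100, 1_000),   # 10 percent financial
}

for name, (fin, total) in companies.items():
    print(name, classify(fin, total))
```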
We reviewed documentation from FDIC, the Federal Reserve, and OCC about their supervision practices, including information from both OCC and the Federal Reserve on how they plan to carry out their new responsibilities for savings and loans (S&L) and their holding companies. We interviewed officials from FDIC, the Federal Reserve, OCC, and OTS regarding the supervision of all BHC Act-exempt institutions, including S&L and holding company supervision, as well as an academic who recently completed a study on ILCs for a think tank. To assess the extent to which credit markets are likely to be affected if the exemptions are removed, we calculated market shares for each type of exempt institution in loan markets as of June 30, 2010. We defined the market as the collection of all FDIC-insured institutions for which we could obtain balance sheet data as of June 30, 2010. We obtained lists of FDIC-insured institutions from the Summary of Deposits (SOD) data available on FDIC's website. We obtained balance sheet data from each institution's Call Report or Thrift Financial Report from SNL Financial, a financial industry database. Some institutions indicated in the data that they were subsidiaries of other institutions and that their parent institution reports consolidated balance sheet data for both institutions on the parent institution's balance sheet. In these cases, we removed the subsidiary institution from the sample to avoid double-counting. We identified seven groups of institutions: (1) commercial banks and all subsidiaries of bank holding companies, (2) limited-purpose credit card banks, (3) ILCs, (4) municipal deposit banks, (5) trust banks, (6) S&Ls that are subsidiaries of grandfathered unitary savings and loan holding companies ("grandfathered S&Ls"), and (7) other S&Ls. All institutions that are subsidiaries of bank holding companies are in the first group.
The two groups of S&Ls are distinguished by the types of holding companies of which they are subsidiaries. Prior to the enactment of the Gramm-Leach-Bliley Act (GLBA) in 1999, unitary S&L holding companies could generally operate without activity restrictions. GLBA restricted companies that filed applications to acquire an S&L after May 4, 1999, to only engage in activities permissible for S&L holding companies. Existing unitary S&L holding companies were "grandfathered" and could continue to engage in any type of financial or commercial activities. Thus, some S&Ls are subsidiaries of grandfathered unitary S&L holding companies that are not subject to activity restrictions, while other S&Ls are either subsidiaries of holding companies that are subject to activity restrictions or are not subsidiaries of holding companies. We obtained lists of limited-purpose credit card banks, ILCs, municipal deposit banks, and trust banks as of September 30, 2010, or December 31, 2010, from FDIC, the Federal Reserve, and OCC. We then used institution histories obtained from FDIC's Bank Find website (Bank Find) to adjust those lists to reflect institutions' types as of June 30, 2010. We used FDIC's SOD data to identify S&Ls. To further identify grandfathered S&Ls, we obtained a list of grandfathered unitary S&L holding companies and their subsidiaries as of December 31, 2010, from OTS. Because the unitary S&L holding companies were grandfathered in 1999, the savings and loans that were their subsidiaries as of December 31, 2010, must also have been their subsidiaries as of June 30, 2010. That is, an S&L could not have become a subsidiary of a grandfathered unitary S&L holding company between June 30, 2010, and December 31, 2010. Finally, we used FDIC's SOD data to identify commercial banks and all institutions that are subsidiaries of bank holding companies.
All institutions that are subsidiaries of bank holding companies—including limited-purpose credit card banks, ILCs, municipal deposit banks, S&Ls, and trust banks—are put in the group containing commercial banks and bank holding company subsidiaries. We estimated each group’s share of the market for various types of loans, including total loans and leases; construction and land development loans; residential mortgage loans, multifamily, commercial, and agricultural real estate loans; commercial, industrial, and agricultural production loans; credit card loans; consumer loans other than credit card loans; and leases. A group’s market share is equal to the total dollar value of loans on the balance sheets of all institutions in the group as a percent of the total dollar value of loans on the balance sheets of all institutions in the market. To assess the extent to which the price of credit and the quantity of credit available are likely to be affected if the exemptions are removed, we calculated the Herfindahl-Hirschman Index (HHI) of market concentration in loan markets. The HHI is a key statistical indicator used to assess the market concentration and the potential for firms to exercise market power. The HHI reflects the number of firms in the market and each firm’s market share, and it is calculated by summing the squares of the market shares of each firm in the market. For example, a market consisting of four firms with market shares of 30 percent, 30 percent, 20 percent, and 20 percent has an HHI of 2,600 (900 + 900 + 400 + 400 = 2600). The HHI ranges from 10,000 (if there is a single firm in the market) to a number approaching zero (in the case of a perfectly competitive market). That is, higher values of the HHI indicate a more concentrated market. 
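The HHI calculation described above reduces to a one-line sum of squared market shares. A minimal sketch, reproducing the four-firm example from the text:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares)

# The four-firm example from the text: shares of 30, 30, 20, and 20 percent.
print(hhi([30, 30, 20, 20]))  # 900 + 900 + 400 + 400 = 2600

# A single firm holding the entire market yields the maximum HHI of 10,000.
print(hhi([100]))
```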
Department of Justice and Federal Trade Commission guidelines as of August 19, 2010, suggest that an HHI between 0 and 1,500 indicates that a market is not concentrated, an HHI between 1,500 and 2,500 indicates that a market is moderately concentrated, and an HHI greater than 2,500 indicates that a market is highly concentrated, although other factors also play a role in determining market concentration. To calculate HHIs, we defined a firm as the collection of all FDIC-insured institutions that are subsidiaries of the same parent company (for institutions that are subsidiaries of parent companies) or the institution itself (for institutions that are not subsidiaries of parent companies). Parent companies of FDIC-insured institutions are either bank holding companies, S&L holding companies, or other parent companies. We identified bank holding company parents and all their subsidiaries for each year using FDIC's SOD data. We obtained lists of S&L holding company parents and their OTS-regulated subsidiaries from OTS. Based on data for 2011, we assumed that each savings bank that is not a subsidiary of a bank holding company is either a standalone institution without a parent company or is the only FDIC-insured subsidiary of its parent holding company. We obtained data on other parent companies—the nonbank holding company, non-S&L holding company parent companies of some credit card banks, industrial loan companies, and trust banks—for 2010 from FDIC and OCC. A limitation of this strategy is that we may not have identified all the institutions that belong to the same other parent company. As a result, our HHIs may understate the amount of concentration in the market.
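The guideline bands above can be expressed as a simple classification; the function name is ours, and the boundary handling at exactly 1,500 and 2,500 is an assumption, since the guidelines describe the bands only approximately.

```python
def concentration_category(hhi_value):
    """Classify a market per the 2010 DOJ/FTC HHI bands described above."""
    if hhi_value < 1500:
        return "not concentrated"
    if hhi_value <= 2500:
        return "moderately concentrated"
    return "highly concentrated"

# The four-firm example from the text (HHI of 2,600) falls in the highest band.
print(concentration_category(2600))
```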
We calculated the HHI for the markets for various types of loans, including total loans and leases; construction and land development loans; residential mortgage loans; multifamily, commercial, and agricultural real estate loans; commercial, industrial, and agricultural production loans; credit card loans; consumer loans other than credit card loans; and leases. We first calculated each firm’s market share as the total dollar value of loans on the balance sheets of all institutions in the firm as a percent of the total dollar value of loans on the balance sheets of all institutions in the market. We then summed the squared market shares of every firm in the market to obtain the HHI for that market. For groups composed of grandfathered S&Ls (part of a unitary S&L holding company), ILCs, and limited-purpose credit card banks, we estimated the change in the HHI for each loan market in alternative scenarios in which each group of exempt institutions ceases to make loans and transfers the loans on its balance sheets among firms in the market. In the first scenario, we assumed that the exiting institutions’ loans are distributed proportionally among remaining firms. In the second scenario, we assumed that the exiting institutions’ loans are acquired by the largest firm remaining in the market. A limitation of including only FDIC-insured institutions in our market share and HHI calculations is that we exclude many institutions that do not have FDIC insurance but that provide credit, such as uninsured affiliates of FDIC-insured institutions, credit unions, and finance companies. Capital markets are another source of funds. Thus, our calculations may overstate exempt institutions’ share of loan markets. Furthermore, our calculations may either overstate or understate the amount of concentration in loan markets, depending on the numbers and sizes of the firms we are excluding. 
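The two redistribution scenarios can be illustrated with a small hypothetical market; the shares below are made up for illustration and are not the report's estimates.

```python
def hhi(shares):
    """Sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares)

# Hypothetical market shares (percent); the last firm stands in for an
# exiting group of exempt institutions.
remaining, exiting = [40, 30, 20], 10

# Scenario 1: the exiting group's loans are distributed proportionally
# among the remaining firms.
total_remaining = sum(remaining)
proportional = [s + exiting * s / total_remaining for s in remaining]

# Scenario 2: the exiting group's loans are acquired entirely by the
# largest remaining firm.
largest_takes_all = sorted(remaining)
largest_takes_all[-1] += exiting

base = hhi(remaining + [exiting])
print(base, round(hhi(proportional)), round(hhi(largest_takes_all)))
```

As expected, both exit scenarios raise concentration, and the largest-firm acquisition raises it more than proportional redistribution.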
Our analysis implicitly assumes loan markets are national markets, that is, that credit provided by an institution is available to any potential borrower, regardless of their respective geographic locations. We make this assumption because subnational loan data are not readily available. If loan markets are not national in scope, then our market share and market concentration estimates are unlikely to represent those that we would estimate for a specific subnational geographic region, such as a state or metropolitan area. The market share and market concentration estimates for some regions would likely be greater than our national estimates, while others would likely be lower. We assessed the reliability of all of the data used to determine the potential implications of removing the exemptions and found that the data were sufficiently reliable for our purposes. To do this, we interviewed the regulators on how they identified institutions that were exempt from the BHC Act and what process they used to identify the institutions and then compared the lists from the federal banking regulators. In addition to these quantitative analyses, we interviewed representatives from 31 exempt institutions and representatives from the American Bankers Association and the Independent Community Bankers Association to learn more about their views regarding the BHC Act exemptions and possible implications of the institutions losing their exempt status. In addition, we interviewed representatives from two ILC holding companies that recently became bank holding companies to obtain their views on bank holding company supervision from the perspective of a former ILC holding company. We selected the institutions for interview based primarily on the size of the exempt institutions and the commercial status of the holding company. We attempted to interview the largest institutions and those held by holding companies that would be considered commercial.
We conducted a content analysis of the qualitative information that we obtained from these interviews to identify themes that emerged. We also interviewed FDIC, Federal Reserve, OCC, and Department of the Treasury officials to obtain their views on the implications of removing the exemptions. In addition, we interviewed three commercial banks that are large credit card issuers to collect additional information on potential concentration in credit card issuing if the exemptions were removed. We conducted this performance audit between October 2010 and January 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Certain companies are exempt from regulation as bank holding companies under the Bank Holding Company Act (BHC Act) because their subsidiaries do not meet the definition of a "bank" under the BHC Act. These exempt institutions include savings and loans (S&L), industrial loan corporations, limited-purpose credit card banks, municipal deposit banks, and trust banks. While S&L holding companies are not regulated under the BHC Act, after the Dodd-Frank Act, their treatment will be similar to that of bank holding companies. Therefore, we exclude S&Ls from this analysis. We identified 57 exempt institutions: 34 industrial loan corporations (ILC), 10 limited-purpose credit card banks, 10 municipal deposit banks, and 3 trust banks. Excluding S&Ls, ILCs comprise the largest number of institutions that rely on the BHC Act exemption. As of September 30, 2011, there were 34 ILCs (see table 6). Limited-purpose credit card banks are also exempt under the BHC Act.
As of September 30, 2011, there were 10 limited-purpose credit card banks (see table 7). Municipal deposit banks are another type of exempt financial institution. As shown in table 8, all 10 municipal deposit banks are located in New York. Trust banks are another type of exempt financial institution. Trust banks act as fiduciaries, and as of September 30, 2011, there were three in operation (see table 9). In addition to the individual named above, Andrew Pauline, Assistant Director; Tarik Carter; Emily Chalmers; William Chatlos; Rachel DeMarcus; Nancy Eibeck; Fred Jimenez; Courtney LaFountain; Marc Molino; Tim Mooney; and Bob Rieke made major contributions to this report.
The Bank Holding Company Act of 1956 (BHC Act) establishes the legal framework under which bank holding companies—that is, companies that own or control banks—operate and restricts the types of activities that these companies may conduct. The BHC Act excludes certain companies from these restrictions because the financial institutions they own are exempt from the BHC Act definition of "bank." However, these exempt institutions are eligible for FDIC insurance, raising questions about continuing to exempt their holding companies from BHC Act requirements. The Dodd-Frank Wall Street Reform and Consumer Protection Act directs GAO to study the implications of removing the exemptions. This report examines (1) the number and general characteristics of certain institutions in the U.S. banking system that are exempt from the definition of bank in the BHC Act, (2) the federal regulatory system for exempt financial institutions, and (3) potential implications of subjecting the holding companies of exempt institutions to BHC Act requirements. GAO analyzed data and examinations from exempt institutions and regulators, and examined regulators' guidance and policies. GAO also interviewed regulators and officials from 31 exempt financial institutions. We provided a draft of this report to the relevant agencies. Treasury provided written comments, and we received technical comments from other agencies, which we incorporated as appropriate. The 1,002 exempt financial institutions make up a small percentage of the assets of the overall banking system—about 7 percent—and include industrial loan corporations (ILC), limited-purpose credit card banks, municipal deposit banks, trust banks with insured deposits, and savings and loans (S&L). Although exempt from the BHC Act, S&L holding companies are regulated by the Federal Reserve System Board of Governors (Federal Reserve) under the Home Owners' Loan Act, as amended.
Excluding S&Ls, the number of exempt institutions drops to 57, which together comprise less than 1 percent of banking system assets; in addition, a 3-year moratorium on the approval of federal deposit insurance for certain exempt institutions ends in 2013. These institutions vary by size, activities, and risks. Larger institutions such as ILCs provide banking services similar to those of commercial banks and carry many of the same risks. Other exempt institutions are smaller, provide only a few services such as credit card loans and related services, and thus have lower risk profiles. Federal regulation of the holding companies of exempt institutions and their affiliates varies. The Federal Deposit Insurance Corporation (FDIC) and Office of the Comptroller of the Currency (OCC) oversee ILCs, credit card banks, and trust banks, and focus their supervision on the institutions, not the parent holding companies. They examine the institutions for safety and soundness and for potential conflicts of interest in transactions with affiliates and the holding company. In contrast, the Federal Reserve oversees bank and, more recently, S&L holding companies using consolidated supervision that allows examiners to look at all entities and affiliates in the structure. OCC officials and representatives of exempt institutions viewed the current oversight as sufficiently robust. FDIC officials indicated that supervision of the exempt institutions themselves was adequate, but noted that consolidated supervision authorities provide important safety and soundness safeguards. Officials from the Federal Reserve and Department of the Treasury (Treasury) stated that the exemptions should be removed, given that exempt institutions have access to FDIC insurance and the holding companies of most types of exempt institutions are not subject to consolidated supervision. The implications of subjecting exempt institutions and their holding companies to the BHC Act vary.
While many officials from the exempt institutions owned by commercial holding companies said that the institutions would be divested, data suggest that removing the exemptions would likely have a limited impact on the overall credit market, given that exempt institutions' overall market share is small. Views varied on how removing the exemptions would improve safety and soundness and financial stability. Some officials from exempt institutions said that financial stability could be adversely affected by further concentrating market share. Federal Reserve officials noted that institutions that remain exempt are not subject to consolidated supervision but could grow large enough to pose significant risks to the financial system, an issue they plan to continue to watch.
DOE’s responsibility for contractors’ litigation costs has its roots in the early nuclear programs. Since the inception of these programs in the 1940s, the federal government has relied on contractors to operate its nuclear facilities. However, because of the high risk associated with operating these facilities, the agencies responsible for managing nuclear activities—from the Atomic Energy Commission to DOE—included litigation and claims clauses in their management and operating contracts. These clauses provide that litigation expenses are allowable costs under the contracts. In addition, judgments against the contractors arising from their performance of the contracts are reimbursable by DOE. Over the past several years, class action lawsuits have been filed against many past and present contractors responsible for operating DOE’s facilities. In general, these suits contend that the operation of the facilities released radioactive or toxic emissions and caused personal injury, emotional distress, economic injury, and/or property damage. These suits have been filed against the current and former operators of certain DOE facilities throughout the country, such as the Fernald Plant in Fernald, Ohio; the Hanford Site near Richland, Washington; the Los Alamos National Laboratory in Los Alamos, New Mexico; the Rocky Flats Plant in Golden, Colorado; and various other facilities. (App. I lists ongoing class action suits against DOE contractors during fiscal years 1991-93.) DOE has the option of undertaking the defense against such class action litigation on its own; however, it has generally opted to have the contractors defend the case in good faith. As standard practice, DOE has authorized contractors to proceed with their defense and has limited its own involvement to approving the hiring of outside counsel, reviewing billings, and agreeing upon any settlement amounts. 
The cognizant DOE field office is responsible for funding each contractor’s litigation and overseeing the litigation effort. DOE has not maintained complete information on the costs of litigation against present and former DOE contractors. According to officials from DOE’s Office of General Counsel, costs for contractors’ legal defense are budgeted and controlled by each responsible contractor and field office. These officials said that each DOE field office, through its Office of Chief Counsel, is responsible for managing the costs associated with its contractors’ litigation. The officials added that DOE headquarters has not maintained overall cost data because it was not involved in the day-to-day management of these cases. Nevertheless, DOE has collected some data indicating that it is incurring substantial costs for the services of outside law firms. In 1993, a subgroup of DOE’s Contract Reform Team surveyed the Chief Counsels’ offices to determine how much DOE was spending to reimburse its contractors for their legal expenses. According to the data the subgroup collected, DOE contractors paid over $31 million to outside law firms in fiscal year 1992 and almost $24 million during the first 8 months of fiscal year 1993. The subgroup attributed these large costs to “toxic tort” class action lawsuits filed against current and former contractors reporting to DOE’s Albuquerque, Oak Ridge, and Richland operations offices. The costs associated with these class action suits are large, in part, because several of the suits involve multiple contractors and law firms. Many lawyers work on each case, and the monthly costs can exceed $500,000. The In Re: Hanford case, for example, has six former and present DOE contractors as codefendants, and 10 separate law firms are representing them. In just 1 month in 1992, DOE paid for the services of 62 outside attorneys, 25 of whom billed at least $200 per hour, and 44 legal assistants working on the case. 
The cost of these services alone was over $455,000. (See app. II for detailed information on the billings for this particular month.) DOE has incurred additional costs for contractors’ litigation that were not reflected in the data collected by DOE. The most significant of these are costs for establishing data bases. For each of the major class action lawsuits we examined—In Re: Hanford, Cook et al. v. Rockwell/Dow, In Re: Los Alamos, and Day v. NLO—the contractors and the outside legal firms have established data bases of documents and other information. According to DOE officials in the field offices and representatives of the contractors, these data bases provide unique capabilities to identify and retrieve information needed for the contractors’ legal defense. The costs for these data bases increase DOE’s total outside litigation costs substantially. Data obtained from the cognizant Chief Counsels’ offices show that from fiscal year 1991 through fiscal year 1993, over $25 million was spent for developing litigation data bases for these four cases. The data base for the Fernald litigation was the most costly—exceeding $14 million—but the other data bases cost over $2 million each. (App. III contains information on the costs of data bases.) When the fiscal year 1992 costs for data bases are added to the expenses paid to outside law firms during the same fiscal year, the total costs incurred by DOE for its contractors’ legal defense during that fiscal year exceed $40 million. Other costs that should be considered as litigation-related costs include all funds associated with the activities of NLO, Inc., and the in-house legal costs at current M&O contractors. NLO—a former operator of the Fernald Plant—is currently in existence only to manage its legal defense under a postoperations contract. 
From fiscal year 1991 through fiscal year 1993, NLO received $15.7 million from DOE—$8 million for costs incurred by outside law firms, an estimated $2.5 million for developing the litigation data base, and much of the remaining $5.2 million for activities directly supporting the litigation. For example, consultants hired by NLO over this period conducted various projects for the outside law firm, NLO staff assisted in activities related to the litigation, and NLO earned almost $1 million in fees for managing the litigation. Similarly, current M&O contractors incurred in-house costs to monitor and manage ongoing legal activities; however, the portion of these costs related to litigation against the contractors is not known. Contractor officials at Oak Ridge, Sandia, and Hanford all stated that they have lawyers on staff who manage outside litigation activities and in some cases participate in litigation activities. The in-house costs related to these activities, however, were not available. The officials said that data are not maintained on the costs related to the internal efforts associated with such litigation. Legal fees represent the largest and most visible cost associated with DOE contractors' litigation expenses. These costs include the hourly rates charged by the outside attorneys and other expenses incurred by the law firms in defending the contractors. However, DOE exercised little control over these costs. Specifically, DOE did not establish any criteria or guidelines for allowable costs, and it did not develop procedures requiring detailed reviews of law firms' bills. As a result, DOE paid for legal expenses that would not be allowed under criteria established by certain other federal organizations. Cost guidelines are necessary for contractors and law firms to know what costs will or will not be reimbursed; however, DOE had not developed and implemented such cost criteria.
Two federal corporations—the Federal Deposit Insurance Corporation (FDIC) and the Resolution Trust Corporation (RTC)—have developed cost guidelines for outside counsel. These corporations’ guidelines clearly specify what costs will be allowable and at what rates. These guidelines appear to be consistent with an opinion issued in December 1993 by the American Bar Association. The association’s opinion—although nonbinding—suggests that law firms can recoup only reasonable and actual costs for services. Comparing DOE’s reimbursements with the corporations’ guidelines, we found that DOE had paid significantly more than these guidelines allow for professional fees, duplication and facsimile costs, travel costs, and office overhead expenses. The corporations require that discounts on fees for legal services be sought in all cases. Their guidelines direct law firms seeking to represent the corporations to offer a discount on their rates. A corporation official stated that FDIC receives at least a 5-percent discount. Most of the law firms representing FDIC discount their rates by 10 percent—some firms, by as much as 20 percent. DOE, however, did not require its contractors to seek discounts on professional fees from outside law firms. Consequently, few discounts were obtained. Only 2 of the 16 law firms’ bills we examined contained any discounts. If DOE were to adopt this guideline, it could obtain substantial cost savings, as the following example shows. One law firm is representing DOE contractors in three separate class action suits. Over a 3-year period, the firm received $8 million in professional fees for its work on these cases. If a 5-percent discount had been applied, DOE could have saved over $400,000. At a 10-percent discount rate, the savings could have been over $800,000. (See app. IV for further examples of the savings DOE could have obtained through discounts on fees.) Law firms charge for certain administrative tasks that they perform for their clients. 
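The potential savings cited in this example reduce to simple arithmetic. The sketch below (illustrative Python; the $8 million figure is the professional fees cited above for one firm over 3 years) reproduces the calculation:

```python
# Illustrative check of the fee-discount savings cited in the example.
# A percentage discount reduces reimbursed fees proportionally.
def discount_savings(total_fees, discount_rate):
    """Amount saved if the given fractional discount had been applied."""
    return total_fees * discount_rate

fees = 8_000_000  # professional fees paid to one firm over 3 years
print(round(discount_savings(fees, 0.05)))  # 400000 -> the "over $400,000" cited
print(round(discount_savings(fees, 0.10)))  # 800000 -> the "over $800,000" cited
```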
One of these tasks is duplicating documents. The corporations’ criteria state that charges for photocopying shall not exceed 8 cents per page. DOE was reimbursing its contractors at a much higher rate. The amounts charged for reproducing documents varied among the DOE contractors’ law firms, ranging from 10 cents per page to 25 cents per page. Gibson, Dunn, and Crutcher charged almost $170,000 for duplicating documents over a 3-year period. For 13 months, the firm charged 25 cents per page, and for 23 months, it lowered the rate to 20 cents per page. Had the firm been allowed to charge only 8 cents per page, the total cost reimbursed by DOE would have been $58,750, a savings of nearly $109,000. Limiting all firms to this rate would have saved almost $425,000. (App. V contains further details on costs for duplicating.) Another administrative task for which DOE was paying high rates is facsimile transmission. An FDIC official stated that this charge is to be billed at the actual cost—the cost of the telephone call. However, several firms representing DOE contractors charged as much as $1.75 per page plus the cost of the long-distance call. For example, the law firm of Gibson, Dunn, and Crutcher was reimbursed by DOE for more than $47,000 in telefax and telecopying charges—in addition to the related telephone charges—over a 3-year period. Travel costs incurred by law firms representing DOE contractors exceeded guidelines set forth by RTC and FDIC. The corporations’ criteria limit travel costs to coach airfare, moderate hotel prices, and federal per diem rates for meals. Travel costs reimbursed by DOE were significantly higher. For example, two firms—Hunton and Williams and Perkins Coie—billed first-class airfare for their senior partners. Additionally, attorneys often were reimbursed for the costs of high-priced hotel rooms. Lawyers from Kirkland and Ellis billed for hotel rooms in Washington, D.C., that cost from $215 to as much as $250 per night. 
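The duplication example can be reconstructed from the dollar figures given. Note that the report does not state a page count; the sketch below infers one from the $58,750 that the 8-cents-per-page cap would have produced, and it likewise treats the actual total as roughly $167,750 (consistent with the "almost $170,000" billed and the "nearly $109,000" savings); both are assumptions, not sourced figures. The arithmetic is done in whole cents to avoid floating-point error:

```python
# Back-of-the-envelope reconstruction of the photocopying example.
CAP_CENTS = 8                # corporations' cap, cents per page
cost_at_cap = 58_750         # dollars DOE would have paid at the cap
pages = cost_at_cap * 100 // CAP_CENTS  # implied page volume (inferred)
actual_cost = 167_750        # dollars actually reimbursed (inferred)
savings = actual_cost - cost_at_cap
print(pages)    # 734375 pages
print(savings)  # 109000 -> the "nearly $109,000" savings
```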
In contrast, the government’s lodging allowance for that city is $113 per night. Additionally, some firms billed for meals costing far more than federal per diem rates. In many cases, the meals cost almost $100 per person. For example, the law firm of Perkins Coie billed for a four-person dinner in New York City costing $95 per person (the federal per diem allowance in this city is $38) and billed for a five-person dinner in Seattle costing $90 per person (the federal per diem allowance in this city is $34). This firm also billed for meal expenses that consisted only of drinks—an expense that is not allowable under federal per diem regulations. Furthermore, some of the meal expenses were incurred for attorneys and staff who were not on travel. One firm—Perkins Coie—billed over $9,000 for expenses labeled as “conference meals” over a 3-year period. Review of the supporting documentation indicates that these expenses were for meals purchased while many of the staff in attendance were not on travel and/or for activities associated with “client development.” In another instance, Crowell and Moring billed not only for the meals of its local attorneys but for the meals of their spouses as well. According to a legal opinion from one DOE operations office, meal expenses for attorneys and staff who are not on travel are not reimbursable. Nevertheless, although such costs were not allowed by contractors within that particular region, they were allowed by other contractors and were reimbursed in full by DOE in other regions. Other costs were incurred and charged to DOE that, under the two federal corporations’ guidelines, are considered to be law firm overhead that should be subsumed within the professional fees. These include costs for word processing services, overtime, utilities and supplies, and legal publications. In many instances, however, DOE allowed these charges. 
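The gap between the billed meals and the federal per diem allowances can be computed directly; the sketch below (illustrative Python, using the two dinners cited above) shows the overage per meal:

```python
# Illustrative per diem comparison for the two dinners cited in the report.
def overage(diners, cost_per_person, per_diem):
    """Amount billed above the federal per diem allowance for one meal."""
    return diners * (cost_per_person - per_diem)

print(overage(4, 95, 38))  # 228 -> New York dinner, dollars over per diem
print(overage(5, 90, 34))  # 280 -> Seattle dinner, dollars over per diem
```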
Although these costs could conceivably, in some cases, be appropriately charged and reimbursed, we found many instances in which the charges were inappropriate. For example, Shea and Gardner billed for purchasing American Bar Association publications, such as a guide to taking depositions. Crowell and Moring marked up its telephone charges 25 percent above the actual cost and its computer research 50 percent above the actual cost. Additionally, according to the federal corporations’ guidelines, expenses for activities conducted by lawyers to develop subject matter expertise are not to be charged to the federal corporations. Instead, law firms must absorb the cost of developing an understanding of specialty issues. In contrast, some law firms—Shea and Gardner and Gibson, Dunn, and Crutcher—billed DOE contractors for staff to attend seminars on toxic/radiation litigation. DOE did not have requirements mandating and facilitating detailed reviews by contractors and/or DOE of the bills submitted by law firms. As a result, the quality of the reviews varied greatly, and some reviews were inadequate. For example, one contractor—Westinghouse Hanford Company—performed an internal audit 2 years into the In Re: Hanford litigation and found that it did not have adequate reviews of the legal bills submitted to it. The audit also revealed that several costs that were not allowable under the company’s own in-house criteria had been paid, such as first class airfares. In another instance, UNC, Inc.—a former contractor at Hanford—never examined detailed billings of its principal law firm and instead approved all of its bills on the basis of a monthly two-page billing summary. These summaries lacked detailed information on the activities that each lawyer had performed; in fact, they did not even specify the number of hours that lawyers had worked on the case. DOE’s review of bills was also inadequate. 
At only one DOE operations office—Oak Ridge—did Chief Counsel officials perform detailed reviews of legal costs before approving bills for payment. This office disallowed numerous costs—including costs for meals charged by lawyers who were not on travel and expenses for seminars—that were allowed by other operations offices. At Albuquerque, few detailed reviews of bills were performed, and when performed, such reviews took place after the bills had been paid. At Richland, bills were approved for payment by the Chief Counsel primarily on the basis of billing summaries, and any detailed reviews were conducted annually or semiannually. In our view, the summaries were not specific enough for a reviewer to determine what the costs were for and whether they were appropriate. Additionally, DOE did not require the bills to be presented in a format that included enough detail to allow a reviewer to understand the basis for the charges. Consequently, even when detailed reviews were performed, many of the charges in the bills could not be adequately assessed. For example, some charges were listed simply for “research” or “reviewing documents,” while others were listed for meetings with specific individuals, but no mention was made of either the purpose of the meeting or the subject discussed. In other instances, activities were cumulated into a daily total and briefly described; this information did not indicate how much time was spent on each activity and whether the time spent was appropriate. Charges for activities performed by attorneys and their staffs might have been questioned if DOE had established adequate review procedures and sufficient criteria for reasonableness. For instance, several firms charged time for staff to prepare monthly bills, review and catalog newspaper articles, prepare security clearance forms, and rearrange or move file rooms. 
Additionally, General Electric hired a public relations firm to analyze trends in the case and passed these costs along to DOE for reimbursement. In our view, these activities were of such questionable benefit to DOE that a detailed review would have raised concerns about the appropriateness of DOE’s paying for them. DOE has recognized that its controls over contractors’ litigation costs are problematic and has taken some actions to improve them. In March 1994, DOE issued guidance on managing litigation, directing its field office Chief Counsels to ensure that the rates charged are reasonable. The guidance also requires that contractors develop for each case a formal understanding concerning, among other things, allowable expenses, billing procedures, and contractors’ reviews of bills. In testimony before this Subcommittee on July 13, 1994, we stated that although these actions represented a step in the right direction, they did not go far enough. The guidance still gave contractors considerable discretion in controlling costs. Given our experience with the way contractors had applied cost controls in the past, we were not convinced that this guidance would ensure that consistent and effective cost controls were developed and applied to all legal bills. Since the hearing, however, DOE’s Office of General Counsel has begun to develop and adopt additional measures to address the problems identified. On August 25, 1994, DOE issued an acquisition letter (No. 94-13) setting forth interim policies for contracting officers to consider in determining whether particular litigation costs are reasonable. The cost guidelines—which became effective for all ongoing class action suits on October 1, 1994—establish limits and terms for the costs that DOE will reimburse to contractors for outside litigation. 
For example, the guidelines specify that costs for duplication are not to exceed 10 cents per page; telephone charges, facsimile transmission costs, and computer-assisted research costs are not to exceed the actual costs; airfare is not to exceed the coach fare; and other travel expenses should be moderate, consistent with the rates set forth in the Federal Travel Regulations. The guidelines also set forth DOE’s policy for reimbursing attorneys’ fees, profit and overhead, and overtime expenses, and they designate specific nonreimbursable costs. Additionally, officials from the Office of General Counsel have met with RTC and FDIC officials to gain insight from their experience in developing systems for auditing bills to determine the reasonableness of both the professional activity and the related expenses. A staff has been assembled in headquarters to develop requirements and procedures for reviewing bills and to conduct detailed review of bills. Chief Counsel staff in regional offices are also developing review procedures that will be coordinated with the headquarters requirements. DOE is still in the initial stages of developing an audit function but plans to have one in place by early 1995. Furthermore, a cost-reporting system is being implemented that will provide monthly reports on all litigation. This reporting system will collect Department-wide cost data in a consistent format. According to DOE’s General Counsel, this system will report all costs, including data base costs and contractors’ in-house costs, within 10 days after the end of each month. DOE plans to compare the actual with the budgeted costs for each case to better ensure that the costs remain reasonable. This system is now operational, although Office of General Counsel officials acknowledge that the data are not yet complete. Finally, DOE is consolidating its legal defense in various cases—a measure with the greatest cost-saving potential. 
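Caps of this kind lend themselves to a mechanical bill-review screen. The sketch below is a hypothetical illustration, not DOE's actual procedure; the function names and sample line-item amounts are invented, while the 10-cents-per-page duplication cap and the actual-cost rule for phone, fax, and computer research come from the acquisition letter described above:

```python
# Hypothetical line-item screen against DOE's interim cost guidelines
# (Acquisition Letter No. 94-13). Amounts are handled in cents where
# exactness matters.
DUPLICATION_CAP_CENTS = 10  # duplication not to exceed 10 cents per page

def allowable_duplication_cents(pages, billed_rate_cents):
    """Reimburse duplication at the billed rate, capped at 10 cents/page."""
    return pages * min(billed_rate_cents, DUPLICATION_CAP_CENTS)

def allowable_passthrough(actual_cost, billed_amount):
    """Phone, fax, and computer research: reimburse no more than actual cost."""
    return min(actual_cost, billed_amount)

# 1,000 pages billed at 25 cents/page: reimburse $100.00, not $250.00
print(allowable_duplication_cents(1000, 25))  # 10000 (cents)
# a fax billed at $21.70 whose underlying phone call actually cost $12.40
print(allowable_passthrough(12.40, 21.70))    # 12.4
```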
The In Re: Hanford case, for example, has six codefendants—each represented by at least one law firm and some by as many as three firms. DOE acknowledges that duplication of effort is likely and, with it, unnecessary costs. To prevent further duplication, DOE informed the codefendants that beginning in fiscal year 1995, it would not reimburse any contractor for the services of any outside counsel other than the law firm selected to serve as lead counsel for the litigation. At the time this report was being completed, a lead contractor had been designated and that contractor—with concurrence from DOE—had selected a lead counsel. DOE estimates that by consolidating, it will reduce its annual outside litigation expenses by nearly 60 percent, saving millions of dollars on this case alone. Office of General Counsel officials estimated that these efforts—establishing cost criteria, implementing an audit function, and consolidating class action cases—would save DOE $5 million to $7 million annually. During fiscal years 1991 through 1993, DOE incurred large litigation costs but, in many cases, did not have the internal controls needed to ensure that these costs were appropriate. At a recent hearing before this Subcommittee, we discussed these problems and, as a result, DOE began to improve its management of contractors’ litigation costs. If DOE’s recent efforts are fully implemented and successful, substantial cost savings could accrue to the government. Additionally, DOE should have cost controls and case management principles in place to ensure that any future lawsuits are handled efficiently. DOE is to be commended for its quick and thorough response to the problems we identified. However, it remains to be seen whether or not these new procedures will be universally implemented within DOE’s field offices and whether or not all contractors will accept and abide by these new procedures. 
We discussed the facts in this report with DOE officials, including the General Counsel and other officials from the Office of General Counsel. They agreed with the facts presented; however, they expressed concern that the tone of the report might lead readers to believe that DOE was not addressing the problems we had identified. They provided comments and information on the actions they are taking to reduce litigation costs and improve cost controls. We have incorporated these comments into the report where appropriate. As requested, we did not obtain written agency comments on a draft of this report. We performed our work between November 1993 and August 1994 in accordance with generally accepted government auditing standards. Appendix VI contains details on the objectives, scope, and methodology of our review. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to the appropriate Senate and House committees; interested Members of Congress; the Secretary of Energy; and other interested parties. We will make copies available to others on request. Major contributors to this report are listed in appendix VII. If we can be of further assistance, please contact me at (202) 512-3841. [Flattened appendix table omitted: it listed each contractor (e.g., NLO, Inc., with 2 attorneys, 7 paralegal/litigation support staff, and contract clerks) and its outside law firms, including Gibson, Dunn, and Crutcher; Williams, Kastner, Gibbs; Helsell, Fetterman, Martin, Todd, and Hokanson; Stoel, Rives, Boley, Jones, and Grey; and Davis, Wright, Tremaine; some data were not available.] On October 29, 1993, the Chairman of the Subcommittee on Oversight and Investigations, House Committee on Energy and Commerce, asked us to review the Department of Energy’s (DOE) expenses for outside litigation. 
After discussions with the Chairman’s office, we agreed to (1) determine how much DOE was spending for litigation to defend its contractors, (2) evaluate whether adequate controls are in place to ensure that all of these costs are appropriate, and (3) assess the efforts being made by DOE to improve its controls over these outside litigation costs. To respond to this request, we met with staff in DOE’s Office of General Counsel in Washington, D.C., to obtain an overall perspective on the litigation activities of the Department’s various contractors, the underlying issues associated with such litigation, and the rationale for DOE’s paying the costs of the contractors’ litigation. Additionally, we selected and visited three of DOE’s operations offices—Albuquerque, Oak Ridge, and Richland—and examined records of the litigation activities and costs incurred in each office. We selected these offices because DOE data indicated that these offices had incurred about 75 percent of the Department’s expenses for contractors’ litigation. To address the first objective, we discussed litigation costs with DOE headquarters and operations office officials. We discussed the types of costs associated with the litigation and the records maintained on these costs. We also obtained and reviewed data covering the period from October 1991 through May 1993 compiled by an internal DOE litigation management task force assessing the costs of litigation. To verify the data on costs for outside legal firms’ services developed by the task force and to attempt to obtain complete cost data for fiscal year 1993, we examined available records at the three operations offices, including the data that were submitted to the task force, supporting documentation, and various other records detailing expenditures for outside legal firms’ services. 
However, we were not able to obtain sufficient data on costs to ensure that the amounts provided to the task force were accurate or to calculate the total costs for fiscal year 1993. In addition, we discussed other costs of litigation with these DOE officials and obtained data from them detailing the costs of developing litigation data bases. We also contacted contractors and law firms responsible for developing and managing the data bases and obtained data on the costs incurred. Furthermore, we discussed in-house costs with contractor officials at all three operations offices. To address the second objective, we (1) evaluated the charges and expenses of the outside law firms engaged by the contractors and (2) assessed the process used by the contractors and DOE to review these costs. We obtained and reviewed the billings of outside law firms involved in four major class action suits: In Re: Hanford, Cook et al. v. Rockwell/Dow, In Re: Los Alamos, and Day v. NLO. We examined the supporting documentation for the various charges, and when the available data were insufficient, we contacted the contractors and/or law firms to obtain information on the rates and charges for activities, or in some cases, we visited the law firms to review documentation supporting the charges. We did not, however, obtain and examine law firms’ internal documents supporting the hourly charges of individual lawyers or legal assistants. To evaluate the reasonableness of the law firms’ charges and expenses, we compared these costs to the guidelines developed and used by the Federal Deposit Insurance Corporation and the Resolution Trust Corporation. These federal corporations use outside law firms to conduct much of their legal work and have had cost guidelines in place for several years to ensure that the expenses they incur for litigation are reasonable. We judged the corporations’ guidelines to be an appropriate benchmark for evaluating the costs incurred by DOE. 
Additionally, we used the American Bar Association’s Formal Ethics Opinion 93-379 as another guide for judging the reasonableness of the law firms’ charges. Finally, we met with a litigation management consultant to obtain further guidance on reasonable and prudent costs to be paid for legal services. To assess the adequacy of the review of the law firms’ billings, we discussed review procedures with each DOE operations office we visited and obtained available documentation that showed evidence of review and comment on the law firms’ charges. In addition, we met with representatives of the contractors—DuPont, Martin Marietta Energy Systems, NLO, UNC, the University of California, and Westinghouse Hanford Company. We discussed review procedures by telephone with Atlantic Richfield Hanford Corporation, Dow Chemical Company, General Electric, and Rockwell International. To keep apprised of DOE’s efforts to develop and implement cost controls over litigation costs, we discussed actions proposed by the agency with officials from the Office of General Counsel at DOE headquarters and the Office of Chief Counsel at the Albuquerque, Oak Ridge, and Richland operations offices. We obtained documents detailing the actions DOE intends to take to better control litigation costs and ensure more effective litigation management. Furthermore, we discussed planned procedures for auditing law firms’ bills with the official responsible for this activity in DOE’s Office of Inspector General. Major contributors to this report: Peter Fernandez; Ernie V. Limon, Jr.; and John E. Cass.
Pursuant to a congressional request, GAO provided information on the Department of Energy's (DOE) efforts to control its litigation costs, focusing on: (1) the amount DOE spends on litigation to defend its contractors; and (2) whether DOE controls are adequate to ensure that these legal costs are appropriate. GAO found that: (1) although DOE cannot accurately determine the total amount it reimburses contractors for their outside litigation costs, preliminary findings show that in 1992, DOE spent about $40 million on its contractor litigation costs; (2) most DOE contractor legal costs are incurred through the hiring of outside law firms; (3) DOE does not have effective cost controls for reimbursing outside legal services; (4) DOE has been billed at higher rates than other federal entities for professional legal fees, travel, word processing, document duplication, and other litigation expenses because it has not effectively overseen contractor payments or developed adequate criteria that define which costs are reimbursable; and (5) DOE efforts to improve its cost controls include issuing specific cost guidelines, instituting procedures for periodically reporting all litigation costs, establishing audit functions that enable it to conduct detailed reviews of legal bills, and consolidating cases involving multiple contractors and law firms to improve case management and reduce costs.
The federal government has held funds in trust for Indian tribes since 1820. Enacted in 1887, the General Allotment Act, also known as the Dawes Act, provided for the division of Indian tribal lands into allotments of up to 160 acres for individual tribal members and families. Subsequently, the Indian Reorganization Act, enacted in 1934 and also known as the Wheeler-Howard Act, ended the allotment of tribal lands and extended indefinitely the period that the federal government would hold allotted lands in trust. Many of these allotments remain in trust today, now jointly owned in common by hundreds and, in many cases, thousands of individual Indians, each with an undivided—or fractionated—interest in the whole parcel. As trustee for tribes and Indians, the Secretary of the Interior is required to account for the revenue generated by each interest (amounting, in some cases, to less than 1 cent per year), invest the trust funds, and provide other trust services to the beneficiaries. The Secretary also is responsible for maintaining official Indian land title and ownership records, managing natural resource assets, and probating estates. Much of this responsibility has been delegated to BIA, which has 12 regional offices and 85 agency offices that are located on or near reservations. Since April 1997, Interior has issued several strategic plans for implementing trust reforms. Concerned that Interior had not achieved the desired improvement in trust management, the Secretary in January 2002 initiated an effort to develop a comprehensive, departmentwide approach for improving Indian trust management. On March 28, 2003, Interior issued the Comprehensive Trust Management Plan, which presented a strategic plan to guide the design and implementation of integrated trust reform efforts. Interior’s performance of fiduciary trust business practices nationwide was documented and reported in the As-Is Trust Business Model Report. 
The information contained in the Comprehensive Trust Management Plan and the As-Is Report is the foundation for the recommendations for reengineered business processes that appear in the To-Be Model—or Fiduciary Trust Model. The Fiduciary Trust Model contains implementation strategies for major business processes, and currently serves as Interior’s guide for trust reform. As a basis for revising the department’s approach for improving Indian trust management, Interior contracted with Electronic Data Systems in 2001 to determine how trust reforms were then being conducted and how they could be improved. The firm’s recommendations included both improvements in trust management and a reorganization of Interior’s agencies carrying out trust management and improvement. In response to these recommendations, the Secretary of the Interior reorganized BIA and OST in April 2003. The reorganization increased OST’s SES positions from 7 to 14 by (1) creating 6 OST regional trust administrators, located at OST’s Albuquerque headquarters, who are responsible for providing account holders with trust services and for overseeing fiduciary trust officers and other personnel in the field and (2) adding an SES position when it realigned OST’s management structure into three divisions. As shown in table 1, OST’s budget has grown from $34.1 million in fiscal year 1997 to $222.8 million in fiscal year 2006. Similarly, OST’s full-time equivalent positions have increased from 245 employees in fiscal year 1997 to 590 employees in fiscal year 2006. While the growth in budget and staff mainly reflects OST’s efforts to implement reforms and its growing responsibility for trust fund management, OST’s funding also supports other Indian-related activities. 
For example, in fiscal year 2006, OST transferred (1) $54.4 million to the Office of Historical Trust Accounting, (2) $34.0 million for implementing the Indian Land Consolidation Act activities, (3) $7.6 million to the Office of Hearings and Appeals, (4) $5.6 million to the Interior Solicitor’s Office to cover costs associated with the Cobell v. Kempthorne lawsuit, (5) $1.3 million to BIA for tribal contract and compact appraisals, and (6) $300,000 to Interior’s Chief Information Officer. OST began funding the Office of Historical Trust Accounting in fiscal year 2001 and activities related to the Indian Land Consolidation Act in fiscal year 2000. Responsibility for Indian land appraisals was transferred from BIA to OST in 2002 and is currently managed by OST’s Office of Appraisal Services. In addition to its trust reform activities, OST is responsible for maintaining trust-related Indian records and developing trust investment strategies for beneficiaries. In 1999, OST created the Office of Trust Records to ensure that Indian records are maintained and safeguarded. In September 2003, Interior signed a Memorandum of Understanding with the National Archives and Records Administration to create a national repository for American Indian records, including fiduciary trust records, in Lenexa, Kansas. OST’s Division of Trust Funds Investment is responsible for managing and investing individual Indian and tribal assets. OST is allowed to invest trust funds only in securities backed by the federal government, including U.S. Treasuries and securities from government-sponsored agencies. OST has implemented several key trust fund management reforms, but OST has not prepared a timetable for completing its remaining trust reform activities or identified a date for its termination under the 1994 Act. 
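Summing the transfers listed above shows how much of OST's fiscal year 2006 budget went to these other Indian-related activities (illustrative arithmetic; the dollar figures are from the report, and the short labels are paraphrased):

```python
# Fiscal year 2006 transfers listed above, in millions of dollars.
transfers = {
    "Office of Historical Trust Accounting": 54.4,
    "Indian Land Consolidation Act activities": 34.0,
    "Office of Hearings and Appeals": 7.6,
    "Solicitor's Office (Cobell v. Kempthorne)": 5.6,
    "BIA tribal contract and compact appraisals": 1.3,
    "Chief Information Officer": 0.3,
}
total = round(sum(transfers.values()), 1)
print(total)  # 103.2 -> nearly half of OST's $222.8 million budget
```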
OST estimates that almost all of the key reforms needed to develop an integrated trust management system and to provide improved trust services will be completed by November 2007, but OST believes some additional improvements are important to make. In particular, once the validation of BIA’s new trust asset and accounting management system (TAAMS) leasing information for Indian lands with recurring income is completed, BIA and OST plan to validate the leasing information for Indian lands that do not have recurring income. The Special Trustee expects these validation activities will be completed by December 2009. Despite the 1994 Act’s requirement, OST has not proposed a termination date for the office once trust reforms are completed. The Special Trustee noted that Interior will need OST’s staff to continue to perform their functions after trust reforms are completed, whether or not OST is terminated, because OST was given responsibility for managing trust fund operations and other trust-related activities after the 1994 Act was enacted. The Special Trustee also added that OST will reduce its expenditures once key trust reforms are completed by terminating contracts, but he believes that OST’s current staff is about the right size needed to manage OST’s operations after trust reforms are completed. However, because OST has not developed a workforce plan that reexamines the expenditures and staffing levels needed for trust fund operations once trust reforms are completed, additional opportunities may exist to further reduce expenditures and OST staff. OST has made important progress in implementing trust fund management reforms and plans to complete almost all of the key reforms by November 2007. Specifically, OST is responsible for trust reforms associated with the trust funds accounting system and the overall integration of the various trust reform automated systems. 
BIA and OST are responsible for trust reforms associated with their implementation of TAAMS for managing land title records and leasing activities for Indian lands. NBC is responsible for developing a management system for Indian land appraisals. OST is responsible for implementing the following trust reforms: Trust Funds Accounting System (TFAS). In March 1998, OST awarded a contract to SEI Investments to use a modified version of its commercial trust accounting system that provides basic collection, accounting, investing, disbursing, and reporting functions. TFAS replaced a module in BIA’s Integrated Records Management System and two OST systems, which could not fully perform trust accounting functions. TFAS was deployed in August 1998 and was fully operational in May 2000. OST continues to contract with SEI Investments at a cost of about $14 million per year for operations and general maintenance, which includes system upgrades twice annually. TFAS is an accounting and investment system that enables the automated production of account statements for individual Indians and tribal account holders. It also allows, for example, automated trade settlements, automated payments of financial asset income, daily securities pricing, and automated reconciliation. In addition, landownership and leasing accounts will be included in TFAS as part of BIA’s and OST’s TAAMS conversion project to ensure that both systems contain accurate and complete information. Trust Funds Receivable. In 2004, OST awarded a contract to Bank of America to centralize the collection of trust payments through a single remittance-processing center, also known as a lockbox, to minimize the risk of loss or theft. Under phase I of the new system, which became effective in October 2005, trust payments are sent to the processing center in Prescott, Arizona, for deposit into trust fund accounts. 
Previously, BIA and OST personnel in agencies for each of the 12 regions collected trust payments for trust fund account holders and then mailed or deposited the payments. Phase II of this project is to have all collections and distributions automated in TFAS. However, implementation requires the completion of the validation of the land title and leasing data in TAAMS. According to OST officials, full automation of all collections and distributions is scheduled for November 2007. OST officials said that two agencies in BIA’s Southern Plains region completed Phase II by the end of June 2005; the remaining agencies in BIA’s Southern Plains region and one agency in BIA’s Eastern Oklahoma region completed Phase II by the end of January 2006. Several agencies in BIA’s Great Plains region completed Phase II by the end of June 2006; the remaining agencies in BIA’s Great Plains region and several agencies in BIA’s Northwest region completed Phase II by the end of August 2006. BIA’s Rocky Mountain region completed Phase II by the end of July 2006, and BIA’s Navajo region and several agencies in BIA’s Western region completed Phase II by the end of September 2006. In addition, OST has completed its desktop procedures for handling the receipt of trust funds, and BIA is completing its desktop standardization procedures, with some assistance from OST. Trust Beneficiary Call Center. In December 2004, OST established the Trust Beneficiary Call Center, a centralized call center in its headquarters office in Albuquerque, New Mexico. Through a toll-free telephone number, the call center provides timely responses to beneficiaries’ questions and allows them to access account information. In addition, the call center operators and staff have recently received training and access to TAAMS through OST’s trust portal to enable them to better answer questions about beneficiaries’ assets. 
If a beneficiary’s question cannot be answered, the call center operator is to refer the question to an OST Fiduciary Trust Officer, generally colocated at the BIA field agencies, to research and respond accordingly. The call center was fully operational by December 2005. When OST established the call center, it redirected calls from preexisting toll-free telephone numbers at BIA field agencies. OST officials told us that the Trust Beneficiary Call Center has helped to relieve some of the workload from OST and BIA staff in the field. OST data show that, as of July 2006, the call center had received over 135,000 calls from beneficiaries, with a first-line resolution rate of about 89 percent. Trust Portal. OST completed the implementation of its trust portal in May 2006. OST’s trust portal provides employees with a single point of access to applications and other resources, such as the trust funds receivable system and the intranet. Currently, the trust portal is available to OST employees and some BIA employees. According to an OST official, various contractors developed the trust portal and OST staff maintain it. Risk Management Program. Since 1999, OST has contracted with CD&L to develop and refine the risk management program for establishing management controls to monitor and evaluate the effectiveness of Interior’s trust operations. The risk management program has evolved over the past few years—the original risk management product was a stand-alone compact disk application that provided an assessment tool to evaluate OST’s business operations. Since then, a Web-based risk management tool, the RM-Plus tool, has been developed to facilitate data collection and reporting for all Interior bureaus and offices with Indian trust responsibilities. OST implemented the RM-Plus in August 2004 and has contracted with Chickasaw Nation Industries (CNI) to operate and maintain the tool. 
BIA used the RM-Plus tool in 2006 to produce its financial assurance statement at the Southern Plains pilot location. OST officials said that additional revisions are being made to the RM-Plus tool in response to the new requirements in the Office of Management and Budget’s Circular A-123 for ensuring the accountability and cost-effectiveness of agency programs. The RM-Plus is currently being revised to incorporate the circular’s requirements and is scheduled to be completed by March 2007. If other Interior bureaus and offices with trust responsibilities decide to use the RM-Plus tool, OST will assist them by providing advice and access to the RM-Plus tool. BIA and OST are implementing the following trust reforms to develop centralized systems for managing land title records and leasing activities as well as managing and tracking probates for Indian lands: TAAMS. In December 1998, Interior awarded a contract to Artesia to develop TAAMS, a centralized system with two components for managing Indian trust assets: the TAAMS land title system and the leasing module. Over the years, Artesia was acquired by a succession of contractors; the TAAMS contract is currently held by CGI-AMS. BIA’s TAAMS land title system maintains both current and historical titles—some of these historical titles in the system date back to the original land grant. This system was completed in January 2006. The TAAMS leasing module tracks leases of Indian assets. BIA and OST are currently converting leasing data from BIA’s old legacy systems to TAAMS and integrating TAAMS with TFAS to ensure that both systems have accurate and complete title and leasing information. As a region’s system is converted, OST will provide beneficiaries with asset statements that identify the source of the funds and a listing of assets owned in that region and any active encumbrances, as required by the 1994 Act. 
Until the conversion is completed, the statements that beneficiaries receive include only information on account balances and account transactions. Before leasing data are converted into TAAMS, BIA’s Land Titles and Records Offices and OST—primarily through a contract with CNI—are implementing the data quality and integrity (DQ&I) project to verify the completeness and accuracy of the TAAMS title and leasing information for Indian lands. As part of the verification, the DQ&I teams compare the TAAMS information with the information contained in the BIA region’s legacy realty system for land tract allotments with recurring income. For each land tract allotment for which the owner(s) and the interest they own do not match, the DQ&I teams compare the TAAMS information against source documents to identify (1) conveyances of title through probate records, deeds, and gift conveyances and (2) active encumbrances, including lease permits, rights of way, and timber sale agreements. This verification is scheduled to be completed in all BIA regions by October 1, 2007, covering land tracts with recurring income for which the legacy lease and title systems do not match. OST also plans to verify the accuracy of the land and leasing records for which TAAMS and the legacy realty system have matching information by comparing the TAAMS information with source documents for a sample of these records. OST and BIA plan to verify title and leasing data for tracts of land without recurring income after October 2007, but a schedule for implementing and completing this work has not yet been developed. OST officials noted that the DQ&I project is labor-intensive. The land validation took about 1 hour per tract in BIA’s Southern Plains region, which has about 12 owners per tract. This validation requires more time in BIA’s Great Plains region, which has about 32 owners per tract, and in BIA’s Rocky Mountain region, which has over 100 owners for some tracts. 
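To illustrate the tract-by-tract comparison the DQ&I teams perform, the following sketch flags land tracts whose ownership records differ between two systems. The record layout, tract identifiers, and ownership fractions shown here are hypothetical and are not drawn from TAAMS or the legacy realty systems.

```python
# Illustrative sketch of the DQ&I comparison: for each land tract, the owners
# and their fractional interests recorded in TAAMS are compared with the BIA
# region's legacy realty system; mismatches are flagged for verification
# against source documents (probate records, deeds, encumbrances).

def find_mismatched_tracts(taams, legacy):
    """Return tract IDs whose owner/interest records differ between systems.

    Each argument maps a tract ID to a dict of {owner_id: fractional_interest}.
    """
    mismatched = []
    for tract_id in sorted(set(taams) | set(legacy)):
        if taams.get(tract_id) != legacy.get(tract_id):
            mismatched.append(tract_id)  # to be checked against source documents
    return mismatched

# Hypothetical example records.
taams = {"T-001": {"A": 0.5, "B": 0.5}, "T-002": {"C": 1.0}}
legacy = {"T-001": {"A": 0.5, "B": 0.5}, "T-002": {"C": 0.75, "D": 0.25}}
print(find_mismatched_tracts(taams, legacy))  # ['T-002']
```

Because validation time grows with the number of owners per tract, this kind of per-tract, per-owner comparison helps explain why the effort took about an hour per tract in the Southern Plains region and longer in regions with more fractionated ownership.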
Probate Case Management and Tracking System. BIA used a modified off-the-shelf software program to develop the probate case management and tracking system, also known as ProTrac, for use by BIA, OST, and Interior’s Office of Hearings and Appeals to manage and track probate cases from initiation to closing. BIA constructed the ProTrac database from manual records, spreadsheets, and trust fund records and, according to a BIA official, has verified its accuracy. BIA is currently developing a paperless version of ProTrac that is scheduled to be implemented by June 2007. NBC is implementing the following trust reform to improve the management of Indian land appraisals: Appraisal Management System. NBC is working with OST to adapt its appraisal request and review tracking system to develop the Indian trust appraisal request system. This new system will centralize the appraisal process and track appraisal requests across Indian country, including the period of time it takes to process a request. NBC and OST completed pilot testing the appraisal management system in the Western region in October 2006. OST estimates that the appraisal management system will be fully implemented by March 2007. OST and BIA managers have overseen the progress of each of the key trust reforms scheduled for implementation by November 2007. OST managers also plan to implement two additional trust reforms. First, the managers plan to verify the accuracy and completeness of TAAMS information for (1) a statistical sample of the tracts of land for which the data in TAAMS and the BIA regional legacy systems match and (2) tracts of Indian land without recurring income. The Special Trustee estimates that this work will be completed by the end of 2009. Second, the OST managers plan to work with BIA to replace the oil and gas distribution system within BIA’s Integrated Records Management System that tracks oil and gas revenue from Indian lands. 
The new system will, among other things, interface with TFAS and the Minerals Management Service’s system. This system is estimated to cost $2.5 million per year and to be implemented by December 2009. Furthermore, Interior is exploring the conversion of Land Title Mapper to the department’s National Integrated Lands System for standardization purposes. The Land Title Mapper uses satellite imagery and geographic information systems to link the data in the integrated computer system with the physical site. The Special Trustee said the mapper could be completed by 2009 or 2010 and noted that, while the mapper is not a component of the 1994 Act’s trust reforms, it would provide an important service to trust account beneficiaries. Additionally, as trust reforms are completed, OST will conduct employee training, promulgate trust-related regulations, and prepare internal procedures and handbooks. The 1994 Act directed the Special Trustee, within 1 year of appointment, to provide the Congress with a comprehensive strategic plan that, among other things, identifies a timetable for implementing the plan’s trust reforms and a date for OST’s termination once reforms have been implemented. However, the Special Trustee has yet to provide the Congress with a timetable for completing the remaining trust reform activities and a date for OST’s termination, even though OST’s most recent strategic plan—the Comprehensive Trust Management Plan, issued in March 2003—stated that OST would be able to forecast a date for termination within the next 14 months. The lack of a timetable for completing the remaining trust reforms has hindered the ability of the Congress, tribal organizations, and the public to fully assess the status of OST’s trust reforms or to plan for trust fund operations once reforms are completed. 
The 1994 Act includes a sunset provision for OST but allows the Special Trustee to recommend to the Congress that OST continue operations if it is needed for the efficient discharge of Interior’s trust responsibilities. The Special Trustee told us that Interior will need OST’s staff to continue to perform their functions after trust reforms are completed, whether or not OST is terminated, because the Secretary of the Interior transferred additional staff and responsibilities to OST for managing tribal and individual Indian trust fund accounts and providing other trust services after the passage of the 1994 Act. Specifically, in response to direction in the conference report accompanying Interior’s fiscal year 1996 appropriations bill, Secretarial Order 3197 transferred the Office of Trust Funds Management and other financial trust services from BIA to OST. Subsequently, the Secretary transferred BIA’s land appraisal staff to OST. If OST is terminated, it is unclear where OST responsibilities—including trust fund management and accounting operations, beneficiary services, trust records management, and land appraisals—will be transferred. The Special Trustee told us that OST had decided to use contractors, rather than hire additional OST staff, to implement many of the trust reforms as a way to minimize the size of its permanent staff—the contracts will end once key trust reforms are completed. The Special Trustee also said OST’s SES positions will be reduced from 14 to 13 in the near future, and he noted that Interior is studying whether efficiencies might exist by combining the Chief Information Officer positions in BIA and OST (see fig. 1 for OST’s current organizational chart and SES positions). However, the Special Trustee believes the size of OST’s staff, including the number of SES positions, is about the right size needed to manage OST’s future operations. 
OST has not developed a workforce plan that reexamines the expenditures and staffing levels needed for trust fund operations—including managing and accounting for trust funds, providing trust services, maintaining trust records, and conducting land appraisals—once trust reforms are completed. The following opportunities may exist to realign or further reduce expenditures and staffing levels:

- The Trust Program Management Center, which is responsible for implementing trust reforms, currently has 23 staff whose work will be completed when trust reforms are implemented. However, one OST manager noted that, in some cases, the staff members responsible for implementing a given reform were then reassigned to the OST office with operational responsibilities to ensure continuous improvements are made.

- OST currently has 131 accounting technicians located in many of BIA’s field agencies whose responsibilities for processing the collections and disbursements of account funds will decrease once trust reforms are completed and accounting functions are automated. However, OST managers noted that it is important to have the accounting technicians in the field to perform account maintenance and research accounts. In addition, a BIA manager noted that many accounting technicians may still be needed to handle checks that might be given to a local BIA office instead of being mailed to OST’s lockbox facility in Prescott, Arizona. Regardless, no plans have been developed to determine either the appropriate number of accounting technicians needed to carry out future operations or their roles and responsibilities.

- The Deputy Special Trustee for Field Operations, the six Regional Trust Administrators, and the Fiduciary Trust Officers have been actively involved in implementing trust reforms by coordinating DQ&I and other activities.
It is unclear whether seven SES positions will continue to be needed to provide tribal and individual Indian account holders with trust services and to oversee field operations once trust reforms are completed, especially with OST’s 52 Fiduciary Trust Officers generally colocated in BIA’s field agencies and with the Trust Beneficiary Call Center now in place. However, the Special Trustee noted that each of the Regional Trust Administrators has trust banking or legal expertise for providing tribal and individual Indian account holders with important services, and the administrators will expand their outreach to trust account holders as the reforms are completed. Since its inception, OST has relied on contractors to perform many of its trust reform activities as a way to minimize the size of its permanent staff. In fiscal years 2004 and 2005, OST obligated nearly 21 percent of its appropriated funds to contracting. The trust reform activities performed and products provided by the nearly 350 firms with which OST has contracted vary widely. About 66 percent of contracting dollars from fiscal years 2004 and 2005 went to 2 firms. Since 2003, OST has relied primarily on NBC to award and manage contracts. In a May 2006 report, Interior’s Office of Inspector General found that senior OST managers had created an appearance of preferential treatment of a contractor in violation of the standards of ethical conduct. In response, the Special Trustee required that all OST employees in grades GS-12 and above complete a special 2-hour ethics training course, in addition to the annual mandatory ethics training. OST has relied extensively on contractors to perform many of its trust reform activities. During fiscal years 2004 and 2005, OST spent about $89.7 million, or nearly 21 percent, of its total appropriated funds on contracts. 
Because 48 percent of these appropriated funds were transferred to other offices, such as the Office of Historical Trust Accounting, the amount OST spent on contracting represented nearly 40 percent of its available funding for these 2 years. During this period, OST paid about $58.8 million, or 66 percent, of these funds to 2 of the nearly 350 firms it used—CNI received $31.1 million and SEI Investments received $27.7 million. (See table 2 for OST’s obligations to its 10 leading contractors.) CNI provides a variety of trust reform work for OST, including risk management, trust data cleanup and encoding, and the development of policy and procedures manuals. Most of the contracting with CNI, an Indian-owned 8(a) small business, was based on an indefinite delivery, indefinite quantity contract. (See app. II for a description of the work that CNI performed under each task order.) An advantage of using this type of contract is that contract task orders can be awarded quickly because there is no requirement for competition. OST also pays SEI Investments about $14 million a year to operate and maintain a version of its commercial trust fund accounting system adapted to meet OST’s needs. Table 3 shows the 10 leading product or service types for which OST used contractors. Most of OST’s obligations to contractors, about $30.3 million, were for data processing and telecommunications services. For example, the DQ&I project for ensuring the accuracy and completeness of the TAAMS database focuses on (1) assisting BIA with document encoding into the trust systems, (2) validating and correcting critical data elements to their respective source documents, and (3) implementing postquality assurance processes. Other major data processing and telecommunications services include developing OST’s Trust Beneficiary Call Center, identifying the owners of whereabouts unknown accounts, and developing risk management processes. 
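As a cross-check, the percentages reported for fiscal years 2004 and 2005 can be reproduced from the dollar amounts in the text. The total appropriation below is inferred from the reported 21 percent share rather than taken directly from the report.

```python
# Cross-check of the contracting percentages reported for fiscal years 2004-2005.
contract_spending = 89.7       # $ millions obligated to contracts over the 2 years
share_of_appropriation = 0.21  # "nearly 21 percent" of total appropriated funds
transferred_share = 0.48       # share of appropriated funds transferred to other offices

# Total appropriation inferred from the 21 percent figure (~$427 million).
total_appropriated = contract_spending / share_of_appropriation
# Funds OST retained after transfers to other offices.
available_funds = total_appropriated * (1 - transferred_share)
print(round(contract_spending / available_funds * 100))  # ~40 percent

# Obligations to the two leading contractors.
cni, sei = 31.1, 27.7
print(round(cni + sei, 1))                           # 58.8 ($ millions)
print(round((cni + sei) / contract_spending * 100))  # 66 percent
```

The computed figures match the report's "nearly 40 percent" of available funding and the "$58.8 million, or 66 percent" paid to the two leading firms.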
Another major service or product type for which contracting funds were allocated was financial services, at about $28 million. About 99 percent of these funds went to SEI Investments to operate and maintain TFAS. Contractors also provided products and services to OST that were not directly related to trust reform, such as supplying office furniture or providing guard and security services. As trust reform activities are completed, OST plans to reduce funding for contracting accordingly. For example, OST’s fiscal year 2007 budget request proposed to reduce funding by about $4.9 million as a result of the completion of certain contract efforts, including the following reductions:

- $1,400,000 from the Office of Trust Accountability for contract costs related to defining, developing, facilitating, and delivering trust training programs;

- $1,050,000 from the Office of Trust Accountability for contract costs related to the development of policies and procedures and upgrades of systems for the reengineering of trust processes;

- $885,000 from the Office of Trust Accountability for contract costs related to the modeling of business practices for the purposes of risk management;

- $675,000 from the Office of Trust Review and Audit for contract costs related to the development of the Indian Trust Examiner certification; and

- $425,000 and $450,000 from the Offices of Field Operations and Trust Services, respectively, for contractors that were providing accounting services, such as data cleanup and encoding.

Prior to 2001, OST relied on NBC to provide contracting services through an interagency agreement. However, at OST’s request, Interior delegated contracting authority to OST in January 2001. This delegation was conditioned on (1) the retention of authority by Interior’s Office of Acquisition and Property Management to oversee and approve specified actions and (2) a subsequent evaluation of OST’s operations. 
In March 2002, the Office of Acquisition and Property Management conducted an acquisition management review that found several problems with OST’s contracting operations. The review team said that many of the problems it found could easily be fixed, and noted that OST’s contracting office was not fully staffed and was still experiencing “growing pains.” The review team’s draft report, which had three broad recommendations, was provided to OST for comment, and OST responded in June 2002. However, the report was never issued in final form. Subsequently, in July 2003, OST conducted its own study to (1) evaluate the functioning of OST’s contracting office, (2) assess customer satisfaction with contracting services provided, and (3) determine the feasibility (including a cost/benefit and qualitative analysis) of outsourcing acquisition services to either NBC or another Interior office. The internal review found that, although the contracting office had made substantial improvements in response to the acquisition management review, the office still was not operating as effectively as it could. On the basis of proposals received from organizations that provide contracting services and a qualitative evaluation of these organizations, OST found that NBC’s branch in Denver, Colorado, offered the best value for providing contracting services for OST. As a result, OST signed a 5-year interagency agreement with NBC’s Denver branch to provide contracting services beginning on October 1, 2003. NBC’s headquarters conducted an acquisition management review of NBC Denver’s contracting practices in April 2005 and found that, overall, the office was highly effective in providing contracting services. In 2004, OST also began using NBC’s branch in Fort Huachuca, Arizona, because that branch is responsible for managing the indefinite delivery, indefinite quantity contract with CNI, as we previously discussed. 
The contract had been originally awarded to CNI on a sole-source basis, which is allowable under Small Business Administration regulations to provide special procurement advantages to businesses owned by Indian tribes that participate in the 8(a) program. OST has used the contract by placing task or delivery orders for implementing several of the trust reforms. In addition to funding contracts for its own trust reform activities, such as TAAMS, BIA has also administered contracts for OST. For example, since fiscal year 2005, BIA has served as the contracting office for a contract with CD&L for risk management. A BIA official stated that this contract is set to expire in December 2006. Finally, GovWorks, one of several federal government franchise funds designated by the Director, Office of Management and Budget, also has provided contracting services for OST. In July 2003, Interior’s Office of Inspector General received allegations that senior OST officials had given CD&L favorable treatment in awarding contract work. The Inspector General’s May 2006 report found that senior OST officials created an appearance of preferential treatment of CD&L, in violation of both the Standards of Ethical Conduct for Employees of the Executive Branch and an internal OST memorandum directing “Arms Length Dealings with Contractors.” The report documents that over several years, OST awarded and continued to extend, without competition, a contract with CD&L for trust fund accounting and risk management services while, at the same time, senior OST officials engaged in extensive outside social activity and exchanged gifts with CD&L executives. The report also stated that OST contract personnel felt pressured by these senior OST officials to continue to award work to CD&L. The Inspector General referred the matter to Interior to take appropriate administrative action and to review the performance of the CD&L contract. 
In response, the Special Trustee has required that all OST employees at grades GS-12 or above take a special 2-hour ethics training course. The Special Trustee stated that he was satisfied with CD&L’s trust accounting and risk management services. From January 2001 through September 2003, OST had procurement authority, and its in-house staff serviced OST’s contracts. In February 2004, after the contracting function was turned over to NBC’s Denver branch, OST attempted to get a follow-on sole-source contract with CD&L for the risk management program. OST officials were eager to get a follow-on contract to meet a court-ordered June 2004 deadline for implementing the risk management system for all agencies involved with trust records. However, due to a lack of documentation to support a valid justification and because of prior apparent improprieties, NBC officials refused to award a follow-on sole-source contract. Rather than wait 4 to 5 months to award a new contract under the competitive bidding process at NBC’s Denver branch, OST officials went to BIA and placed an order under the General Services Administration’s Mission Oriented Business Integrated Services program, which required a shorter time period to get a contract awarded. The order was placed with CNI in April 2004, and CNI subsequently hired CD&L as a subcontractor through September 2004 to continue the risk management design work. In January 2005, a competitive contract for additional risk management work was awarded to CNI and CD&L, with BIA as the contracting office. OST is in the final stages of implementing the trust fund management reforms that the 1994 Act required. However, the Special Trustee has not provided the Congress with a timetable for completing these reforms, as required by the act. Without a timetable, the Congress cannot readily oversee OST’s implementation of the trust reforms or plan for trust fund operations once reforms are completed. 
OST also has not developed a plan for future trust fund operations once reforms are completed. Whether or not OST is terminated, the Special Trustee believes that OST’s staff will need to continue to perform their functions after trust reforms are completed because, after the passage of the 1994 Act, the Secretary of the Interior transferred to OST the Office of Trust Funds Management and other offices and personnel responsible for trust fund operations. In addition, OST has not developed a workforce plan that reexamines the responsibilities and needs for trust fund operations. While the Special Trustee plans to reduce OST’s budget by terminating contracts as reforms are completed, he believes that OST’s current size is about right for trust fund operations once reforms are completed. However, a reexamination of OST’s workforce needs might identify opportunities for realigning or further reducing expenditures and staffing levels because, for example, certain job responsibilities may decrease once trust reforms are completed and accounting functions are automated. To improve congressional oversight of the trust reforms and ensure that trust fund accounting operations, once implemented, are economically staffed, we recommend that the Secretary of the Interior direct the Special Trustee to take the following three actions:

- Provide the Congress with a timetable for completing the trust fund management reforms.

- In anticipation of completing the trust reforms, provide the Congress with a plan for future trust fund operations, including, if the decision is made to terminate OST, a determination of where these operations will reside.

- As trust reforms are completed and contracts are terminated, develop a workforce plan that reexamines and proposes staffing levels and funding needs.

We provided Interior with a draft of this report for its review and comment. 
In its written response, Interior agreed with our recommendations, stating that it expects to have a timetable by late June 2007 for implementing the remaining trust reforms, including a date for the proposed termination or eventual disposition of OST. (See app. III.) However, Interior disagreed with the number of key reforms we identified and attached to its letter a list of 47 additional reforms that OST has completed. We reviewed the 47 reform efforts on Interior’s list and, while they are important activities for the implementation of OST’s trust reforms, we believe they are not key components of OST’s integrated information system that interfaces the trust funds accounting system with BIA’s land title records and asset management systems for Indian lands. Accordingly, we did not revise our report. In addition, Interior provided comments to improve the draft report’s technical accuracy, which we have incorporated as appropriate. To examine OST’s progress in implementing the American Indian Trust Fund Management Reform Act of 1994, we reviewed (1) the 1994 Act and its legislative history; (2) Interior’s appropriations legislation; and (3) relevant Interior documents, including secretarial orders and OST’s March 2003 Comprehensive Trust Management Plan and prior strategic plans that provide the basis for OST’s current reform efforts. We also reviewed various documents showing OST’s progress in implementing trust reforms and interviewed OST and BIA officials regarding the status of trust reform efforts. However, we did not analyze the adequacy of OST’s efforts to ensure that the reforms will result in an integrated computer system with complete and accurate information. 
In addition, to gain insight into the concerns that tribal organizations have expressed about OST’s trust reform performance, we interviewed executives of the Intertribal Monitoring Association on Indian Trust Funds, the National Congress of American Indians, the Great Plains Tribal Chairman’s Association, the United South and Eastern Tribes, and the Affiliated Tribes of Northwest Indians. Although the tribal organizations we selected reflect some variation in geography and their members include numerous individual Indian tribes, our selections were not intended to be representative of all tribes. To examine OST’s use of contractors in implementing its trust reforms, we obtained specific data elements for fiscal years 2004 and 2005 from the General Services Administration’s FPDS-NG database. These data elements include the amount obligated, the types of goods or services purchased, and various vendor characteristics. FPDS-NG does not include (1) assistance actions, such as grants and cooperative agreements; (2) imprest fund transactions, training authorizations, and micropurchases valued at $2,500 or less that were obtained through the use of a government purchase card; (3) interagency agreements with other federal agencies and organizations; or (4) actions involving transfer of supplies within and among agencies. Finally, total dollars for fiscal year 2006 are incomplete and were not included in this report. To ensure the completeness and accuracy of the FPDS-NG contracting data, we examined NBC contracting documents and interviewed contracting officers at BIA and NBC’s Denver and Fort Huachuca branches as well as selected contracting officer’s technical representatives at OST. We obtained data from FPDS-NG by searching on OST as the funding agency. 
However, because NBC officials told us this field was not always completed, we also obtained FPDS-NG data by searching on NBC's contracting office and identifying, by the product or service description, the contracts most likely associated with trust reform efforts. In addition, we compared these data with NBC's procurement tracking system and, where we found discrepancies, corrected the FPDS-NG data to ensure its completeness. On the basis of our testing and correction of the FPDS-NG data, we are sufficiently confident of the reliability of the data we are reporting. Furthermore, we reviewed the report and associated workpapers of Interior's Office of Inspector General regarding allegations that senior OST officials had given CD&L preferential treatment in contracting for risk management services. To assess the performance awards and retention allowances that SES officials at OST had received, we analyzed data for fiscal years 2001 through 2005 from the Office of Personnel Management's Central Personnel Data File, which contains records for most federal employees and is the primary governmentwide source of information on federal employees. Specifically, we examined the number and dollar amount of performance awards and retention allowances provided to OST and compared them with those of other Interior bureaus and other federal agencies. In addition, we obtained documents from Interior's Minerals Management Service, which is responsible for providing OST with human resources support services—including (1) processing performance awards and retention allowances provided to SES officials at OST and (2) ensuring compliance with the appropriate procedures for determining such awards and allowances. We conducted our review from February 2006 through October 2006 in accordance with generally accepted government auditing standards.
As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Secretary of the Interior, the Special Trustee for American Indians, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others upon request. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The Department of the Interior (Interior) has provided a retention allowance to one Senior Executive Service (SES) manager at the Office of the Special Trustee for American Indians (OST)—the Principal Deputy Special Trustee. In addition, 7 of the 13 SES managers currently at OST received at least two major awards in 2 or more years. However, from fiscal years 2001 to 2005, the average performance award amounts that OST's SES managers received were generally lower than the average amounts provided by other bureaus and offices within Interior and by other federal agencies. Interior's Executive Resources Board (ERB), currently chaired by the Secretary of the Interior and comprising senior Interior managers, made the final determination on all performance awards and retention allowances provided to SES managers. In each calendar year from 1999 through 2005, Interior provided OST's Principal Deputy Special Trustee with a retention allowance because, according to agency justifications, her historical knowledge and managerial ability are needed to ensure the success of Interior's trust oversight and reform efforts.
Specifically, from 1999 to 2005, Interior's ERB reviewed and approved the justification for the retention allowance, which raises the Principal Deputy Special Trustee's total compensation to the maximum allowable for SES employees—except in 1999 and 2005, when her compensation was slightly under the maximum allowable. According to officials of Interior's Minerals Management Service, retention allowances are reserved for special talent and have been provided to only two Interior SES managers—the other SES manager received a retention allowance in 2002 and 2004. In each year, the Principal Deputy Special Trustee's retention allowance was lower than the maximum amount provided to an SES manager across all other federal agencies (see table 4). The Special Trustee stated that as OST's trust reforms are completed, the total compensation provided to the Principal Deputy Special Trustee will be reevaluated. Seven of OST's SES managers received at least two major awards—a performance award, a special act award, an individual cash award, or a time-off award—in 2 or more years, as follows:

The Principal Deputy Special Trustee, who has been in the SES since 1993, received three major awards from fiscal years 2000 to 2006. Specifically, the Principal Deputy Special Trustee received two time-off awards of 80 hours each in fiscal years 2000 and 2004 and a performance award of about $9,700 in fiscal year 2006. In addition to these major awards, the Principal Deputy Special Trustee received the Presidential Rank Award in 2002, one of the government's most prestigious awards. While the Principal Deputy Special Trustee received cash with the award, the full cash amount could not be provided in 2002 because her total compensation was at the maximum allowable for a federal employee. As a result, she received part of the award in 2002 and the rest of the award in 2003.
A manager, who has been in the SES since 1996, received eight major awards from fiscal years 1999 through 2006, including at least one award in 7 of the 8 years. Specifically, in fiscal years 1999 and 2000, this manager received three special act awards that ranged from $1,750 to $10,000. In fiscal years 2002 through 2006, this manager received either a performance award or an individual cash award in each year, ranging from $5,000 to $13,000.

A manager, who has been in the SES since 2002, received six major awards from fiscal years 2003 through 2006, including at least one award in each year. Specifically, this manager received two special act awards of $5,000 and $10,000, two time-off awards of 40 hours each, an individual cash award of $5,000, and a performance award of about $7,200.

A manager, who has been in the SES since 2002, received three major awards—one award per year from fiscal years 2004 through 2006. This manager received a performance award of $6,900 and two time-off awards of 40 hours and 80 hours.

A manager, who has been in the SES since 2004, received two major awards—performance awards of about $12,000 in fiscal year 2005 and about $11,500 in fiscal year 2006.

A manager, who has been in the SES since 2004, received two major awards—individual cash awards of $4,300 in fiscal year 2005 and $5,000 in fiscal year 2006.

A manager, who has been in the SES since 2004, received two major awards—a performance award of about $8,900 in fiscal year 2005 and an individual cash award of $5,000 in fiscal year 2006.

OST's six other current SES managers have received, at most, one major award. In fiscal year 2005, about 54 percent of OST's SES managers received at least one major award. In fiscal year 2006, about 69 percent of OST's SES managers received at least one major award. Table 5 compares the average performance award amounts for SES managers at OST with the average amounts at other bureaus and offices within Interior and other federal agencies.
The average amount of OST's SES performance awards was higher than the average amounts at other bureaus and offices within Interior and at other federal agencies in fiscal year 2001; according to the Special Trustee, these awards recognized the managers' many hours of effort to validate data for the implementation of the trust funds accounting system in 2000. However, the average amounts of OST's performance awards for fiscal years 2002 through 2005 were generally lower than the average amounts provided at other bureaus and offices within Interior and at other federal agencies—except in fiscal year 2003, when the average amount of OST's performance awards was slightly higher than the average amount provided at other bureaus and offices within Interior. Interior's ERB approved each of the major awards provided to OST's SES managers. Minerals Management Service officials told us that, in making its final determination, ERB considers supporting documentation and recommendations provided by the Performance Review Board, an Interior board that reviews only performance awards.
The American Indian Trust Fund Management Reform Act of 1994 established the Office of the Special Trustee for American Indians (OST), within the Department of the Interior, to oversee the implementation of management reforms for funds, derived primarily from Interior's leasing of Indian lands, that Interior holds in trust for many Indian tribes and individuals. Specifically, the act directs that an integrated information system be developed that interfaces the trust fund accounting system with the land title records and asset management systems maintained by Interior's Bureau of Indian Affairs (BIA). GAO examined (1) OST's progress in implementing the trust fund management reforms and (2) the extent to which OST has used contractors in implementing these reforms. GAO reviewed OST's strategic plans and contracting documents and interviewed OST and BIA managers. OST has implemented several key trust fund management reforms but has not prepared a timetable for completing its remaining trust reform activities or a date for OST's termination, as required by the 1994 Act. OST estimates that almost all of the key reforms needed to develop an integrated trust management system and to provide improved trust services will be completed by November 2007. Specifically, OST implemented a new trust funds accounting system for processing trust account funds, and BIA and OST are currently validating data for the trust asset and accounting management system for managing Indian land title records and leases for land with recurring income. However, the Special Trustee estimates that data verification for leasing activities will not be completed for all Indian lands until December 2009. OST's most recent strategic plan, issued in 2003, did not include a timetable for implementing trust reforms or a date for OST's termination.
The Special Trustee notes that many OST functions, including trust fund operations, trust records management, and appraisal services, will need to be performed after reforms are completed. If OST is terminated, these responsibilities would have to be transferred to another Interior office. OST plans to reduce expenditures primarily by terminating contracts once trust reforms are completed. However, OST has not yet developed a workforce plan that reexamines the expenditures and staffing levels needed for trust fund operations once trust reforms are completed. OST has used contractors to perform many of its trust reform activities as a way to minimize the size of its permanent staff. In fiscal years 2004 and 2005, OST allocated $89.7 million, or nearly 21 percent, of its appropriated funds to contracting. About 66 percent of the contracting dollars from these 2 fiscal years went to two firms. OST directed over $31 million during this period to its largest contractor, an Indian-owned 8(a) small business, largely by adding task orders to an existing contract. OST has primarily relied on Interior's National Business Center to award and manage contracts.
IRS administers America's tax laws and collects the revenues that fund government operations and public services. In fiscal year 2006, IRS collected more than $2.5 trillion in revenue. IRS's Taxpayer Service and Enforcement programs generate more than 96 percent of the total federal revenue collected for the U.S. government. Total federal revenues have fluctuated from roughly 16 to 21 percent of gross domestic product between 1962 and 2004. Given the amount of federal revenue collected by IRS, a disruption of IRS operations could have a great impact on the U.S. economy. The IRS headquarters building is located in Washington, D.C., and houses over 2,200 of the agency's estimated 104,000 employees. The headquarters building contains the offices of IRS executive leaders, such as the Commissioner and deputy commissioners, and headquarters personnel for 14 of the agency's 17 individual business units. On June 25, 2006, the IRS headquarters building suffered flooding during a period of record rainfall and sustained extensive damage to its infrastructure. The subbasement and basement were flooded, and critical parts of the facility's electrical and mechanical equipment were destroyed or heavily damaged. The subbasement—which contained equipment such as electrical transformers, electrical switchgears, and chillers—was submerged in more than 20 feet of water. In addition, the basement level—which housed the building's fitness center, food service canteens, computer equipment, and the basement garage—was flooded with 5 feet of water. As a result of the flood damage, the building was closed until December 8, 2006. In response to the flood and the closure of the building, IRS headquarters officials reported activating several of the agency's emergency operations plans. Over 2,000 employees normally assigned to the headquarters building were relocated to other facilities throughout the Washington, D.C., metropolitan area.
Although the flood severely damaged the building and necessitated the relocation of IRS employees to alternate office space, particular circumstances limited the potential damage and made response and recovery activities easier:

No employees were injured, killed, or missing as a result of the flood.

Damage was limited to the basement and subbasement levels, and employees were able to enter the building to retrieve equipment and assets 5 days following the flood.

IRS and the General Services Administration were able to identify and allocate alternate work space to accommodate all displaced employees, not just those considered critical or essential.

According to IRS status reports following the flood, facility space was provided for critical personnel within 10 days and for all headquarters employees within 29 days. Table 1 provides a time line of activities following the flood. The Treasury Inspector General for Tax Administration also reviewed the IRS response to the flooding. According to the Inspector General's reports, IRS adequately protected sensitive data and restored computer operations to all employees approximately 1 month following the flood. In addition, he reported that the flood caused no measurable impact on tax administration because of the nature of the work performed at this building and the contingency plans that IRS had in place. Finally, he reported that IRS paid $4.2 million in salary costs for 101,000 hours of administrative leave granted to IRS personnel following the flooding. While $3 million was paid for administrative leave during the first week following the flooding, the amount paid decreased in subsequent weeks. IRS headquarters has multiple emergency operations plans that, if activated, are intended to work in conjunction with each other during emergencies.
These plans include a suite of business continuity plans comprising, among others, a business resumption plan for each IRS business unit and an Incident Management Plan. In addition, IRS has a COOP plan for emergency events affecting IRS executive leadership and essential functions. Table 2 summarizes the IRS emergency operations plans and their purposes. FEMA developed FPC 65 to provide guidance to federal executive branch departments and agencies in developing contingency plans and programs to ensure the continuity of essential agency operations. All federal executive branch agencies are required to have such a capability in place to maintain essential government services across a wide range of all-hazard emergencies. This guidance defines the elements of a viable continuity capability for agencies to address in developing their continuity plans. Table 3 summarizes eight general elements of federal continuity guidance that agency plans should address. IRS supplemented federal guidance with sections of its Internal Revenue Manual—a document outlining the agency's organization, policies, and procedures—related to business resumption plans. Similar to the federal continuity guidance, the Internal Revenue Manual outlined minimum requirements for business resumption plans, including the need to identify people and resources to perform critical functions. The IRS headquarters emergency operations plans we reviewed collectively addressed several of the general elements of guidance identified in FPC 65. For example, the plans adequately identified the people needed to continue performing essential functions and had established procedures for activation. However, other elements were not addressed or were addressed only in part. Specifically, IRS identified two separate lists of essential functions—critical business processes and essential functions for IRS leadership—within its plans but prioritized only one of the lists.
Furthermore, although the COOP plan outlined provisions for tests, training, and exercises, neither the business resumption plans we reviewed—from Criminal Investigation (CI), Wage and Investment (W&I), and Chief Counsel—nor the Incident Management Plan outlined the need to conduct such activities. While IRS’s Office of Physical Security and Emergency Preparedness provided overall guidance to business units on their business resumption plans, the guidance was inconsistent with the federal guidance on several elements, including the preparation of resources and facilities needed to support essential functions and requirements for regular tests, training, and exercises. Until IRS requires all of the plans that contribute to its ability to quickly resume essential functions to fully address federal guidance, it will lack assurance that it is adequately prepared to respond to the full range of potential disruptions. Inconsistencies between IRS’s business resumption plans and federal guidance can be attributed in part to gaps in IRS internal guidance. IRS provided its business units with guidance on developing business resumption plans, including general guidance within IRS’s Internal Revenue Manual and a business resumption plan template disseminated to the business units. The Internal Revenue Manual provided IRS business units with minimum requirements of elements to include in their plans, such as identifying critical personnel and resources. In addition, the Office of Physical Security and Emergency Preparedness disseminated a business resumption plan template to business units that included, among other things, sections for identifying the critical business processes and personnel to support the resumption of critical activities. IRS’s internal guidance addressed several of the elements of a viable continuity capability. 
For example, the Internal Revenue Manual stated that business resumption plans should include a list of critical personnel, and the business resumption plan template asked each business unit to list its critical team leaders and members and their contact information. Similarly, the IRS guidance adequately addressed execution and resumption. For other continuity planning elements, however, IRS guidance on developing business resumption plans was inconsistent with federal guidance. Specifically, IRS guidance on resources directed business units to identify their need for vital records, systems, and equipment. However, rather than having business units procure those resources before an event occurs, as outlined in federal guidance, IRS guidance assumed that business units would work with teams outlined within the Incident Management Plan to acquire those resources following a disruption. Similarly, IRS directed business units to identify alternate work space requirements for personnel, but not to prepare or acquire such space until after a disruption occurs. Finally, IRS guidance did not address the need for tests, training, or exercises involving the critical personnel identified within business resumption plans. Officials from the Office of Physical Security and Emergency Preparedness stated that it was the responsibility of business units to conduct adequate tests, training, and exercises of their business resumption plans. Officials further stated that the IRS response to the June 2006 flooding validated the use of the incident command structure outlined in its Incident Management Plan. Although the incident command structure can be effective at securing needed resources over time, IRS will be able to respond to a disruption more quickly if it prepares necessary resources and facilities before an event occurs. This is especially critical in the case of business processes that need to be restored within 24 to 36 hours.
Similarly, if personnel are unfamiliar with emergency procedures because of inadequate training and exercises, the agency’s response to a disruption could be delayed. IRS officials largely relied upon the Incident Management Plan to direct their response to the emergency conditions created by the June 2006 flooding. This plan guided officials in establishing roles and responsibilities for command and control of the overall resumption effort and a capability for the procurement of alternate facility space and equipment. Business unit officials were initially guided by their business resumption plans, but later response activities differed from those plans because of the circumstances resulting from the event. According to IRS headquarters officials, the headquarters COOP plan was not activated because local space availability made moving the executive leadership to the alternate COOP facility unnecessary and the safety of the leadership was not at risk. We previously reported that in responding to emergencies, roles and responsibilities for leadership must be clearly defined and effectively communicated in order to facilitate rapid and effective decision making. The IRS Incident Management Plan provided agency officials with clear leadership roles and responsibilities for managing the response and recovery process, including the procurement of temporary facility space and equipment necessary to continue critical business processes. Consistent with the plan, the Incident Commander acted as the leader of IRS headquarters response and recovery activities immediately following the flood. To assist in managing the incident, the Incident Commander activated members of the IRS Incident Management Team and other supporting sections, whose roles and responsibilities were outlined in the plan. 
These individuals included business resumption team leaders from each of the IRS business units and personnel from the central service divisions, such as Real Estate and Facilities Management and Modernization and Information Technology Services. According to minutes from Incident Management Team meetings held in the days following the flood, the following Incident Management supporting teams were activated and provided the following contributions:

1. The Operations Section, responsible for conducting response and recovery activities, gathered information regarding the facility space and equipment requests from the IRS business units, as well as preferences on alternate work location assignments.

2. The Logistics Section, responsible for providing all nonfinancial logistical support, procured and allocated facility space and equipment to IRS business units.

3. The Planning Section, responsible for providing documentation of the emergency, documented decisions and conducted reporting. For example, the Planning Section prepared documents for hearings and maintained relocation schedules and information.

4. The Finance and Administrative Section, responsible for providing all financial support, provided assistance in monitoring agency costs and developing travel and leave policies.

According to IRS status reports following the flood, facility space was provided for critical personnel within 10 days and for all headquarters employees within 29 days. The Incident Commander reported that the Incident Management Team and its supporting units stepped down approximately 2 months after the flood. The three business units we reviewed reported that their business resumption plans guided their initial responses to the flood.
In later phases of their responses, the business units differed from their plans to account for conditions at the time, such as current work priorities and the availability of alternate office space for more staff than the minimum necessary to perform the most critical functions. The following sections outline how selected business units relied on their business resumption plans when responding to the flood. CI used its business resumption plan to (1) establish an internal command structure to coordinate emergency activities following the flood and (2) identify short-term facility space for selected employees. According to the CI business resumption executive, the business unit used alternate facilities previously identified within the CI business resumption plan to relocate personnel within the first 2 days. CI leadership determined which personnel would be placed first and at what locations, since its business unit’s resumption plan did not specify such information. According to the CI business resumption executive, after learning from the Incident Commander that relocation would be for a longer period and that alternate facility space was available to accommodate all displaced CI employees, CI officials submitted a request for facility space and equipment for all of their employees to the Incident Commander and Incident Management Team. In discussing lessons learned, the CI business resumption executive acknowledged that the unit’s plan primarily addressed relocation to alternate facilities for short-term emergencies rather than longer-term events like the flood, and that CI should work with IRS’s central organizations to better plan for relocation in such situations. Furthermore, the executive stated that better tests and exercises of the CI plan could assist in better preparing for a wider range of future emergencies. W&I officials used their plan to identify and prioritize critical tasks. 
W&I managers gathered at a previously scheduled off-site retreat the morning following the flood and conducted a review of the business unit's resumption plan, according to the new W&I business resumption executive. The executive stated that the activity was particularly useful in addressing identified knowledge gaps in the wake of the prior W&I business resumption leader's sudden death the day before the flood. Critical business processes and supporting tasks, initially prioritized within the plan, were adjusted to reflect the criticality of several tasks at that time of year. According to the business resumption executive, the revised list of critical business processes allowed W&I managers to identify critical personnel and resources, which were submitted to the Incident Management Team as facility space and resource requests. In addition, the executive stated that W&I managers established a system for placing employees in alternate work space based on their association with the prioritized tasks, although it was not reflected in the W&I business resumption plan. W&I created a document to capture lessons learned following the flood and established an internal business resumption working group to ensure a business resumption capability in all W&I field offices. Because W&I officials had not anticipated the need to readjust tasks, one item discussed in the document addressed the need to create a rolling list of critical business processes and critical personnel, as processes and tasks will vary throughout the year. In addition, the W&I business resumption working group developed minimum requirements for all W&I plans and conducted a gap analysis of field office plans to identify areas for improvement. According to the W&I business resumption executive, the working group will conduct a training session for field office business resumption coordinators after the 2007 filing season.
Although the Chief Counsel resumption efforts were led by people identified within its plan, the unit's business resumption officials reported that use of the plan was limited because of the high-level content of the document. According to the Chief Counsel's business resumption executive, the plan was written at a high level because it was expected that specific priorities would be determined by the active caseload at the time of the emergency. The executive stated that following the flood, Chief Counsel prioritized resumption activities based on the active caseload and the need to address emerging requirements, such as (1) ensuring that mail addressed to the business unit's processing division was rerouted and processed at another facility and (2) supporting a specific court case being conducted in New York City because of its level of criticality and time sensitivity. The executive further stated that officials identified alternate work space in Chief Counsel offices in the Washington, D.C., metropolitan area and placed approximately 180 employees, prioritized based on the organizational hierarchy. Chief Counsel submitted requests to the Incident Commander and Incident Management Team for facility space and resources for the over 500 remaining employees. Although Chief Counsel was able to identify tasks, such as tax litigation, that were consistent with the responsibilities outlined in its plan, and to procure facility space and resources for personnel, it established a task force that identified recommendations for improving the business unit's plan in a report documenting lessons learned following the flood. The recommendations included measures to improve the prioritization of critical functions and people and to outline provisions for mail processing.
In addition, because Chief Counsel experienced delays in recovering a computer server that had not been identified in the business resumption plan but proved to be important following the flood, the task force addressed the need to ensure redundancy of information technology equipment. Chief Counsel is currently drafting an action plan to carry out the recommendations of the task force. In addition, a Chief Counsel business resumption official stated that agencywide tests and exercises of business resumption plans could assist in better integration of emergency efforts for a wider range of future emergencies. According to IRS headquarters officials, the headquarters COOP plan was not activated because local space availability made movement of executive leadership to the alternate COOP facility unnecessary and the safety of the leadership was not at risk. When the June 2006 flood occurred at the IRS headquarters building, the agency had in place a suite of emergency plans that helped guide its response. The agency’s Incident Management Plan was particularly useful in establishing clear lines of authority and communications, conditions that we have previously reported to be critical to an effective emergency response. Unit-level business resumption plans we reviewed contributed to a lesser extent and the headquarters COOP plan was not activated because of conditions particular to the 2006 flood. Specifically, damage to the building was limited to the basement and subbasement levels, and employees were able to enter the building to retrieve equipment and assets. In addition, alternate work space was available for all employees within a relatively short period, reducing the importance of identifying critical personnel. Such conditions, however, may not be present during future disruptions. The plans IRS had in place at the time of the flood did not address all of the elements outlined in federal continuity guidance. 
In particular, the IRS plans did not (1) prioritize all essential functions and set targets for recovery times; (2) outline the preparation of resources and alternate facilities necessary to perform those functions; and (3) develop provisions for tests, training, and exercises of all of its plans. In discussions on lessons learned from the flood response, IRS business unit officials recognized the need to incorporate many of these elements. Unless IRS addresses these gaps, it will have limited assurance that it will be prepared to continue essential functions following a disruption more severe than the 2006 flood. To strengthen the ability of IRS to respond to the full range of potential disruptions to essential operations, we are making two recommendations to the Commissioner of Internal Revenue: (1) revise IRS internal emergency planning guidance to fully reflect federal guidance on the elements of a viable continuity capability, including the identification and prioritization of essential functions; the preparation of necessary resources and alternate facilities; and the regular completion of tests, training, and exercises of continuity capabilities; and (2) revise IRS emergency plans in accordance with the new internal guidance. The Commissioner of Internal Revenue provided comments on a draft of this report in a March 26, 2007, letter, which is reprinted in appendix II. The Commissioner agreed with our recommendations. His letter notes that the agency is actively committed to improving its processes. Specifically, the agency will (1) conduct a thorough gap analysis between FPC 65 elements and business continuity planning guidance; (2) update the Internal Revenue Manual guidance and business resumption plan templates to reflect areas of improvement resulting from the gap analysis; and (3) formally direct annual tests, training, and exercises of business resumption plans through the agency’s Emergency Management and Preparedness Steering Committee. 
Finally, the Commissioner stated that the agency will revise and implement its emergency plans based on the results of the aforementioned activities. As agreed with your staff, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of this report to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have questions on matters discussed in this report, please contact Bernice Steinhardt at (202) 512-6543 or [email protected], or Linda Koontz at (202) 512-6240 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributions to this report were made by William Doherty, Assistant Director; James R. Sweetman, Jr., Assistant Director; Thomas Beall; Michaela Brown; Terrell Dorn; Nick Marinos; and Nhi Nguyen. The objectives of this report were to evaluate how the Internal Revenue Service’s (IRS) emergency operations plans address federal guidance related to continuity planning and evaluate the extent to which IRS emergency operations plans contributed to the actions taken by IRS officials in response to the flood. To address how IRS emergency operations plans address federal guidance related to continuity planning, we obtained the IRS headquarters emergency operations plans that were available to agency officials at the time of the June 2006 flood. These included the Continuity of Operations (COOP) plan and a suite of business continuity plans, including the Incident Management Plan and 13 business resumption plans from business units affected by the flood. 
Although we also obtained the headquarters Occupant Emergency Plan, we did not evaluate its contributions to addressing the elements because its purpose is limited to outlining procedures for building occupants and emergency personnel in responding to threats that require building evacuations or shelter in place. We did not obtain the Disaster Recovery Plan, a contingency plan for the recovery of information technology equipment, because recovery of information technology equipment was addressed in a report from the Treasury Inspector General for Tax Administration. To evaluate IRS’s emergency operations plans in relation to federal guidance on continuity planning, we analyzed Federal Preparedness Circular (FPC) 65 to identify the elements needed to ensure the continuity of essential functions and compared IRS emergency operations plans to the resulting generalized list. Because FPC 65 covers all hazard emergencies, but provides continuity guidance specifically for agency COOP plans, we developed the general elements of guidance to be able to collectively evaluate all IRS emergency operations plans we obtained. From our analysis of FPC 65, we identified eight general elements of guidance related to developing a viable continuity capability. See table 3 for a listing and description of the elements. We reviewed IRS’s plans and analyzed how they collectively addressed or did not address these eight general elements of guidance. We also reviewed IRS-defined criteria and guidance for emergency operations plans, including sections of the Internal Revenue Manual— which provides guidance to IRS officials on developing several of the agency’s emergency operations plans—and an internal template provided by IRS’s Office of Physical Security and Emergency Preparedness, which is responsible for agencywide emergency planning and policy to guide plan development. 
Since each business unit within IRS headquarters has an individual plan for business resumption activities, we selected and examined 3 of 13 business resumption plans available for use during the flood from the 3 business units with the most employees affected by the flooding in the headquarters building. According to employee relocation lists from IRS following the flood, the 3 largest business units in the building are Criminal Investigation, Wage and Investment, and Chief Counsel, which collectively represent over 50 percent of the headquarters building employees. To address the extent to which IRS emergency operations plans contributed to the actions taken by IRS officials in response to the flood, we interviewed IRS officials responsible for the development, oversight, and implementation of the headquarters emergency operations plans. In our interviews, we asked IRS officials responsible for each emergency operations plan how the general elements identified in their respective plans guided their actions following the flood, if at all. To supplement the information gained from the interviews, we reviewed agency documentation related to emergency operations activities following the flood, including IRS status reports, employee relocation lists, and emergency operations team meeting minutes. In addition, we reviewed documentation regarding lessons learned from the flood, provided by various headquarters business units, and obtained any updates or changes to emergency operations plans following the flood. We conducted our review in accordance with generally accepted government auditing standards from July 2006 through March 2007.
On June 25, 2006, the Internal Revenue Service (IRS) headquarters building suffered flooding during a period of record rainfall and sustained extensive damage to its infrastructure. IRS officials ordered the closure of the building until December 2006 to allow for repairs to be completed. IRS headquarters officials reported activating several of the agency's emergency operations plans. Within 1 month of the flood, over 2,000 employees normally assigned to the headquarters building were relocated to other facilities throughout the Washington, D.C., metropolitan area. GAO was asked to report on (1) how IRS emergency operations plans address federal guidance related to continuity planning and (2) the extent to which IRS emergency operations plans contributed to the actions taken by IRS officials in response to the flood. To address these objectives, GAO analyzed federal continuity guidance, reviewed IRS emergency plans, and interviewed IRS officials. The IRS headquarters emergency operations plans that GAO reviewed--the headquarters Continuity of Operations (COOP) plan, Incident Management Plan, and three selected business resumption plans--collectively addressed several of the general elements identified within federal continuity guidance for all executive branch departments and agencies. For example, the plans adequately identified the people needed to continue performing essential functions. However, other elements were not addressed or were addressed only in part. Specifically, IRS had two separate lists of essential functions--critical business processes and essential functions for IRS leadership--within its plans, but prioritized only one of the lists. Furthermore, although the COOP plan outlined provisions for tests, training, and exercises, none of the other plans GAO reviewed outlined the need to conduct such activities. 
While IRS provided overall guidance to its business units on their business resumption plans, the guidance was inconsistent with the federal guidance on several elements, including the preparation of resources and facilities needed to support essential functions and requirements for regular tests, training, and exercises. The IRS Incident Management Plan was particularly useful in establishing clear lines of authority and communications in response to the flooding. Unit-level business resumption plans GAO reviewed contributed to a lesser extent, and the headquarters COOP plan was not activated because of conditions particular to the 2006 flood. Specifically, damage to the building was limited to the basement and subbasement levels, and employees were able to enter the building to retrieve equipment and assets. In addition, alternate work space was available for all employees within a relatively short period, reducing the importance of identifying critical personnel. While its plans helped guide IRS's response to the conditions that resulted from the flood, in more severe emergency events, conditions could be less favorable to recovery. Consequently, unless IRS fills in gaps in its guidance and plans, it lacks assurance that the agency is adequately prepared to respond to the full range of potential disruptions.
Calls to reform the UN began soon after its creation in 1945. Despite cycles of reform, UN member states have had concerns about inefficient management operations. As one of the 192 member states, the United States has played a significant role in promoting UN reform, calling for financial and administrative changes. The United States, through the Department of State and the U.S. Mission to the United Nations, continues to take measures to advance reform of UN management processes. In 1997 and 2002, the Secretary-General proposed two separate sets of management reform initiatives in the areas of human resources, budgeting, and human rights. In July 1997, the Secretary-General proposed a broad reform program to transform the UN into an efficient organization focused on achieving results as it carried out its mandates. Although the Secretary-General does not have direct authority over specialized agencies and many funds and programs, the reforms at the Secretariat were intended to serve as a model for UN-wide reforms. In May 2000, we reported that while the Secretary-General had substantially reorganized the Secretariat’s leadership and structure, he had not yet completed reforms in human resource management and planning and budgeting. In September 2002, to encourage the full implementation of the 1997 reforms, the Secretary-General released a second set of reform initiatives with 36 reform actions, some expanding on previous reform initiatives introduced in 1997 and others reflecting new priorities for the organization. In February 2004, we reported that 60 percent of the 88 reform initiatives in the 1997 agenda and 38 percent of the 66 initiatives in the 2002 agenda were in place. In 2004-2005, a series of UN and expert task force reports identified the need for comprehensive reform of UN management and the UN human rights apparatus. 
These studies included a 2004 report of a high-level panel convened by the Secretary-General to recommend ways to strengthen the UN, a March 2005 Secretary-General report to the General Assembly, a June 2005 report by a task force mandated by the U.S. Congress to recommend how to improve the effectiveness of the organization, as well as several reports of the Independent Inquiry Committee established to investigate the Oil for Food Program. In September 2005, world leaders gathered at the UN World Summit in New York City to discuss global issues such as UN reform, development, and human rights, as well as actions needed in each of these areas. The outcome document from the World Summit, endorsed by all members of the UN, outlines broad UN reform efforts in areas such as oversight and accountability, and human rights. The document also called for the Secretary-General to submit proposals for implementing reforms to improve the management functions of the Secretariat. In April 2006, we reported on weaknesses in the UN’s internal oversight unit and procurement system, both of which have been identified as important areas for reform. In the internal oversight area, we found that UN funding arrangements adversely affect OIOS’s budgetary independence and compromise the office’s ability to audit high-risk areas. For example, OIOS depends on the resources of the funds, programs, and other entities it audits, and the managers of these programs can deny OIOS permission to perform work or not pay OIOS for services. In the procurement area, we found that UN procurement resources are vulnerable to fraud, waste, and abuse because of weaknesses affecting the control environment. For example, the UN has not established a single organizational entity or mechanism capable of effectively and comprehensively managing procurement. 
In addition, the UN has not demonstrated a commitment to improving the professionalism of its procurement staff in the form of training, a career development path, or other key human capital practices critical to attracting, developing, and retaining a qualified professional workforce. The management reform decision-making process at the UN involves multiple entities. Member states or the Secretary-General can introduce management reform initiatives at the UN. The Secretary-General can implement certain management improvements that are within his authority. In addition, the Secretary-General submits proposals to the General Assembly. In these cases, the Advisory Committee on Administrative and Budgetary Questions (ACABQ), a subsidiary organ of the General Assembly, reviews the proposal. The ACABQ then advises and reports to the Administrative and Budgetary Committee (the Fifth Committee), the General Assembly’s committee for administrative and budgetary matters that is composed of all 192 member states. The Fifth Committee holds discussions on the proposals and makes its recommendation to the General Assembly. The General Assembly makes the final decision. For the past 20 years, most decisions in the Fifth Committee and in the General Assembly have been made by consensus among all the member state representatives. The UN has initiated reforms in five key areas: (1) modernizing the management operations of the Secretariat, (2) improving oversight, (3) promoting ethical conduct, (4) reviewing and updating programs and activities, and (5) creating a Human Rights Council. However, most efforts are awaiting General Assembly review or have been recently approved. In addition, many proposed or approved reforms do not have an implementation plan that establishes time frames and cost estimates. Appendix II summarizes the status of major management reforms. 
Proposals to improve the management operations of the Secretariat have either been approved or are awaiting General Assembly review. To improve the management operations of the Secretariat, the September 2005 outcome document requested that the Secretary-General develop proposals to ensure that the existing policies, regulations, and rules used to manage budgetary, financial, and human resources are aligned with the current needs of the UN. In response, the Secretary-General submitted a report to the General Assembly in March 2006 that included 23 proposals to improve the UN’s effectiveness. However, the ACABQ recommended that the Secretary-General provide more details, including specific costs and administrative implications, and time lines for implementation. In April 2006, members of the Fifth Committee voted and approved a proposal introduced by the G-77 countries that the Secretary-General elaborate on the proposals and give concrete examples of how the initiatives could correct deficiencies and make the organization’s work more effective. The vote signified the breakdown of the policy of making decisions by consensus, a practice used for 20 years. Further, the United States expressed concern that the G-77’s proposal was a way to scale back the reforms proposed in the Secretary-General’s March 2006 report. In May 2006, the General Assembly voted and approved a resolution that incorporated the recommendations made by the Fifth Committee. See figure 1 for key dates for reform initiatives related to improving the management operations of the Secretariat. In response to the General Assembly’s request for more information, the Secretariat issued seven detailed reports in May and June 2006 that included information on various initiatives, such as information and communication technology, financial management practices, and procurement reform. 
In July 2006, member states approved a resolution that, according to UN officials and member state representatives, was a positive step toward addressing several management reform initiatives. The status of several reforms to improve the management operations of the Secretariat is as follows: Since the Secretary-General has limited authority to shift resources between programs without the approval of the member states, the Secretary-General in his March 2006 report noted that more flexibility in this area could enable the Secretariat to respond more effectively to the changing needs of the organization. In July 2006, the General Assembly gave the Secretary-General, on an experimental basis, limited discretion over budgetary commitments up to $20 million per biennium. The impact of this reform will be reviewed in 2009. According to the Secretary-General, the UN has outdated and fragmented information technology systems that have limited capacity for processing and sharing data. Moreover, at least six departments have disparate information technology units with no integrating mechanism in place. The Secretary-General’s March 2006 report recommended the creation of a chief information technology officer position to oversee the creation and implementation of an information management strategy for the Secretariat. In July 2006, the General Assembly agreed to create the position of a chief information technology officer and upgrade certain elements of the UN’s computer systems. However, the Secretary-General’s detailed information technology report did not include a comprehensive implementation plan for this proposal. According to State and UN officials, the Secretary-General plans to submit a comprehensive report that includes cost estimates in March 2007. GAO and others have reported that UN procurement resources are unnecessarily vulnerable to fraud, waste, and abuse. 
The Secretariat’s June 2006 procurement report included several proposals that could be implemented over an 18-month period to strengthen UN procurement practices. However, the report does not specify milestones that need to be completed during the 18 months. The General Assembly is expected to discuss this report in fall 2006. In the meantime, in July 2006, the General Assembly authorized funding of approximately $700,000, which UN officials plan to use for six new temporary procurement positions for 6 months. However, according to a senior U.S. official, these temporary posts are not sufficient to address weaknesses in the procurement system, and qualified procurement officers are not likely to accept temporary jobs. As of September 2006, one temporary procurement staff member had been hired. According to the Secretary-General, staff skills are not aligned with the current needs of the organization. The Secretary-General’s March 2006 report included proposals to improve recruitment processes, facilitate staff mobility between headquarters and field offices, and dedicate resources to conduct a one-time staff buyout. In late September 2006, the Secretary-General issued a detailed human resources report. The General Assembly is expected to discuss the report in fall 2006. Some of the proposed or approved reforms to improve the operations of the Secretariat do not have an implementation plan that establishes time frames and cost estimates. Of the Secretary-General’s seven detailed reports issued in May and June 2006, only the proposal for adoption of the International Public Sector Accounting Standards includes a detailed timetable for implementation. The Secretary-General’s June 2006 procurement report included several proposals that could be implemented over an 18-month period, but the report does not include specific milestones. 
Reforms proposed to create an independent oversight advisory committee and to strengthen the capacity of OIOS are awaiting review by the General Assembly in fall 2006. In the outcome document, member states agreed to consider the creation of an independent oversight advisory committee. In November 2005, the Secretary-General proposed the creation of the Independent Audit Advisory Committee and drafted provisional terms of reference for this entity. In December 2005, the General Assembly approved the creation of the committee and requested an external evaluation of the proposed terms of reference. In addition, in the September 2005 outcome document, member states recognized the urgent need to strengthen the expertise, capacity, and resources of OIOS’s auditing and investigative functions. We and others have reported that OIOS’s independence and ability to perform as the principal auditing and investigative body of the UN have been hampered by the UN’s funding process and lack of resources. Moreover, in the outcome document, member states requested an independent external evaluation of the UN’s auditing and oversight system. The Secretary-General submitted the external evaluation in July 2006. See figure 2 for key dates associated with oversight reform initiatives. The July 2006 external evaluation reviewed the draft terms of reference for the Independent Audit Advisory Committee and recommended several changes, specifically with respect to the number, appointment criteria, terms, and compensation of members of the committee. The external evaluation also recommended the complete and prompt establishment of the committee. In addition, the evaluation recommended that the committee be responsible for presenting the budget for OIOS to the Fifth Committee, thereby relieving the ACABQ of its advisory role in this regard. 
The July 2006 external evaluation included a detailed review of OIOS that found that OIOS is not able to function effectively under its current mandate and made 23 recommendations in nine areas to strengthen its capacity. The external review stated that OIOS’s current structure is impeding its independence and reducing its effectiveness. It also stated that OIOS should focus on internal auditing and recommended shifting several OIOS functions, such as investigations, to departments in the Secretariat. However, various UN and U.S. officials stated that a shift of functions such as that proposed in the external review could significantly diminish the UN’s oversight functions and the independence of its investigations. For example, these officials said that moving investigations to the Secretariat could create a potential conflict of interest. By contrast, according to UN Secretariat officials, the Secretariat has a positive view of the results of the independent external review and supports most of the recommendations, and not all UN Secretariat officials view the proposed restructuring as a way to diminish the UN’s oversight functions or the independence of its investigations. In addition, in a report submitted to the Secretary-General in July 2006, OIOS strongly disagreed with the restructuring proposals but recognized the need to reassess the functions and work processes of its Investigations Division. OIOS indicated that it will undertake a review of that division that will be completed by the end of 2006. OIOS’s July 2006 report included its own proposals for strengthening its capacity. The OIOS report indicated that some recommendations of the external review will require consideration by the General Assembly, but that many are being considered for implementation under the authority of the Under Secretary-General for Internal Oversight Services. 
The OIOS report discussed 14 of the recommendations made by the external review and generally agreed with most of them, such as those on training of OIOS staff, human resource management, and information and communications technology. However, as discussed above, OIOS strongly disagreed with the recommendations that would restructure it. The UN established an ethics office in January 2006, but as of September 2006 it continues to operate with interim staff, and some experts, including a panel commissioned by a UN staff union to review the UN’s internal justice system, have questioned the sufficiency of the number of staff in the office. Since January 2006, the office’s six interim staff members have developed and implemented activities associated with the ethics office’s four areas of responsibility: (1) administering the UN’s financial disclosure program, (2) implementing the new UN whistleblower protection policy, (3) providing guidance to staff on ethics issues, and (4) developing ethics standards and training. For example, the interim staff members have undertaken preliminary reviews of claims of retaliation for whistleblowing and have collected financial disclosure forms from UN managers. As the office is new and in the process of hiring permanent staff, it is too early to determine whether the office will be able to fully carry out its mandate. See figure 3 for key dates associated with the establishment of the ethics office. Before creating the ethics office, the UN Secretariat did not have a way to coordinate ethics-related initiatives within the organization and to ensure that all staff are aware of and updated on ethics issues. The 2005 outcome document specifically requested that the Secretary-General develop a detailed proposal for an independent ethics office. The Secretary-General developed and submitted this proposal in November 2005, and the General Assembly approved it in December 2005. 
The ethics office began operating in January 2006 as an independent entity reporting directly to the Secretary-General, and by March 2006 it was staffed with one director, four staff members, and a consultant, all temporarily assigned to the office. These staff have been establishing and documenting the procedures the office follows in carrying out its duties. The UN is in the process of hiring permanent staff to replace the interim staff. The office has four main areas of responsibility and has made some progress in fulfilling each, as follows: The ethics office is responsible for administering the UN’s financial disclosure program to ensure that staff comply with applicable conflict of interest rules and standards of conduct. Designated UN staff—those at and above the director level and all staff carrying out procurement and investment functions—are required to file an annual confidential statement of their financial interests. This policy applies to about 1,800 UN staff, and as of July 31, 2006, the ethics office had received 90 percent of their financial disclosure statements. The ethics office is currently reviewing bids from contractors to carry out the review and audit of these forms. The Secretariat recommended that the review be conducted by independent financial experts, as is the practice at the World Bank and the International Monetary Fund (IMF), to safeguard the confidentiality of senior officials’ private financial information. The ethics office will keep these financial disclosure forms confidential, but a report by a panel of experts reviewing the UN’s internal justice system recommended that the office maintain the forms in a public register. The ethics office is implementing the UN’s new whistleblower protection policy, which took effect in January 2006. 
When a staff member contacts the ethics office with a complaint that he or she has been retaliated against for reporting misconduct, the office conducts a preliminary review to determine if the case should move forward for formal investigation by OIOS. The ethics office staff review the evidence presented by the claimant, interview the party accused of retaliation, and talk to other staff involved. If the ethics office determines that the case is an interpersonal problem within a particular office, rather than a case of retaliation for whistleblowing, it advises the staff member concerned of the existence of the Office of the Ombudsman and other informal conflict resolution mechanisms within the organization. If a case of retaliation is established after investigation by OIOS, the ethics office takes into account any recommendations made by OIOS and recommends appropriate measures aimed at correcting the negative consequences suffered as a result of the retaliatory action. As of July 31, 2006, the office had received 45 complaints of retaliation for reporting misconduct, one of which it submitted for further investigation. Ethics office staff told us that they track all whistleblowing complaints that are brought to their attention, including those referred to other offices. Staff also said that the time they spend on each case of whistleblower retaliation varies from several hours to more than 45 days. As part of its regular duties, the ethics office provides confidential guidance to staff on ethics issues. To fulfill this responsibility, the ethics office operates an ethics helpline to answer questions from and provide advice to UN staff. UN staff have used the helpline to make whistleblower retaliation complaints. Staff can also contact the office in person, by mail or e-mail, or by fax. 
The ethics office is responsible for developing ethics standards and content for training, which all UN staff will be expected to take annually, and it is working to provide clear guidance to staff on ethics regulations, rules, and standards. The Office of Human Resources Management, in consultation with the ethics office staff, has developed a half-day ethics workshop for all staff and has worked to ensure that ethics issues are incorporated into courses on other topics, such as procurement. The ethics office has developed an intranet site for UN staff that provides general information about the office as well as UN ethics issues and standards. While the interim staff in the ethics office have been undertaking activities consistent with their responsibilities, questions have been raised about the capacity of this office to fulfill its mandate. One nongovernmental organization said the UN’s whistleblower protection policy created a new benchmark for such policies in other intergovernmental organizations, such as the World Bank and IMF. However, it questioned the UN’s implementation of the policy, citing the low number of staff in the ethics office and the amount of time it is taking to conduct preliminary reviews of whistleblower retaliation cases. In addition, in a report endorsed by a UN staff union, a panel of experts criticized the UN’s implementation of the whistleblower protection policy and made several recommendations that, if adopted, would change the responsibilities and structure of the ethics office. The U.S. Permanent Representative to the UN has also cited the UN staff union’s concerns about the capacity of the ethics office to fulfill its responsibilities. The appropriate number of staff assigned to the ethics office has been in question since the office’s inception. 
The Secretariat originally requested funding for 16 staff positions for the ethics office, including liaison posts in UN offices in Vienna, Nairobi, and Geneva, to provide the office with greater proximity to the two-thirds of UN employees located in field offices around the world. However, the General Assembly, following the recommendation of the ACABQ, approved funding for only six positions, with no posts in the field. The ACABQ reported that the office could operate with fewer staff than requested, given the office’s uncertain workload at its inception, and that its workload would be reduced after the initial work of developing standards and training was complete. The Special Advisor to the Secretary-General for the ethics office, who is overseeing the new office, stated that the number of staff assigned to the office is currently appropriate. The interim staff said that the office needs more resources, particularly additional staff, given its number of responsibilities and activities. A representative from a nongovernmental organization with expertise in whistleblower protection also stated that the ethics office has too few resources to carry out its duties. In addition, the panel of experts commissioned by a UN staff union to review the UN’s internal justice system stated that it is critical that the ethics office be given adequate resources, including representation in the UN’s regional offices, to fulfill its responsibilities. The ethics office submitted a status report to the General Assembly in September 2006 that suggested that the office may need additional staff and resources in the future. Member state review of all UN programs and activities has been slow because of disagreements on both the scope and process; therefore, it is unlikely that the December 2006 deadline to complete the review will be met. 
In the 2005 outcome document, member states requested a review of all UN programs and activities, or mandates, that were created 5 or more years ago (see fig. 4 below for key dates in the review process) to strengthen and update UN programs and activities to more accurately reflect the current needs of the organization. The UN does not have a system for regularly evaluating the effectiveness of its mandates, which make up its main body of work. The General Assembly, Economic and Social Council, and the Security Council each adopt new mandates every year on many of the same issues, which can lead to interrelated and overlapping mandates. As a result, the Secretariat’s implementation of these mandates may be uncoordinated and inconsistent. UN member states agreed at the 2005 World Summit to undertake in 2006 a review of UN mandates older than 5 years to update the UN’s programs and activities so that they respond to the current needs of member states. Member states did not establish milestones for this review, but said it should be completed by 2006. In March 2006, the Secretary-General issued a report that provided a framework for conducting this review, including a recommendation to conduct the review in two phases, and compiled an electronic inventory of about 9,000 total mandates, over 6,900 of which are older than 5 years, originating from the three principal UN organs—the General Assembly, the Economic and Social Council, and the Security Council. The General Assembly, which has responsibility for about 80 percent of the mandates, began discussions on the mandate review process in November 2005 and started substantive discussions on specific mandates in April 2006. The Security Council and Economic and Social Council began their respective reviews in May 2006. During these discussions, countries and groups of countries made proposals on the process for the review and on how to handle specific mandates. 
For example, one country proposed that the mandate review process involve roundtable discussions and informal debates. Another proposal, with regard to a specific mandate, was to consolidate the Secretariat’s working papers on individual and small island territories. During the discussions, some countries requested more information from the Secretariat on certain mandates, such as how one mandate might be duplicative of another, or which UN departments or entities are involved in implementing each mandate. Throughout the review process, member states have disagreed about which mandates to include in the review and what to do with any savings generated by the potential elimination or consolidation of mandates, which has led to slow and limited progress. Members of the G-77 contend that the scope of the review should include only those mandates older than 5 years that have not been renewed since they were adopted. This represents about 626 mandates, or 7 percent of the total number of mandates (see fig. 5). The United States and other developed countries, including Japan, Australia, Canada, and the European Union, argue that the review should include all mandates older than 5 years, whether or not they have been renewed. Using these criteria, the review would include an additional 6,347 mandates. The G-77 established several criteria under which it would consider reviewing mandates that are older than 5 years: (1) member states must first agree that any savings derived from the mandate review will be reinvested in the areas from which they were derived, or in UN activities in the development area and (2) all politically sensitive mandates must be excluded from the review process. The United States has stated that a decision about the use of cost savings from the mandate review should be made once the review is complete. 
In addition, the United States maintains that no mandates older than 5 years, including those that are controversial, should be excluded from the review. Despite disagreement on which General Assembly mandates to review, member states decided in June 2006 to move forward with the first phase, which consists of reviewing 399 mandates that are older than 5 years and have not been renewed within the last 5 years. Mandates that are older than 5 years and have been renewed could be reviewed in a second phase. Mandates in phase one include completed projects, such as a 1965 resolution requesting that the Secretary-General convene a conference on the World Food Program. Although most of the mandates in this category do not require any further action or resources from the UN, member states could only agree to set aside 74 of them, which they classified as completed, meaning they have been acted upon and completely implemented and do not require further action at this time. Additionally, they decided that 33 mandates are not applicable to the review. The remaining 292 mandates included in phase one may be reviewed in phase two if member states believe they need further discussion. See table 1 for details on the status of mandates considered in phase one. As of September 2006, after beginning discussions on specific mandates in April 2006, member states had not agreed to change, eliminate, or retain any mandates. On September 1, 2006, leaders of the working group on mandate review developed a proposal suggesting terms under which member states would move forward into phase two of the review, but as of the end of September member states had not accepted it. The proposal suggests that member states reallocate within the UN budget any savings from mandate review according to normal budgetary procedures and that they reinvest any savings from development activities into other development activities. 
In addition, the proposal recommends that member states agree to address politically sensitive mandates carefully and take into account the positions of member states concerned. Given the volume of mandates still to be discussed and the contentious nature of the review process, the prospects for completing the review by the end of 2006 are unlikely. In March 2006, the UN voted to create a new UN Human Rights Council to replace the Commission on Human Rights; however, significant concerns remain about the council’s structure. UN member states generally agreed that the Commission on Human Rights should be improved as it was no longer seen as a credible institution for protecting human rights, due to a number of weaknesses. For example, according to human rights organizations, countries known to be human rights violators were consistently selected for membership to the commission and used their membership to protect themselves against criticism of their human rights records. Furthermore, the commission did not criticize the actions of several countries that were found to be abusers of human rights, including Sudan, Saudi Arabia, and Zimbabwe. As a result, the member states agreed at the 2005 World Summit to create a new Human Rights Council that would improve upon these deficiencies. UN member states voted to establish the council in March 2006 and elected members in May 2006. (See fig. 6 for key dates for the Human Rights Council.) In establishing the new Human Rights Council, UN member states aimed to address some of the deficiencies in the 53-member Commission on Human Rights. The 47 members of the new council must be elected individually to the body by a majority of UN members. Previously, candidates were grouped into slates of countries representing regions, and members would vote on the entire slate rather than for an individual country on the slate. 
The United States sought a significantly smaller body and advocated that to gain membership on the council, members should be elected by the higher standard of a two-thirds majority, rather than an absolute majority, to make it more difficult for repressive countries that have not demonstrated a commitment to human rights to gain seats on the council. Members can now be suspended from the council by a two-thirds majority vote if they are found to have committed gross violations of human rights. When voting for candidates to the council, UN member states are instructed to take into account each country’s human rights record, a measure that was not called for when voting for candidates to the commission. The United States wanted to automatically exclude from council membership any country under Security Council sanctions, but that provision was not included in the final design of the body. When elections to the council were held in May 2006, several countries with questionable human rights records were elected, including China, Russia, and Cuba. However, other countries that previously served on the commission and have questionable human rights records did not even run for election, including Zimbabwe, Sudan, and North Korea. In addition, Iran campaigned for a seat on the Council but did not win. The Human Rights Council will also operate differently from the commission. The council will meet more frequently and can more readily call special sessions to address emerging human rights situations than could the commission. The council will meet at least three times a year for a total of 10 weeks, while the commission met once a year for a total of 6 weeks. Furthermore, the council is required to periodically review the human rights records of all UN member states, a procedure the commission lacked. Members of the council will be the first to undergo these reviews and will be required to cooperate with investigators. 
The council is currently developing the procedures it will follow when conducting the reviews. Finally, member states made the council a subsidiary organ of the General Assembly, elevating it from the commission’s status as part of the Economic and Social Council. While the United States voted against the creation of the new Human Rights Council, stating that it did not sufficiently improve upon the former commission, many nongovernmental organizations and other UN members have stated that the council is better equipped than the commission was to address urgent, serious, and long-running human rights situations around the world. Of the UN member states participating in the vote on the creation of the council, 170 voted in favor, while 4 voted against. The United States did not run for election to the body but has agreed to provide funding for it. Representatives from one group of member states said that they were disappointed the United States did not run for election because it was important to have the United States on the council from its inception, to show support for the new body. The council meets in Geneva; it met for the first time in June 2006 and a second time in September 2006, and it plans to meet again in November 2006. The council held special sessions in summer 2006 on the situation of human rights in Palestine and other Arab territories. It is too early to determine the impact of the new council on the UN and human rights worldwide. We identified several factors that may impede the UN’s progress toward full implementation of management reforms: (1) considerable disagreement within the General Assembly over the reforms’ overall implications; (2) absence of an implementation plan for each reform that includes time frames and cost estimates; and (3) administrative guidance that may complicate the process of implementing certain human resource initiatives. 
Disagreement between G-77 and developed countries over the broader implications of management reforms may affect the UN’s ability to fully implement them. According to UN and member state officials, the G-77 is concerned that some of the reforms could increase the authority of the Secretariat at the expense of the General Assembly, thus decreasing the G-77’s influence over UN operations. Further, according to several UN and member state officials, most developed countries view management reform as a way to increase organizational effectiveness, whereas the G-77 countries perceive that developed countries view certain reform initiatives, such as mandate review, as cost-cutting exercises. Moreover, UN officials and member state representatives told us that a disagreement over a 6-month spending cap served to unify the G-77 countries and weaken cohesion among the developed countries. According to UN and member state representatives, the budget cap initially served to focus attention on the need to make progress on the reform initiatives. However, according to member state representatives, the spending cap made it more difficult to reach consensus on management reforms. On June 30, 2006, the General Assembly decided to lift the spending cap. According to UN officials and member state representatives, now that the cap is lifted, implementation of the reforms can continue, but questions remain about the pace and priorities for implementation of the reforms. Disagreement between the G-77 countries and the developed countries over the details of implementing the initiatives could continue to affect their progress. Member states disagree on some of the specifics of the reforms in areas such as the review of programs and activities and the details for creating the Human Rights Council, as discussed earlier, as well as the role of the Deputy Secretary-General. 
For example, two independent studies recommended the creation of a chief operating officer position and the Secretary-General’s March 2006 report recommended that the Deputy Secretary-General assume formal authority and accountability for the management and overall direction of the Secretariat’s operations. However, the spokesperson for the G-77 countries has stated in the Fifth Committee and in the General Assembly that, according to the UN charter, the Secretary-General is the UN’s Chief Administrative Officer and thus responsible for the organization’s management. In May 2006, the General Assembly passed a resolution that noted that the function of the post of Deputy Secretary-General should not diminish the role or responsibilities of the Secretary-General. The resolution further noted that the overall responsibility for management of the Organization rests with the Secretary-General. Therefore, it will be left to the discretion of the next Secretary-General to decide on the delegation of authority to his/her deputy. (App. II provides more information on the disagreements specific to each reform.) For many of the management reform proposals, the UN has not developed comprehensive implementation plans with associated time frames, cost estimates, and potential savings. Setting an implementation time line is a key practice for organizations undergoing change. However, many UN proposals we reviewed that are related to management reform do not include specific time frames. For example, although a senior U.S. official said that the July 2006 resolution is a positive step toward implementation of certain reforms, he noted that the section on oversight does not provide concrete actions. In addition, the resolution does not include specific time frames for implementing a fully operational ethics office or the Independent Audit Advisory Committee. Without establishing deadlines, it is difficult to hold managers accountable for completing reform efforts. 
Moreover, without comprehensive implementation plans, the total budgetary implications of the reform efforts are not clear. The UN has not developed or refined cost estimates for many of the initiatives, including improving certain field staff benefits and conditions to mirror those of headquarters staff; increasing investments in human resource development; introducing a new information communications technology system; and approving a staff buyout program. However, the Secretary-General has developed preliminary cost estimates for three key initiatives that alone could cost over $500 million—the proposed new information communications technology system ($120 million over several years), a one-time staff buyout ($50 to $100 million), and efforts to improve field staff benefits ($280 million annually). Moreover, the UN Secretariat said that these estimates will require further assessments before reliable estimates and a plan of action can be determined. Without determining cost estimates, it is difficult to ensure that financing will be available when needed. Likewise, the UN has not yet developed savings estimates because certain initiatives will require further assessment and then approval by the General Assembly. The Secretary-General anticipates that the costs for the reforms could be offset by savings from efforts such as relocation and outsourcing and the long-term benefits of a more efficiently run organization. However, the UN has not yet produced any concrete savings estimates, and efforts to produce savings have faced significant challenges. For instance, the Secretary-General said that the cost could be partially offset by savings in procurement reform. However, UN officials said that the UN Secretariat has not yet developed firm procurement savings estimates. In addition, proposals to streamline the way in which the organization delivers its services, which may result in savings, have experienced resistance from member states and staff members. 
In May 2006, the G-77 did not authorize the Secretary-General to conduct a cost- benefit analysis of his proposal to relocate translation, editing, and document production services. Public documents do not specify the G-77’s reason for not allowing a cost-benefit analysis to be undertaken. Further, the outsourcing of internal printing and publishing processes could generate savings but, according to UN officials, it could also face challenges from member states and staff to implement. To develop or refine cost and savings estimates, the Secretariat is conducting cost-benefit analyses and assessments in areas such as the proposed new information communications technology system, outsourcing and relocation, staff buyout, and public access to UN information. Appendix IV provides information on the reviews, assessments, and cost-benefit analyses that the Secretariat is preparing, including their expected time frames for completion to the extent stated by the UN. However, some of the cost-benefit analyses and assessments will not be available until March 2007 for member states to consider, and these will have a bearing on the overall reform package and, ultimately, the total cost of the reform. To date, the additional cost to member states to implement certain management reform initiatives has been about $40 million, which primarily reflects start-up costs for efforts such as the adoption of the International Public Sector Accounting Standards, the new ethics office, additional costs for the new Human Rights Council, and an increase to the working capital fund (see table 2). Therefore, based on the slow pace of the reform process and the time frames for completion of the assessments and cost-benefit analyses, the total budgetary implications of the reform effort, including the U.S. government’s share, remain unclear. 
Administrative guidance, such as staff regulations and rules that implement General Assembly resolutions, could complicate and sometimes restrict the process of implementing certain human resource initiatives. According to the Secretary-General, the existing human resources management framework was designed for a stable, largely headquarters-based environment, and currently more than half of the UN’s 30,000 staff members are serving in the field. The Secretary-General also said that the Secretariat’s increasingly complex mandates require a new skills profile that will enable it to respond in an integrated way to new needs in diverse areas such as peacekeeping and humanitarian assistance. In addition, salaries and other human resource costs make up almost 80 percent of the UN regular budget. As such, UN officials state that it would be impossible to achieve meaningful management reform without reforming human resources. The Secretary-General has proposed several human resource reforms, such as a staff buyout, replacing permanent contracts with open-ended appointments, better integration of staff worldwide, and outsourcing. However, administrative guidance may complicate the process of implementing some initiatives, such as the following:

In September 2005, member states agreed to consider a proposal from the Secretary-General for a one-time staff buyout. According to the Secretary-General, to target staff for buyout the UN Secretariat must analyze and determine the skills needed in the organization, taking into account proposed reform efforts such as relocation of work, outsourcing, and mandate review. Staff performing administrative functions that are targeted for outsourcing may be offered a buyout if their skills are no longer needed by the UN. The Secretary-General must also conduct consultations with UN staff representatives. 
UN officials said that it may be difficult for the Secretary-General, staff representatives, and member states to agree on the skills required to realign staff with the UN’s priorities. In addition, some of the cost-benefit analyses for the relocation and outsourcing initiatives will not be completed until March 2007. The UN Secretariat is developing a more integrated approach for staff to serve worldwide. However, UN officials said that staff may find ways to resist efforts to be transferred, especially if a transfer would result in leaving UN Headquarters or other desirable duty stations. According to the Secretary-General, staff are not sufficiently mobile, and their movement is hampered by multiple and restrictive mandates. The Secretary-General proposed the integration of field and headquarters staff into one global Secretariat with competitive conditions of service. This would include changing the staff rules to create one staff contract to mirror that of headquarters staff. Based on a study prepared in January 2006 by the International Civil Service Commission for the General Assembly, this proposed integration raises a number of complicated policy questions that will need to be addressed, including long-term contractual obligations, cost implications related to differences in the compensation packages, distortion of geographical distribution and gender balance, and complications for merit-based, transparent, and open selection procedures. Further, according to UN officials, proposals to reconsider a change in the way the UN delivers its services by relocating and outsourcing certain headquarters functions may meet with resistance from some member states and staff as jobs may be lost. According to the Secretary-General, the General Assembly established a number of conditions for outsourcing that severely restrict the circumstances under which it can be contemplated. One of those restrictions includes avoiding possible negative impact on staff. 
Thus, restrictive conditions such as these could complicate the process of implementing certain human resource initiatives. During the past few years, the inadequate oversight of the Oil for Food program and mismanagement of UN procurement activities have demonstrated the urgent need for UN management reform. Several independent reports in 2005 found that inefficient UN management operations persist and discussed the immediate need for management reform given the growth in complexity and significance of UN worldwide operations within the past decade. Despite several past reform efforts, long-standing concerns about weak UN management functions remain. As the largest financial contributor to the UN, the United States has taken a leadership role in calling for improved management processes. In addition, the United States, through the Department of State and the U.S. Mission to the United Nations, continues to take measures to advance reform of UN management processes. However, progress in management reform efforts has been slow. Proposals awaiting review cannot progress until the General Assembly approves them through a process that traditionally requires agreement by all 192 UN member states, and consensus building can be a difficult and lengthy process. Moreover, the UN has not agreed upon implementation plans for each reform effort that include established time frames and cost estimates—practices that increase the transparency and accountability of the reform process. The Secretary-General’s proposal for the adoption of International Public Sector Accounting Standards is a step toward increased transparency and accountability because it includes a detailed timetable for implementation. Until the UN undergoes successful management reform, its ability to respond effectively and efficiently to increasingly complex international crises is diminished. We recommend that the Secretary of State and the U.S. 
Permanent Representative to the UN work with other member states to encourage the General Assembly and the Secretary-General to include cost estimates and expected time frames for implementation and completion for each reform as it is approved. We also recommend that the Secretary of State’s annual U.S. Participation in the United Nations report to the Congress include a section on the status and progress of the major UN management reforms. The Department of State provided written comments on a draft of this report (see app. V). The Department of State agreed with our recommendations and stated that it will continue to work toward creating a more effective and accountable United Nations. In particular, it noted that it has seen too little in terms of results since the September 2005 Summit. Moreover, the Department of State also said that the Secretariat should be held accountable for implementing these reforms and will continue to work with other member states toward ensuring that a transparent reporting mechanism to the General Assembly is established. The Department of State also concurred fully with the need to keep the U.S. Congress informed of these management reform initiatives and will continue to monitor and inform the Congress as recommended. The UN did not provide written comments. In addition, the Department of State and the United Nations provided technical comments on our draft report, which were incorporated into the text where appropriate. We are sending copies of this report to interested members of the Congress, the Secretary of State, and the U.S. Permanent Representative to the UN. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. To identify and track management reforms, we reviewed key documents proposing United Nations (UN) management and human rights reforms and interviewed key officials. We obtained and reviewed official reports of the Secretariat and the Office of Internal Oversight Services (OIOS), Advisory Committee on Administrative and Budgetary Questions (ACABQ) documents, General Assembly resolutions, Secretary-General bulletins, Web sites, related budget documents, and statements from UN officials. We interviewed senior officials from UN departments in New York City. Specifically, we met with officials from the General Assembly Office of the President, the Office of the Deputy Secretary-General, the Departments of Management and Policy and Planning, ACABQ, the Office of Programme Planning, Budget and Accounts (OPPBA), and OIOS. During the course of our review, we also discussed the status of UN reforms with Department of State officials in Washington, D.C., and New York City. We selected reforms in the areas of management operations of the Secretariat, oversight, ethical conduct, review of programs and activities, and human rights to track in more detail. We determined that these were key areas of management reform through our review of UN documents and in our discussions with UN and U.S. officials. We focused our work on management reforms that began in 2005 and did not specifically address the 1997 and 2002 reform agendas. The 2005 reforms applied to the Secretariat and the UN’s governing bodies, including the General Assembly, the Economic and Social Council, and the Security Council. We did not include UN specialized agencies or funds and programs in our review. 
Other reform efforts such as the UN Peace Building Commission, Security Council reform, and governance were beyond the scope of this review. To determine the factors facing the implementation of UN reforms, we reviewed reports and documentation of the Secretariat, General Assembly, OIOS, Joint Inspection Unit, and International Civil Service Commission. In addition, we spoke with UN officials in New York. These included officials from the Office of the Deputy Secretary-General, the Department of Management, ACABQ, OPPBA, and OIOS. We also met with representatives from several member states and spoke with U.S. officials in Washington, D.C., and New York. We also interviewed outside observers of the UN system, including nongovernmental organizations and members of academia. Many cost estimates for the proposed reform initiatives are preliminary, and detailed cost estimates are being developed; therefore, we did not analyze the assumptions underlying these estimates to determine whether they are reasonable and reliable. To determine the reliability of data in the UN’s inventory of about 9,000 programs and activities (mandates) that are older than 5 years, we interviewed UN officials and performed some basic cross checks. The scope of the mandate review covers mandates of the General Assembly, the Economic and Social Council, and the Security Council that are older than 5 years and are active or potentially active. According to the Secretary-General, the resolutions adopted from year to year by each of the principal organs are the primary source of mandates. The Secretary-General also said that mandates are not easily defined or quantifiable, and a concrete legal definition of a mandate does not exist. In addition, the UN updates its inventory of mandates on a regular basis. We performed our analysis as of September 2006. 
We determined that the data were sufficiently reliable for the purposes of establishing the approximate number of mandates and comparing the approximate number of mandates that have and have not been renewed in the last 5 years. Further, we believe that the cost estimates and the associated funds that the General Assembly appropriated to date for reform efforts are sufficiently reliable for the purposes of this report. We performed our work between January and September 2006 in accordance with generally accepted U.S. government auditing standards. We identified and tracked the status of management reform initiatives in five key areas—management of the Secretariat, oversight, ethical conduct, review of programs and activities, and human rights—and identified disagreements among member states that may affect their implementation. Table 3 provides information on the status of major United Nations (UN) management reform initiatives, actions that are still pending, and points of disagreement. All dates are in 2006 unless otherwise indicated. The United Nations (UN) ethics office is implementing the UN’s new whistleblower protection policy, which took effect in January 2006. The policy protects UN staff from retaliation for reporting misconduct of any other staff. Retaliation, as defined in the policy, includes any detrimental action recommended, threatened, or taken because an individual reported misconduct or cooperated with an authorized audit or investigation. The policy shifts the burden of proof for retaliation to the UN organization and away from individuals, requiring the organization to prove in each case that the alleged retaliatory action is unrelated to the report of misconduct. According to the policy, the ethics office is responsible for receiving complaints about threatened or actual acts of retaliation against staff and keeping confidential records of all complaints received. 
The office is also responsible for conducting a preliminary review of the complaint to determine whether the complainant engaged in an activity protected by the whistleblower protection policy and whether there is sufficient evidence that the protected activity was a contributing factor in causing the alleged retaliation or threat of retaliation. In order for an individual to receive protection under the whistleblower protection policy, the report of misconduct should be made as soon as possible and no more than 6 years after the individual becomes aware of the misconduct. The individual reporting misconduct must submit information or evidence to support a reasonable belief that misconduct has occurred. UN staff may make reports of misconduct through established internal mechanisms, including the Office of Internal Oversight Services (OIOS), the Assistant Secretary-General for Human Resources Management, and the head of the department or office concerned. The whistleblower protection policy also protects staff who report misconduct to external mechanisms, such as the media or outside organizations, provided that all internal mechanisms have been exhausted. The UN is the first intergovernmental organization to provide such protection. Staff who believe that retaliatory action has been taken against them because they have reported misconduct or cooperated with an authorized audit or investigation are directed to forward all information and documentation to support their complaint to the ethics office. The whistleblower protection policy states that such complaints can be made in person, by regular mail, e-mail, fax, or through the ethics office helpline. Once the ethics office receives a complaint, it conducts a preliminary review, which should be completed within 45 days. According to staff in the ethics office, they try to complete their reviews within that time frame, but, in some cases, they need more time to speak to everyone involved in the case. 
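Purely as an illustration of the 45-day preliminary-review window described above, the deadline arithmetic can be sketched in a few lines of code; the function names and example dates here are hypothetical and are not part of the UN policy:

```python
from datetime import date, timedelta

# Illustrative sketch only: per the policy as described in the text, the
# ethics office's preliminary review of a retaliation complaint should be
# completed within 45 days of receiving the complaint.
PRELIMINARY_REVIEW_DAYS = 45

def preliminary_review_deadline(received: date) -> date:
    """Date by which the preliminary review should be completed."""
    return received + timedelta(days=PRELIMINARY_REVIEW_DAYS)

def is_overdue(received: date, today: date) -> bool:
    """True if the 45-day review window has already elapsed."""
    return today > preliminary_review_deadline(received)

# Example: a complaint received on January 10, 2006.
print(preliminary_review_deadline(date(2006, 1, 10)))  # 2006-02-24
```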
In reviewing a case, the ethics office reviews the evidence presented by the complainant and interviews the individual being accused and any other witnesses of the alleged retaliation. If the ethics office finds that there is a credible case of retaliation or threat of retaliation, it refers the matter in writing to OIOS for investigation and immediately notifies the complainant, in writing, that his or her case has been referred. According to the whistleblower protection policy, OIOS seeks to complete its investigation and write a report within 120 days. The report is submitted to the ethics office. Once the ethics office receives the investigation report, it informs the complainant, in writing, of the outcome of the investigation and makes recommendations on the case to the head of the department or office concerned and to the Under-Secretary-General for Management. The ethics office may recommend that disciplinary actions be taken against the retaliator. It may also recommend that measures be taken to correct the negative consequences suffered by the complainant as a result of the retaliatory action, including reinstatement or transfer to another office or function for which the individual is qualified. If the ethics office is not satisfied with the response from the head of the department or office concerned, it can make a recommendation directly to the Secretary-General, who then provides a written response to the ethics office and the head of the office concerned. The whistleblower protection policy states that retaliation against an individual for reporting misconduct is itself misconduct and will lead to disciplinary action. The United Nations (UN) Secretariat is currently conducting a number of assessments and cost-benefit analyses and preparing comprehensive reports. 
We identified a number of key studies, including a detailed cost study for the proposed new information and communications technology system, assessments of a staff buyout, and cost-benefit analyses of outsourcing internal printing and publishing processes and of relocating information technology support services. Some of these assessments will not be available for member states to consider until early 2007. In addition, the projected completion dates represent the dates when the UN Secretariat is expected to complete the reports and forward them to the legislative bodies for review. It is not clear when the General Assembly will review and make a decision on these initiatives. Table 4 lists the key assessments, cost-benefit analyses, and comprehensive reports that the UN Secretariat is currently conducting. The UN Secretariat did not provide us with detailed information, such as the status and projected completion date of each initiative. In addition to the individual named above, Phillip Thomas, Assistant Director; Jeanette Espinola, Stephanie Robinson, and Barbara Shields made key contributions to this report. Debbie J. Chung, Martin De Alteriis, Etana Finkler, and Grace Lui provided technical assistance. United Nations: Weaknesses in Internal Oversight and Procurement Could Affect the Effective Implementation of the Planned Renovation. GAO-06-877T. Washington, D.C.: June 20, 2006. United Nations: Oil for Food Program Provides Lessons for Future Sanctions and Ongoing Reform. GAO-06-711T. Washington, D.C.: May 2, 2006. United Nations: Internal Oversight and Procurement Controls and Processes Need Strengthening. GAO-06-710T. Washington, D.C.: April 27, 2006. United Nations: Funding Arrangements Impede Independence of Internal Auditors. GAO-06-575. Washington, D.C.: April 25, 2006. United Nations: Lessons from Oil for Food Program Indicate Need to Strengthen Internal Controls and Oversight. GAO-06-330. Washington, D.C.: April 25, 2006. 
United Nations: Procurement Internal Controls Are Weak. GAO-06-577. Washington, D.C.: April 25, 2006. Peacekeeping: Cost Comparison of Actual UN and Hypothetical U.S. Operations in Haiti. GAO-06-331. Washington, D.C.: February 21, 2006. United Nations: Preliminary Observations on Internal Oversight and Procurement Practices. GAO-06-226T. Washington, D.C.: October 31, 2005. United Nations: Sustained Oversight Is Needed for Reforms to Achieve Lasting Results. GAO-05-392T. Washington, D.C.: March 2, 2005. United Nations: Oil for Food Program Audits. GAO-05-346T. Washington, D.C.: February 15, 2005. United Nations: Observations on the Oil for Food Program and Areas for Further Investigation. GAO-04-953T. Washington, D.C.: July 8, 2004. United Nations: Observations on the Oil for Food Program and Iraq’s Food Security. GAO-04-880T. Washington, D.C.: June 16, 2004. United Nations: Observations on the Management and Oversight of the Oil for Food Program. GAO-04-730T. Washington, D.C.: April 28, 2004. United Nations: Observations on the Oil for Food Program. GAO-04-651T. Washington, D.C.: April 7, 2004. Recovering Iraq’s Assets: Preliminary Observations on U.S. Efforts and Challenges. GAO-04-579T. Washington, D.C.: March 18, 2004. United Nations: Reforms Progressing, but Comprehensive Assessments Needed to Measure Impact. GAO-04-339. Washington, D.C.: February 13, 2004. Weapons of Mass Destruction: U.N. Confronts Significant Challenges in Implementing Sanctions against Iraq. GAO-02-625. Washington, D.C.: May 23, 2002. United Nations: Reform Initiatives Have Strengthened Operations, but Overall Objectives Have Not Yet Been Achieved. GAO/NSIAD-00-150. Washington, D.C.: May 10, 2000. United Nations: Progress of Procurement Reforms. GAO/NSIAD-99-71. Washington, D.C.: April 15, 1999. United Nations: Status of Internal Oversight Services. GAO/NSIAD-98-9. Washington, D.C.: November 19, 1997.
Despite various reform efforts, significant inefficiencies in United Nations (UN) management operations persist. In September 2005, heads of UN member states approved a resolution that called for a series of reforms to strengthen the organization. As the largest financial contributor to the UN, the United States has a strong interest in the progress of UN reform initiatives. GAO was asked to (1) identify and track the status of UN management reforms in five key areas and (2) identify factors that may affect the implementation of these reform initiatives. To address these objectives, GAO reviewed documents proposing UN management reform and interviewed U.S. and UN officials. Most of the UN management reforms in the five areas GAO examined--management operations of the Secretariat, oversight, ethical conduct, review of programs and activities, and human rights--are either awaiting General Assembly review or have been recently approved. In addition, many proposed or approved reforms do not have an implementation plan that establishes time frames and cost estimates. First, in July 2006, the General Assembly approved proposals to improve the management operations of the Secretariat, such as upgrading information technology systems and giving the Secretary-General some flexibility in spending authority. In addition, in fall 2006, the General Assembly will review other proposals, such as procurement and human resource reforms. Second, implementation of proposals to improve the UN's oversight capabilities, such as strengthening the capacity of the Office of Internal Oversight Services and establishing the Independent Audit Advisory Committee, are pending General Assembly review in fall 2006. Third, the UN established an ethics office with temporary staff in January 2006 that has developed an internal timetable for implementing key initiatives. However, it is too early to determine whether the office will be able to fully carry out its mandate. 
Fourth, UN member states agreed to complete a review of UN programs and activities in 2006, but progress has been slow and the results and time line for completion remain uncertain. Fifth, the General Assembly created a new Human Rights Council in April 2006, but significant concerns remain about the council's structure. GAO identified several factors that may affect the UN's ability to fully implement management reforms. First, although all UN member states agree that UN management reforms are needed, disagreements about the overall implications of the reforms could significantly affect their progress. Most member states are concerned that some of the reforms could increase the authority of the Secretariat at the expense of the General Assembly, thus decreasing their influence over UN operations. Member states also disagree on some of the specifics of the reforms in areas such as the review of programs and activities and the role of the Deputy Secretary-General. Second, the general absence of an implementation plan for each reform that establishes time frames and cost estimates could affect the UN's ability to implement the reform initiatives. Without establishing deadlines or determining cost estimates, it is difficult to hold managers accountable for completing reform efforts and ensure that financing will be available when needed. Third, administrative guidance, such as staff regulations and rules that implement General Assembly resolutions, could complicate the process of implementing certain human resource reform proposals. For example, according to the Secretary-General, the General Assembly established a number of conditions for outsourcing that severely restrict the circumstances under which it can be contemplated.
Mr. Chairman and Members of the Subcommittee: We are pleased to be here today to participate in the Subcommittee’s oversight hearing on the U.S. Postal Service. My testimony will (1) focus on the performance of the Postal Service and the need for improving internal controls and protecting revenue in an organization that takes in and spends billions of dollars each year and (2) highlight some of the key reform and oversight issues that continue to challenge the Postal Service and Congress as they consider how U.S. mail service will be provided in the future. I will also provide some observations from our ongoing work relating to labor-management relations at the Postal Service and other areas. My testimony is based on our ongoing work and work that we completed over the past year. First, I would like to discuss both the reported successes and some of the remaining areas of concern related to the Postal Service’s performance. Last year, the Postal Service reported that it had achieved outstanding financial and operational performance. Financially, the Postal Service had the second most profitable year in its history. According to the Postal Service’s 1996 annual report, its fiscal year 1996 net income was $1.6 billion. Similarly, with regard to mail delivery service, the Postal Service continued to meet or exceed its goals for on-time delivery of overnight mail. Most recently, the Postmaster General announced that, during 1996, the Postal Service delivered 91 percent of overnight mail on time or better. Additionally, during fiscal year 1996, the Postal Service’s volume exceeded 182 billion pieces of mail and generated more than $56 billion in revenue. While these results are encouraging, other performance data suggest that some areas of concern warrant closer scrutiny. For example, last year’s delivery of 2-day and 3-day mail—at 80 and 83 percent respectively—did not score as high as overnight delivery. 
Such performance has raised a concern among some customers that the Postal Service’s emphasis on overnight delivery is at the expense of 2-day and 3-day mail. Additionally, although its mail volume continues to grow, the Postal Service is concerned that customers increasingly are turning to its competitors or alternative communications methods. In 1996, mail volume increased by about one-half of the anticipated increase in volume. Containing costs remains another key challenge: the Postal Service’s financial results showed that its 1996 operating expenses increased 4.7 percent, compared to a 3.9 percent increase in operating revenues. Labor costs, which include pay and benefits, continued to account for almost 80 percent of the Postal Service’s operating expenses, and the Postal Service expects that its costs for compensation and benefits will grow more than 6 percent in 1997. Moreover, controlling costs will be critical with regard to capital investments in 1997, as the Postal Service plans to commit $6 billion to capital improvements. Over the next 5 years, the Service plans to devote more than $14 billion in capital investments to technology, infrastructure improvements, and customer service and revenue initiatives. The Postal Service’s continued success in both operational and financial performance will depend heavily on its ability to control operating costs, strengthen internal controls, and ensure the integrity of its services. However, we found several weaknesses in the Postal Service’s internal controls that contributed to unnecessary cost increases. We reported in October 1996 that internal controls over Express Mail Corporate Accounts (EMCA) were weak or nonexistent, which resulted in the potential for abuse and increasing revenue losses over the past 3 fiscal years. Specifically, we found that some mailers obtained express mail services using invalid EMCAs and that the Postal Service did not collect the postage due. 
Consequently, in fiscal year 1995, the Postal Service lost express mail revenue of about $800,000 primarily because it had not verified EMCAs that were later determined to be invalid. Since our report was issued, the Postal Service has developed plans to address these deficiencies. The Postal Service is revising its regulations to require an initial deposit of $250, up from $100, to open an EMCA. It also plans to issue a memorandum requiring that district managers ensure that employees perform the necessary express mail acceptance checks so that the correct postage amounts can be collected. Finally, the Postal Service plans to install terminals in mail processing plants to allow Express Mail packages that are deposited in collection boxes or picked up at customers’ locations to be checked for valid EMCA numbers before they are accepted into the mail system. Similarly, we reported in June 1996 that weaknesses in the Postal Service’s controls for accepting bulk business mail prevent it from having reasonable assurance that all significant amounts of postage revenue due are received when mailers claim presort/barcode discounts. We reported that during fiscal year 1994, as much as 40 percent of required bulk mail verifications were not performed. Bulk mail totaled almost one-half of the Postal Service’s total revenue of $47.7 billion in fiscal year 1994. At the same time, we found that supervisors performed less than 50 percent of the required follow-up verifications to determine the accuracy of the clerks’ work. In response to our recommendations, the Postal Service is developing new and strengthening existing internal controls to help prevent revenue losses in bulk mailings. For example, the Postal Service plans to improve the processes used in verification of mail, including how units are staffed, how verifications are performed, and how results of acceptance work are reported and reviewed. 
To avoid additional unwarranted costs, the Postal Service also needs to better ensure the overall integrity of its acquisitions and services. We concluded, in our January 1996 report, that the Postal Service did not follow required procedures for seven real estate or equipment purchases. We estimated that these seven purchases resulted in the Postal Service’s expending about $89 million on penalties and on unusable or marginally usable property. Three of the seven purchases involved ethics violations arising from the contracting officers’ failure to correct situations in which individuals had financial relationships with the Postal Service and with certain offerors. We also pointed out that the Office of Government Ethics was reviewing the Postal Service’s ethics program and reported that all areas of the program required improvement. The Office of Government Ethics subsequently made a number of recommendations designed to ensure that improvement of the Postal Service’s ethics program continues through more consistent oversight and management support. The Postal Service subsequently acted on these recommendations, and as a result, the Office of Government Ethics closed its remaining open recommendations. Additionally, strengthening program oversight is essential to effective mail delivery. We found that the Postal Service did not exercise adequate oversight of its National Change of Address (NCOA) program. We reported that the Postal Service took a positive step toward dealing with the inefficiencies of processing misaddressed mail. At the same time, however, we found that the NCOA program was operating without clear procedures and sufficient oversight to ensure that the program was operating in compliance with the privacy provisions of federal laws. Accordingly, we recommended that the Postal Service strengthen oversight of NCOA by developing and implementing written oversight procedures. In response to our recommendation, the Postal Service developed written oversight procedures for the NCOA program. 
Most recently, we issued a report that describes how the Postal Service closes post offices and provides information on the number closed since 1970—over 3,900 post offices. We also provided information on the number of appeals and their dispositions, as well as some information about the communities where post offices were closed in fiscal years 1995 and 1996. Generally, the Postal Service initiated the closing process after a postmaster vacancy occurred through retirement, transfer, or promotion, or after the termination of the post office building’s lease. In each case, the Postal Service proposed less costly alternative postal services to the affected community, such as establishing a community post office operated by a contractor or providing postal deliveries through rural routes and cluster boxes. A key reform issue involves the Private Express Statutes, which restrict private letter delivery and help ensure both universal mail service and that letter mail will bear a uniform rate. In our September 1996 report, we emphasized the importance of recognizing the Statutes’ underlying purpose and determining how changes may affect universal mail service and uniform rates. Most important among the potential consequences is that relaxing the Statutes could open First-Class mail services to additional competition, thus possibly affecting postal revenues and rates and the Postal Service’s ability to carry out its public service mandates. However, at the same time, the American public could benefit through improved service. It will be important to take into account the possible consequences for all stakeholders in deciding how mail services will be provided to the American public in the future. Another key reform issue is the future role of the Postal Service in the constantly changing and increasingly competitive communications market. For example, the use of alternative communications methods such as electronic mail, faxes, and the Internet continues to grow at phenomenal rates in the United States and is beginning to affect the Postal Service’s markets. 
At the same time, the Postal Service’s competitors continue to challenge it for major shares of the communications market. According to the Postmaster General, the Postal Service has been losing market share in five of its six product lines. It seems reasonable to assume that these alternative communications methods are likely to be used more and more. In addition, international mail has become an increasingly vital market in which the Postal Service competes. In our March 1996 report, we pointed out that, although the Postal Service has more flexibility in setting international rates, it still lost business to competitors because rates were not competitive and delivery service was not reliable. We also identified several issues surrounding the Postal Service’s role in the international mail arena that remain unresolved. Chief among them is the appropriateness of the Postal Service’s pricing practices in setting rates for international mail services. Our recent report on postal reform in Canada examined Canada Post Corporation’s (CPC’s) universal service obligations, including the frequency of mail delivery to some businesses, as well as in urban and rural areas. CPC uses a regulatory rate-making process that includes the opportunity for public comment and government approval for basic domestic and international single-piece letters. However, postage rates for other mail services can be approved by CPC without issuing regulations or obtaining government approval. Some of the key concerns that have been raised by CPC customers include CPC’s closure of rural post offices and its conversion of others to private ownership. In addition, CPC’s competitors have expressed concern about whether CPC is cross-subsidizing the prices of its courier services with monopoly revenues. The Canadian government has responded to these concerns by continuing its moratorium on post office closings and directing CPC to discontinue delivery of unaddressed advertising mail. The government is also considering a call for additional government oversight of CPC. Mr. 
Chairman, as you are aware, we also have a number of ongoing reviews related to postal reform. For example, in concert with your focus on the future role of the Postal Service, we are currently reviewing the role and structure of the Postal Service’s Board of Governors in order to determine its strengths and weaknesses. The Board of Governors is responsible for directing and controlling the expenditures of the Postal Service, reviewing its practices, participating in long-range planning, and setting policies on all postal matters. In addition to obtaining the views of current and former Board members, we will provide information on the role and structure of Boards in other types of government-created organizations. Another issue important to postal reform that we are reviewing involves access to mailboxes. More specifically, we plan to provide information on (1) public opinions on the issue of mailbox restrictions; (2) views of the Postal Service and other major stakeholders; and (3) this country’s experience with mailbox security and enforcement of related laws, compared with the experiences in selected other countries. Finally, our ongoing work on labor-management relations indicates that long-standing difficulties persist despite the initiatives that have been established to address them. For example, the number of grievances requiring formal arbitration has increased almost 76 percent, from about 51,000 in fiscal year 1993 to over 90,000 in fiscal year 1996. These difficulties continue to plague the Service primarily because the major postal stakeholders (the Postal Service, four major unions, and three management associations) cannot agree on common approaches for addressing their problems. We continue to believe that until the major postal stakeholders develop a framework agreement that would outline common objectives and strategies, efforts to improve labor-management relations will likely continue to be fragmented and difficult to sustain. 
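As a quick, illustrative check of the grievance arithmetic cited above (the figures are the approximate values reported in the testimony; the variable names are ours):

```python
# Grievances requiring formal arbitration, as cited in the testimony.
fy1993_grievances = 51_000  # "about 51,000" in fiscal year 1993
fy1996_grievances = 90_000  # "over 90,000" in fiscal year 1996

# Percent increase from fiscal year 1993 to fiscal year 1996.
increase = (fy1996_grievances - fy1993_grievances) / fy1993_grievances * 100
print(f"{increase:.1f}% increase")  # 76.5% increase
```

With the rounded inputs, the computed increase of roughly 76 percent matches the figure in the testimony.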
The Government Performance and Results Act (GPRA) provides a mechanism that may be useful in focusing a dialogue that could lead to a framework agreement. GPRA provides a legislatively based mechanism for the major stakeholders, including Congress, to jointly engage in discussions that focus on an agency’s mission and on establishing goals, measuring performance, and reporting on mission-related accomplishments. GPRA can be instrumental to the Postal Service’s efforts to better define its current and future role. As results-oriented goals are established to achieve that role, the related discussions can also provide a foundation for a framework agreement. Successful labor-management relations will be critical to achieving the Postal Service’s goals. The Postal Service and Congress will need results-oriented goals and sound performance information to most effectively address some of the policy issues that surround the Postal Service’s performance in a dynamic communications market. Recognizing that the changes envisioned by GPRA do not come quickly or easily, sustained oversight by the Postal Service and Congress will be necessary. Customer satisfaction at the residential and business levels will continue to be a critical area as the Postal Service strives to improve customer service in order to remain competitive. The Postal Service has made considerable progress in improving its financial and operational performance. Sustaining this progress will be dependent upon ensuring that the key issues we identified, such as controlling costs, protecting revenues, and clarifying the role of the Postal Service in an increasingly competitive communications market, are effectively addressed by the Postal Service and Congress. Mr. Chairman, this concludes my prepared statement. I have attached a list of our Postal Service products issued since January 1996. I would be pleased to respond to any questions you or members of the Subcommittee may have. U.S. 
Postal Service: Information on Post Office Closures, Appeals, and Affected Communities (GAO/GGD-97-38BR, Mar. 11, 1997). Postal Reform in Canada: Canada Post Corporation’s Universal Service and Ratemaking (GAO/GGD-97-45BR, Mar. 5, 1997). U.S. Postal Service: Revenue Losses From Express Mail Accounts Have Grown (GAO/GGD-97-3, Oct. 24, 1996). Postal Service: Controls Over Postage Meters (GAO/GGD-96-194R, Sept. 26, 1996). Inspector General: Comparison of Certain Activities of the Postal IG and Other IGs (GAO/AIMD-96-150, Sept. 20, 1996). Postal Service Reform: Issues Relevant to Changing Restrictions on Private Letter Delivery (GAO/GGD-96-129A/B, Sept. 12, 1996). U.S. Postal Service: Improved Oversight Needed to Protect Privacy of Address Changes (GAO/GGD-96-119, Aug. 13, 1996). U.S. Postal Service: Stronger Mail Acceptance Controls Could Help Prevent Revenue Losses (GAO/GGD-96-126, June 25, 1996). U.S. Postal Service: Unresolved Issues in the International Mail Market (GAO/GGD-96-51, Mar. 11, 1996). Postal Service: Conditions Leading to Problems in Some Major Purchases (GAO/GGD-96-59, Jan. 18, 1996). The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. 
A recorded menu will provide information on how to obtain these lists.
GAO discussed the challenges that confront the Postal Service and Congress as they consider how to sustain the Postal Service's performance and maintain a competitive role in providing mail service to the American public in the future. GAO noted that: (1) the Postal Service reported that fiscal year (FY) 1996 represented the second year in a row that its financial performance was profitable and operational performance improved; (2) the Postal Service's 1996 net income was $1.6 billion and it delivered 91 percent of overnight mail on time; (3) additionally, for FY 1996, the Postal Service's volume exceeded 182 billion pieces of mail and generated more than $56 billion in revenue; (4) while these results are encouraging, other performance data suggest that some areas warrant closer scrutiny; (5) last year's delivery of 2-day and 3-day mail, at 80 and 83 percent respectively, did not score as high as overnight delivery; (6) the concern among customers is that the Postal Service's emphasis on overnight delivery is at the expense of 2-day and 3-day mail; (7) additionally, although its mail volume continues to grow, the Postal Service is concerned that customers increasingly are turning to its competitors or alternative communications methods; (8) in 1996, mail volume increased by about one-half of the anticipated increase in volume; (9) containing costs is another key challenge that GAO has reported on previously; (10) GAO has also found several weaknesses in the Postal Service's internal controls that contributed to increased costs; (11) the Postal Service's continued success in both financial and operational performance will depend heavily on controlling operating costs, strengthening internal controls, and ensuring the integrity of its services; (12) the prospect that pending postal legislation may place the Postal Service in a more competitive arena with its private sector counterparts has prompted congressional consideration of some key reform issues; (13) these issues
include how proposed changes to the Private Express Statutes may affect universal mail service, postal revenues, and rates; (14) another reform issue is the future role of the Postal Service in an increasingly competitive, constantly changing communications market; (15) congressional oversight remains a key tool for improving the organizational performance of the Postal Service; (16) one of the most important areas for oversight is labor-management relations; (17) despite the initiatives that have been established to address them, the long-standing labor-management relations problems GAO identified in 1994 remain unresolved; and (18) also, the Postal Service's automation efforts will continue to require the attention of both the Postal Service and Congress to ensure that increased productivity and an adequate return on investments are realized.
Enactment of the TANF block grant significantly changed federal welfare policy and gave states more flexibility in designing their welfare programs. For example, states have flexibility in setting benefit levels, eligibility requirements, work requirements, and policies for sanctioning noncompliant recipients (that is, reducing or discontinuing their benefits). Due to this flexibility, TANF programs differ substantially from state to state. These different state policies can affect the extent to which a state’s TANF recipients participate in work activities and the type of work activities they engage in. States also have flexibility in using TANF block grant funds and in using state funds—referred to as maintenance-of-effort (MOE) funds—that states were required to use toward TANF purposes in order to qualify for the block grant. For example, if states want to exclusively use MOE funds for a particular group of welfare recipients, such as those in two-parent families, they can use these funds through separate state programs (SSPs) for those recipients and remove them from the TANF requirements. Due to the importance of state flexibility under TANF, PRWORA limited HHS’s authority to regulate state TANF programs. PRWORA also substantially reduced HHS staff available to implement TANF. However, PRWORA established penalties for states, such as for not meeting required levels of work participation, and HHS has authority to regulate in situations where penalties are involved. TANF has two work participation rates—one that applies to all adult-headed families and another that applies to two-parent families. A certain percentage of each state’s adult-headed TANF cases receiving cash assistance must participate in work-related activities for a minimum number of hours each week or the state may face financial penalty. The categories of work activities that can be counted for the purpose of the performance measure are outlined in TANF law and regulations.
If TANF recipients engage in other activities provided or permitted under the state’s TANF program, then those activities do not count toward meeting the federal work participation requirements. Further, if TANF recipients engage in work activities for less than the minimum required number of hours, then those recipients do not count as being engaged in work for purposes of the performance measure. When a state does not meet its required level of work participation, HHS will send the state a penalty notice. The state then has the opportunity to avoid a penalty by providing reasonable cause why it did not meet the work participation rate or by submitting a corrective compliance plan that will correct the violation and ensure continued compliance with work participation requirements. Since implementation of TANF, numerous states have received penalty notices from HHS for not meeting the required level of work participation. However, most of these states have avoided penalties by submitting corrective compliance plans. As of February 2005, 11 states and the District of Columbia had paid penalties for not meeting the two-parent work participation rate. Most of these penalties were for the first 4 years (fiscal years 1997-2000) of TANF implementation, and 5 states and the District of Columbia have paid penalties for more than 1 year. Each quarter, states are required to report to ACF monthly data on their TANF cases, including the number of hours each adult recipient spent in activities that count toward meeting federal work requirements. States have the option of reporting to ACF on all their TANF cases (the universe) or on a scientifically drawn sample of TANF cases. Using the data reported by states, ACF calculates an annual work participation rate for each state. A state’s annual work participation rate is based on the state’s average monthly rate for the year. 
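As a simplified illustration of the calculation just described, the sketch below averages monthly rates into an annual rate. The function names and sample counts are hypothetical, not ACF's actual implementation, which involves additional details such as state sampling.

```python
# Hedged sketch of the rate calculation described above: a state's annual
# work participation rate as the average of its monthly rates. Names and
# figures are illustrative assumptions.

def monthly_rate(participating, subject_to_requirement):
    """Share of cases subject to the requirement that were engaged in
    countable work activities for the required hours that month."""
    if subject_to_requirement == 0:
        return 0.0
    return participating / subject_to_requirement

def annual_rate(monthly_counts):
    """Average monthly rate for the year, per the report's description."""
    rates = [monthly_rate(p, s) for p, s in monthly_counts]
    return sum(rates) / len(rates)

# Illustrative monthly (participating, subject-to-requirement) counts.
counts = [(40, 100), (45, 100), (50, 100), (42, 100),
          (48, 100), (46, 100), (44, 100), (47, 100),
          (49, 100), (43, 100), (41, 100), (45, 100)]
print(round(annual_rate(counts), 3))  # → 0.45
```

Averaging monthly rates, rather than pooling annual totals, means that a month with few countable cases weighs as heavily as any other month.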
See appendix I for information on elements of the work participation requirement and how the work participation rate is calculated by ACF. The TANF legislation and regulations outline 12 categories of work activities that can count toward the federal work requirement. The TANF regulations name the categories and require each state to include its definition of each work activity in the annual report it must file with HHS. Hours spent in some activities (referred to as supplemental activities) generally cannot count toward the federal work requirement unless hours are also spent in other countable activities (referred to as core activities). Some activities have restrictions on the amount of time that can be spent in them. The 12 categories and their time restrictions are shown in table 1.

The Single Audit Act, as amended (Chapter 75 of Title 31, United States Code), requires state and local governments and nonprofit organizations that expend $500,000 or more in federal funds during the year to undergo an organizationwide audit. These audits focus on the entity’s internal controls and compliance with laws and regulations governing federal awards and should be viewed as a tool that raises relevant or pertinent questions rather than a document that answers all questions. Office of Management and Budget (OMB) Circular A-133, Audits of States, Local Governments, and Non-Profit Organizations, provides the federal guidance for single audits. It contains a Compliance Supplement that summarizes key information about federal programs and identifies audit objectives and suggested procedures for auditors’ use in determining compliance with the requirements. The Compliance Supplement contains information on TANF, along with over a hundred other federal programs. The information on TANF includes some key line items, including those for reporting hours of work activity, from the TANF data report that states must submit to ACF.
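The counting rules outlined earlier (a weekly minimum of hours, with supplemental activities countable only on top of core activities) can be sketched as follows. The specific thresholds used here (30 total hours, 20 core hours) are assumptions for illustration, not figures taken from this report.

```python
# Illustrative countability check; the 30-hour total and 20-hour core
# thresholds are assumed for illustration only.

CORE_MINIMUM_HOURS = 20
TOTAL_MINIMUM_HOURS = 30

def counts_toward_rate(core_hours, supplemental_hours):
    """True if the recipient is countable for the week under the
    sketch's assumed thresholds."""
    if core_hours < CORE_MINIMUM_HOURS:
        # Supplemental hours generally cannot substitute for core hours.
        return False
    return core_hours + supplemental_hours >= TOTAL_MINIMUM_HOURS

print(counts_toward_rate(25, 10))  # → True  (core floor and total met)
print(counts_toward_rate(15, 20))  # → False (total met, core floor not)
```

The second example shows why a recipient with substantial hours can still fail to count: hours in the wrong category do not satisfy the core floor.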
Internal controls comprise the plans, methods, and procedures an organization uses to meet its missions, goals, and objectives. Internal controls are a series of actions and activities that occur throughout an organization’s operations and on an ongoing basis. They provide reasonable assurance that an organization achieves its objectives of (1) effective and efficient operations, (2) reliable financial reporting, and (3) compliance with laws and regulations. An organization’s internal controls over collecting and reporting data could include numerous processes and procedures, such as guidance that defines the specific data to be collected and any documentation needed to support that data and monitoring to ensure that the reported data are complete and accurate. We found that differences in how states define the 12 categories of federal work activities result in some states counting hours recipients spend in activities that other states do not consider allowable activities for meeting federal work participation requirements. Also, some states have made changes in their definitions of some categories of federal work activities, making what is measured by those states’ work participation rates inconsistent from year to year. Further, some differences across states in their classification of adult recipients can result in certain types of recipients being excluded from some states’ work participation rates but included in other states’ rates. Although PRWORA outlines 12 categories of work activities that can count toward meeting federal work participation requirements, states are able to define the specific activities that fall under each of the categories. We found that differences in how states define the 12 categories of work result in some states counting hours spent in certain activities toward meeting the work participation rate, while other states do not count hours spent in those activities. 
Although PRWORA outlined 12 categories of work activities that count toward meeting work participation rates, PRWORA does not prevent states from allowing their recipients to participate in other noncountable activities, such as activities that help the recipients overcome problems that prevent them from working. In our review of state TANF documents, we identified several activities that were commonly mentioned but that were treated differently by different states, such as substance abuse treatment. One state may include the activity under 1 of the 12 categories of work, while other states may consider that activity a state activity that does not count toward meeting the federal work requirement. Table 2 shows how many of the 10 reviewed states counted certain activities that were commonly mentioned in state TANF documents toward meeting federal work participation requirements. (See app. II for states included in the table). States were counting these activities toward meeting the work participation rate by defining one of the 12 categories of work as including the activities. Some states have a very broad definition for at least one federal category of work that allows the states to include many diverse activities under the category. For example, one state that defines Community Service as “an activity approved by your case manager which benefits you, your family, your community or your tribe” considered all five of the activities shown in table 2 to fall under the Community Service category. 
A few states had activities listed in the definition of a federal work activity that we did not see in other states’ definitions, such as:
- bed rest, short-term hospitalizations, and personal care activities a participant is engaged in as part of recovery from a medical problem (Job Search/Job Readiness);
- physical rehabilitation, which could include massage, regulated exercise, or supervised activity with the intent of promoting recovery or rehabilitation (Job Search/Job Readiness);
- activities to promote a healthier life style that will eventually assist the recipient in obtaining employment, such as personal journaling, motivational reading, exercise at home, smoking cessation, and weight loss promotion (Job Search/Job Readiness);
- participating in your child’s Head Start or Early Head Start programs by participating in home visits, parent meeting presentations, and classroom volunteering (Community Service); and
- helping a friend or relative with household tasks and errands (Community Service).
Increasing the number of activities that it counts toward the federal work participation rate should help a state increase its work participation rate and avoid incurring penalties. Out of our 10 reviewed states, 2 states counted all five of the activities shown in table 2 above, while 1 state did not count any. Such variation in the number of activities that states count toward the federal work participation rate suggests that the states are subject to different standards for work participation. Because of the differences in states’ internal controls over their work participation data (discussed in the next section of this report), the data cannot be relied upon for making comparisons across states. Therefore, we did not analyze states’ work participation rates in relation to the number of activities they counted toward work participation.
Three of the 10 states we reviewed had made changes in their definitions of work activities within the past 2 years that may have affected their work participation rates and that could result in work participation rates that are not comparable over time. Kansas had a dramatic change in its work participation rate after changing some of its definitions. This state had a waiver exempting it from the 6-week limit for counting hours recipients spent in Job Search/Job Readiness activities. For states with waivers, the effective work participation rate is calculated based on the conditions of the waiver. However, ACF also calculates a without-waiver rate for states with waivers. After the state lost its waiver, it redefined some of its categories of work by placing activities previously in the Job Search/Job Readiness category (the category that had been covered by the waiver) into other categories that do not have time restrictions, such as Community Service. For this state, the 2003 with-waiver rate was significantly higher than the without-waiver rate. If the without-waiver rate had been the effective rate, the state would have been subject to penalty for not meeting the required work participation rate. One month after the waiver expired and the definitions were changed, the state’s rate without the waiver rose over 50 percentage points to reach the level of the 2003 with-waiver rate. Another state, Nevada, also moved some activities from Job Search/Job Readiness to Work Experience to avoid the 6-week time limit on counting hours spent in Job Search/Job Readiness. According to a state official, the change was made because, as a result of the 6-week time limit, field workers would sometimes make decisions that were not in the best interest of the recipients and move recipients out of activities too quickly. The state official believes that the change is likely to help raise the state’s work participation rate. 
Georgia added an additional activity (caring for a disabled relative who does not live with the recipient) to its Community Service category and broadened the definition of job skills training to allow for general training for a job, rather than just training for a specific job. According to a state official, these changes have helped the state increase its work participation rate. Some differences among states in their classification of recipients affect whether or not recipients are included in the work participation rate calculation. We found the following different approaches that remove recipients from the work participation rate calculation. Creating separate state programs for two-parent families. By serving two-parent families through separate state programs, states remove those families from the calculation of work participation rates. Four of the 10 states in our review (California, Georgia, Maryland, and Nevada) had created separate state programs for two-parent families. Officials from Georgia, Maryland, and Nevada said that they created the programs because they wanted to avoid having to meet the higher two-parent family work requirement. Officials from the states we reviewed with separate state programs for two-parent families said that although the states do not have to meet a federal work participation requirement for their two-parent families, they still require the adult recipients in the two-parent families to comply with the states’ work requirements. Moving recipients with significant barriers into a separate state program. Nevada placed recipients who are less likely to meet the federal work participation requirements in a separate state program, thus removing them from the work participation rate calculation. These include recipients (1) with pending applications for Supplemental Security Income, (2) with medical difficulties confirmed by a physician, (3) in the third trimester of pregnancy, and (4) caring for a disabled family member.
According to a state official, these recipients are still required to participate in work activities to the extent that they are able. Reclassifying cases as child-only. California removes adults from TANF cases when they are sanctioned, thus changing the cases from adult-headed cases to child-only cases. Because child-only cases are not included in state work participation calculations, the reclassification allows the state to avoid counting noncomplying adults in the calculation, which in turn is likely to result in a higher work participation rate. According to a state official, the state’s practice of reclassifying cases this way preceded the implementation of TANF and therefore was not intended to influence the state’s TANF work participation rate. Some of the states we reviewed did not have internal controls to help ensure that reported hours of participation in work activities are in accordance with HHS guidance. Other states have implemented systematic practices to help ensure that reported hours are in accordance with HHS guidance. Officials in some states cited challenges to obtaining support for hours of participation in unsubsidized employment. Some of the states we reviewed did not have internal controls to help ensure that reported hours of participation in work activities are in accordance with HHS guidance. The HHS guidance (as discussed more fully later in this report) requires that states report hours recipients actually participated in work activities rather than hours that the recipients were scheduled to participate. Internal control weaknesses among the states we reviewed include the following: Guidance and/or standard processes allow reporting of scheduled hours. In some states, we found that the hours recorded to show how recipients plan to comply with state work requirements (scheduled hours) were reported to ACF as hours actually worked. 
Reporting hours scheduled instead of hours worked does not take into account unexpected events or noncompliance on the part of the recipient that would result in scheduled hours being different than the hours actually worked. Allowing scheduled hours to be reported was most common for unsubsidized employment, but in a few states, we found guidance allowing scheduled hours for other work activities, such as vocational education. In one state, guidance instructs that a set number of hours be recorded for certain activities, such as 30 hours per week for parents involved in their children’s Head Start program. However, the guidance does not indicate that the number of hours recorded should be verified to ensure that they were actual hours of participation. Lack of guidance on the type of documentation needed to support reported hours of work activities. Without guidance, there is no assurance that the local staff collecting the data know what type of documentation is adequate to support hours reported or whether any documentation is required. The type of support needed would depend on the activity but could include pay stubs and time and attendance reports. Without guidance, staff at different locations are more likely to use different standards for what support is needed. Guidance allows for reporting hours missed for good cause. Some states have guidance specifying that when recipients are absent from a scheduled activity and the case worker determines that there is a good cause for the absence, the missed hours can be reported as worked. This results in hours that were not worked being reported to ACF as worked. Insufficient monitoring to verify that hours were reported correctly. Some states do not have a monitoring process in place to perform timely reviews to verify that hours were reported correctly. Without sufficient monitoring, states cannot be assured that local staff are reporting hours that are supportable and complete. 
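The kinds of case-level monitoring checks the report says some states lack can be sketched as a simple review routine. The record fields and flag wording below are hypothetical, not drawn from any state's actual system.

```python
# Hypothetical monitoring sketch: flag reported hours that lack support
# or that merely mirror the recipient's scheduled hours.

def flag_case(case):
    """Return review flags for one reported case (fields are assumed)."""
    flags = []
    if case.get("documented_hours") is None:
        flags.append("no supporting documentation on file")
    elif case["reported_hours"] > case["documented_hours"]:
        flags.append("reported hours exceed documented hours")
    if case["reported_hours"] == case.get("scheduled_hours"):
        flags.append("reported hours identical to scheduled hours")
    return flags

case = {"reported_hours": 30, "scheduled_hours": 30, "documented_hours": 24}
print(flag_case(case))
```

A match between reported and scheduled hours is not itself proof of misreporting; in this sketch it is only a trigger for closer review by monitoring staff.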
Table 3 shows the number of states with the internal control weaknesses described above for the states in our review. (See app. III for states included in the table). Six of the states in our review have at least one of the internal control weaknesses shown in table 3, and 3 of these states have at least two internal control weaknesses. Two states that did not have any of the internal control weaknesses have issued appropriate guidance and begun monitoring as part of corrective action plans developed in response to state audit findings on data problems. The states we reviewed may have internal control weaknesses over the collection and reporting of work participation data that our review was not designed to assess. For example, a state may have issued appropriate guidance and established a monitoring process; however, the state’s staff may not follow the guidance or conduct monitoring according to the required process. While some of the states we reviewed lacked internal controls, other states have implemented systematic practices to help ensure that reported data are in accordance with HHS guidance. Documentation requirements. Some states we reviewed had guidance outlining the specific documentation needed to verify actual hours for each work activity and specified when the documentation must be obtained and the hours recorded in the state’s database. Monthly audits. Officials in some states we reviewed told us they conduct monthly audits of all cases sampled for reporting to ACF to verify that hours reported were actually worked. If there is not adequate support showing that hours reported were actually worked, the data are not reported to ACF, according to state officials. State officials cited challenges to obtaining support for hours of participation in unsubsidized employment. For some states, the standard process for obtaining hours of unsubsidized employment occurs every 6 months when local staff reverify a recipient’s income and benefit eligibility. 
Income is typically verified with a recent pay stub, which is then used to project the hours the recipient will be working for the next 6 months. Some officials told us that trying to obtain documentation for actual hours of unsubsidized employment from recipients or employers monthly would be onerous for case workers and recipients. Officials said they feared that contacting employers frequently to verify a recipient’s employment could jeopardize the recipient’s job. In states requiring monthly documentation, such as pay stubs, for hours of work reported to HHS, state officials told us they were likely underreporting hours because of the difficulty local staff face in obtaining the required documentation. A new effort on the part of ACF may provide states with additional options for obtaining information on hours recipients spend in unsubsidized employment. ACF recently began an initiative using the National Directory of New Hires (NDNH) to help states identify whether or not recipients are eligible for TANF benefits. If a state chooses to participate, HHS will conduct data matches comparing NDNH employee data against the state’s list of TANF recipients. If the data matches identify recipients who are working and are still eligible for TANF, the data may provide states with a starting point for obtaining more complete work participation data, according to an ACF official. Because the NDNH does not contain hours worked, states would need to contact employers or recipients to obtain information on the actual hours the recipient worked, according to an ACF official. HHS has provided minimal oversight of how states define work activities. Further, HHS has limited guidance for states on reporting the appropriate hours of work activities. HHS does not have a sufficient mechanism to identify data not in accordance with ACF guidance. Under PRWORA, HHS has authority to regulate states’ definitions of work activities. 
However, HHS has chosen not to issue regulations for this purpose in order to promote the flexibility PRWORA provided states and in response to calls from states for as much flexibility as possible in designing their TANF programs, according to HHS officials. The current TANF regulations only repeat the 12 categories of work activities that are included in PRWORA and do not further specify activities that can and cannot be included under the 12 categories. Further, the current TANF regulations do not state that HHS will review states’ definitions of work activities to determine if the definitions are appropriate. Accordingly, HHS officials said they are unable to direct states to change their definitions of work activities when they believe the states’ definitions are inappropriate, as has occurred in the past. Although HHS has provided states with general guidance on reporting actual hours of work participation, the guidance lacks specific criteria for determining the appropriate hours to report. The requirement for reporting actual hours of work participation is not specified in federal regulations but is instead described in other documents. The guidance on the type of hours to report includes the following:

TANF regulations: Quarterly reports containing work participation data must be “complete and accurate.”

HHS responses to comments to proposed regulations: Hours for which the recipient was paid may be reported as hours worked, such as paid holidays. States must report actual hours of participation for each work activity. Reporting required (or scheduled) hours of participation is inconsistent with the “complete and accurate” standard and is not acceptable.

Detailed reporting instructions for the TANF data report (reporting instructions): States are to report actual hours of participation. It is not acceptable to report scheduled hours of participation. States should validate actual participation in each work activity.
While HHS guidance calls for states to report actual hours, ACF officials acknowledged it may be difficult or impossible to obtain information on actual hours for some activities. For example, the ACF officials cited problems states have in obtaining hours of actual participation for recipients enrolled in vocational education courses, community colleges, and universities for which attendance is not taken. ACF uses two mechanisms to identify problems with work participation data submitted by states: computer edit checks and reviews of single audit findings. However, neither mechanism provides ACF with reasonable assurance that data reported are in accordance with ACF guidance. Computer edit checks. ACF performs edit checks of the data submitted quarterly by states. The edit checks identify outliers, such as if a recipient is reported to have participated in 80 hours of work activities for 1 week. The edit checks also identify inconsistencies between data elements, such as if a recipient is reported as having earnings but is also reported as having zero hours of work. ACF notifies states of any problems identified by the edit checks so that states can correct and resubmit the data. The edit checks can help improve the data; however, they do not address the issue of verifying whether hours reported are actual hours of participation. State single audits. According to ACF officials, HHS’s primary vehicle for identifying problems with the states’ data is states’ single audit reports. Findings from the state single audits go through a review process at HHS to determine whether penalties are warranted. HHS has used findings from the single audit to take action against a state for reporting poor quality work participation data. However, ACF officials acknowledged that the work participation data reported by states may have problems that the single audits may not reveal.
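The two kinds of edit checks attributed to ACF above (outlier detection and cross-field consistency) might look like the following sketch. The 60-hour cap and the field names are assumptions, not ACF's actual rules.

```python
# Illustrative edit checks of the two kinds described above: outlier
# detection and cross-field consistency. Threshold and field names are
# assumptions.

WEEKLY_HOURS_CAP = 60

def edit_check(record):
    """Return a list of edit-check errors for one weekly record."""
    errors = []
    if record["weekly_hours"] > WEEKLY_HOURS_CAP:
        errors.append("weekly hours exceed plausible maximum")
    if record["earnings"] > 0 and record["weekly_hours"] == 0:
        errors.append("earnings reported with zero hours of work")
    return errors

print(edit_check({"weekly_hours": 80, "earnings": 0}))
print(edit_check({"weekly_hours": 0, "earnings": 500}))
```

As the report notes, checks of this kind can catch anomalies in the submitted data but cannot verify that the reported hours were actually worked.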
Our interviews with auditors in the 10 states we reviewed indicate that the level of attention given to work participation data varies greatly among the states. State auditors from 5 states (California, Georgia, Maryland, Nevada, and Ohio) told us that their most recent single audits covering the TANF program did not review the data states report to HHS on hours of participation in work activities. Out of the 5 states in which state auditors reported that the most recent single audits did test hours of work participation: Three states (Kansas, Washington, and Wisconsin) reported that the audits did not look for support of actual hours but instead compared hours shown in the state’s welfare database with the hours reported to HHS for a sample of cases. State auditors for the 3 states did not report any findings on work participation data from these reviews. Two states (New York and Pennsylvania) looked for supporting documentation to verify that hours reported to ACF were hours of actual participation for a sample of cases. In New York, the audit had no finding regarding work participation hours. The audit for Pennsylvania found that some reported hours had no supporting documentation to verify that they were actually worked. According to state officials, Pennsylvania has implemented corrective actions in response to the single audit findings. Our review of the 10 states’ internal controls identified weaknesses both in states where state auditors told us that the most recent single audit did not test the data reported to HHS on hours of participation in work activities and in states that did. ACF officials acknowledged that because of the broad nature of the single audits, the quality and focus of the audits vary from state to state. 
The single audit, covering hundreds of federal programs, is designed as a tool that raises relevant questions about states’ internal controls and compliance with laws and regulations governing federal awards but is not intended to answer all questions. State auditors responsible for conducting the single audits are provided with federal guidance issued by OMB—known as the Compliance Supplement. Currently the Compliance Supplement contains reference to work participation data only as a key line item for auditors to look at in the TANF data report. According to ACF officials, the fiscal year 2005 Compliance Supplement for the single audit will contain more guidance to help auditors identify whether work participation data are reported in accordance with HHS guidance. The addition to the Compliance Supplement will suggest that state auditors test a sample of cases to determine the completeness and accuracy of the data, including the proper documentation, used in calculating the work participation rate. By listing 12 categories of permissible work activities, Congress placed limits on the type of activities that states could count toward meeting federal work participation requirements. HHS regulations only restate the 12 categories of work activities and do not further specify the types of activities that can and cannot count toward meeting the federal work requirements, nor do they provide for HHS’s oversight of states’ definitions of the 12 categories. HHS has taken the position that with the current limited regulations, it will not place restrictions on the activities states can count toward meeting TANF work requirements. As a result, states have been able to include any activity in their definitions of the 12 categories of work. Several states have broadly defined 1 or more of the categories to include activities, such as substance abuse treatment, that other states provide but do not consider countable toward meeting the federal work participation requirement. 
States also differ in the internal controls over the data they report to HHS. For example, some states only report hours that have been verified as having been actually worked, while others report hours without verification. Because of the differences among states in the activities that they count in calculating the work participation rate and in the internal controls over the data used in the calculation, states are being measured by different standards, and the work participation rates cannot be used to compare the performance of states. Further, a high work participation rate does not necessarily indicate more engagement of TANF recipients in work activities than a lower rate. The current caseload reduction credit has greatly reduced the required level of work participation for most states. However, if TANF reauthorization results in lowering the caseload reduction credit and raising the work participation requirements, more states could be penalized, and states with strict definitions and effective internal controls may be the most susceptible to penalties. If the TANF work participation rate is to be an effective and equitable measure for assessing states' performance and penalizing states, HHS needs to provide more oversight of states' definitions of federal work activities and of internal controls over the data to help make the measure more consistent across states. We acknowledge that efforts to obtain more valid, accurate, and consistent information for this performance measure may have unintended consequences. For example, it may motivate states to use separate state programs or make other choices about the design of their TANF programs. However, a measure that is used to assess penalties needs to be clear and consistent for all those potentially subject to penalty; otherwise, the measure can result in misleading information and inequitable penalty assessments. 
We recommend that HHS issue regulations that specify the types of activities that can and cannot be included under the 12 categories of work activities, provide for HHS oversight of states' definitions of activities under the 12 categories, and set forth criteria for counting actual hours of activity and whether there are circumstances under which scheduled hours may be counted. We also recommend that HHS develop and implement a plan for working with states to improve internal controls over work participation data. This plan could make use of existing resources and include steps such as working through its regional offices to identify cost-effective internal controls being used by states; using regional offices and existing sponsored conferences to share information with states on these internal controls and to emphasize the importance of internal controls; and obtaining information from states about their experiences using the National Directory of New Hires to determine if it has potential for helping states collect more complete work participation data and if there are any useful practices to be shared with other states. HHS provided written comments on a draft of this report; these comments appear in appendix V. HHS said that the report provides it with new and useful information. HHS said it would consider making the recommended revisions in its regulations after TANF reauthorization and is exploring options for implementing the recommendation on internal controls. HHS also provided technical comments that we incorporated as appropriate. Concerning our recommendation that HHS issue regulations to provide oversight of states' definitions and more guidance on counting hours of work activities, HHS said that it will consider this recommendation when it develops the proposed rule after Congress enacts legislation to reauthorize the TANF program. We agree that addressing this recommendation during rule making after TANF reauthorization is appropriate, if TANF is reauthorized in the near future. 
However, TANF reauthorization has already been delayed for 3 years, and if it is delayed much longer, HHS should take action to revise TANF regulations without waiting for reauthorization. Concerning our recommendation that HHS develop and implement a plan for working with states to improve internal controls over the work participation data, HHS said it recognized that more can be done to ensure increased consistency in the accuracy of the work participation data. HHS also stated that ACF is exploring options to increase oversight and provide technical assistance to states using its currently limited resources. Further, HHS noted that federal staff for TANF had been reduced by 75 percent several years ago. We added a statement about this staff reduction to the background section of the report. HHS expressed concern that the draft report did not sufficiently recognize the flexibility that Congress intended for the TANF program, and it stated that Congress did not intend that there be a consistent measure of work participation across states or that HHS make state-by-state comparisons for penalty purposes. We believe that the report does recognize the flexibility Congress provided to states. Also, we believe that the fact that Congress gave states the flexibility to design their TANF programs does not indicate that Congress did not want a meaningful measure to determine if states are meeting TANF requirements. While states have flexibility in determining what policies they will use to achieve TANF goals and requirements, the measure used to assess their performance should be defined the same way from state to state; otherwise, the rates produced by the measure cannot provide meaningful and understandable information for national policy makers and for assessing financial penalties. 
Further, although the use of waivers and separate state programs contributes to differences in which families are included in the work participation rate, HHS has made efforts, through its annual reporting, to ensure transparency about the rules governing these mechanisms and which states are using them. The lack of oversight of states' definitions of categories of work activities results in inconsistencies in performance measurement, as discussed in this report, that are not transparent. HHS noted some imprecision in the draft report's description of the work participation rate calculation. In response, we made revisions to the report. HHS also took issue with our discussion of how a state's work participation rate changed after its waiver expired. We continue to believe that this example of how a state's rate changed by over 50 percentage points 1 month after the waiver expired is a useful illustration of how changes in definitions can affect work participation rates. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of the Department of Health and Human Services, relevant congressional committees, and others who are interested. Copies will be made available to others upon request, and this report will also be available on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (415) 904-2272. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Additional GAO contacts and acknowledgments are listed in appendix VI. 
The Temporary Assistance for Needy Families (TANF) work participation requirement is composed of (1) a requirement for a minimum number of hours recipients must participate in order to be counted as engaged in work activities and (2) a requirement for the percentage of TANF families with an adult (or minor head of household) a state must have engaged in work activities. The Department of Health and Human Services' Administration for Children and Families (ACF) uses a formula specified in the Personal Responsibility and Work Opportunity Reconciliation Act for calculating whether states are meeting the work participation requirement. The minimum number of hours TANF recipients must participate, on average, per week to be counted as engaged in work is shown in table 4. Different base percentages were established for all families and for two-parent families. The required percentages rose over time until they reached their current levels shown in table 5. For each percentage point that a state's welfare caseload declined from its 1995 level, the caseload reduction credit reduces the base percentage of TANF families who must be engaged in work in the state. For example, if a state's welfare caseload declined 40 percent since 1995, then the all-family work participation rate that it must meet is 10 percent and the two-parent family work participation rate that it must meet is 50 percent. Because of significant declines in welfare caseloads that have occurred in most states since 1995, 33 of the 50 states were required to meet an all-family rate of 10 percent or less in fiscal year 2003. Each quarter, states are required to report to ACF monthly data on their TANF cases, including the number of hours each adult recipient spent in countable work activities. States have the option of reporting to ACF on all their TANF cases (the universe) or on a scientifically drawn sample of TANF cases. 
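The caseload reduction credit arithmetic described above can be sketched as follows. The base rates of 50 percent (all families) and 90 percent (two-parent families) are an assumption, inferred from the 40-percent example in the text rather than stated in it:

```python
def required_participation_rate(base_rate_pct, caseload_decline_pct):
    """Apply the caseload reduction credit: the base participation rate
    drops one percentage point for each percentage point of caseload
    decline since 1995 (it cannot fall below zero)."""
    return max(base_rate_pct - caseload_decline_pct, 0)

# Assumed statutory base rates, consistent with the example in the text.
ALL_FAMILY_BASE = 50
TWO_PARENT_BASE = 90

# A state whose welfare caseload declined 40 percent since 1995:
assert required_participation_rate(ALL_FAMILY_BASE, 40) == 10
assert required_participation_rate(TWO_PARENT_BASE, 40) == 50
```

This also shows why 33 of the 50 states faced an all-family requirement of 10 percent or less in fiscal year 2003: any state whose caseload declined 40 percent or more since 1995 had its 50-percent base reduced to 10 percent or below.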
Using the data reported by states, ACF calculates an annual work participation rate for each state. A state's annual work participation rate is based on the state's average monthly rate for the year. The formula for the all-family rate is shown in figure 1. Child-only TANF cases are not included in the calculation. States have the option of disregarding from the calculation of the all-family work participation rate families with a single custodial parent and a child under age one. Other families disregarded in the calculation of the all-family rate include families that are part of an ongoing research evaluation approved under Section 1115 of the Social Security Act; families that are disregarded under an approved welfare reform waiver that exempts the family; and families participating in a tribal family assistance plan or a Tribal work program (unless the state chooses to include the families in the calculation). The two-parent family rate is calculated the same way as the all-family rate, except that the calculation only includes two-parent families. Two-parent families with a disabled parent are not used in calculating the two-parent rate. As discussed in this report, our review covering 10 states found that there were differences among states in the activities counted in the rates and, in some cases, weaknesses in internal controls over the data used to calculate the rates. Therefore, these rates may not reliably reflect work participation and should not be used to make comparisons between states. The following staff members made major contributions to the report: Gale Harris (Assistant Director), Kathy Peyman (Analyst-in-Charge), Carolyn Blocker, Amanda Miller, Cady S. Panetta, Tovah Rom, Dan Schwimer, and Shana Wallace. Welfare Reform: Rural TANF Programs Have Developed Many Strategies to Address Rural Challenges. GAO-04-921. Washington, D.C.: Sept. 10, 2004. 
Supports For Low-Income Families: States Serve a Broad Range of Families through a Complex and Changing System. GAO-04-256. Washington, D.C.: Jan. 26, 2004. Welfare Reform: With TANF Flexibility, States Vary in How They Implement Work Requirements and Time Limits. GAO-02-770. Washington, D.C.: July 5, 2002. Welfare Reform: Federal Oversight of State and Local Contracting Can Be Strengthened. GAO-02-661. Washington, D.C.: June 11, 2002. Welfare Reform: States Are Using TANF Flexibility to Adapt Work Requirements and Time Limits to Meet State and Local Needs. GAO-02-501T. Washington, D.C.: Mar. 7, 2002. Welfare Reform: More Coordinated Federal Effort Could Help States and Localities Move TANF Recipients with Impairments toward Employment. GAO-02-37. Washington, D.C.: Oct. 31, 2001. Welfare Reform: Progress in Meeting Work-Focused TANF Goals. GAO-01-522T. Washington, D.C.: Mar. 15, 2001. Welfare Reform: Moving Hard-to-Employ Recipients Into the Workforce. GAO-01-368. Washington, D.C.: Mar. 15, 2001. Welfare Reform: Data Available to Assess TANF's Progress. GAO-01-298. Washington, D.C.: Feb. 28, 2001. Single Audit: Update of the Implementation of the Single Audit Act Amendments of 1996. GAO/AIMD-00-293. Washington, D.C.: Sept. 29, 2000. Welfare Reform: Work-Site-Based Activities Can Play an Important Role in TANF Programs. GAO/HEHS-00-122. Washington, D.C.: July 28, 2000. Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: Nov. 1999. Performance Plans: Selected Approaches for Verification and Validation of Agency Performance Information. GAO/GGD-99-139. Washington, D.C.: July 30, 1999. Block Grants: Issues in Designing Accountability Provisions. GAO/AIMD-95-226. Washington, D.C.: Sept. 1, 1995. Welfare to Work: JOBS Participation Rate Data Unreliable for Assessing States’ Performance. GAO/HRD-93-73. Washington, D.C.: May 5, 1993.
The debate over reauthorization of the Temporary Assistance for Needy Families (TANF) block grant has focused on work requirements and brought attention to the measure of TANF work participation. The measure is used to assess states' performance and determine whether a state is subject to penalty for not meeting TANF work requirements. The 2003 work participation rates ranged from 9 to 88 percent for the 50 states based on data they submit to the U.S. Department of Health and Human Services (HHS). To help Congress understand these rates, GAO looked at (1) how selected states are defining the categories of work activities, (2) whether selected states have implemented internal controls over the work participation data, and (3) what guidance and oversight HHS has provided states. Differences in how states define the 12 categories of work that count toward meeting TANF work participation requirements have resulted in some states counting activities that other states do not count and, therefore, in an inconsistent measurement of work participation across states. For example, 5 of the 10 states we reviewed considered caring for a disabled household or family member to count toward the federal work participation requirement, while 5 did not consider hours spent in this activity to be countable. We also found that some states made significant changes in their definitions of the categories of work. As a result, the work participation rates for these states cannot be compared from year to year. Some of the states in our review have implemented internal controls to help report work participation hours in accordance with HHS guidance, while other states lack such internal controls. Some states have not issued guidance on how to verify that reported hours were actually worked, nor do they monitor data reported by their staff to help ensure that hours are reported correctly. In contrast, a few states have systematic approaches for verifying that hours reported were worked. 
HHS has provided limited oversight and guidance to states on appropriately defining work activities and reporting hours of work participation. According to HHS officials, HHS has the authority to regulate states' definitions of work activities. However, to promote state flexibility, HHS chose not to issue regulations for this purpose. Further, HHS's guidance lacks specific criteria for determining the appropriate hours to report. Given that HHS has not exercised oversight of states' definitions and internal controls, states are making different decisions about what to measure. Therefore, there is no standard basis for interpreting states' rates, and the rates cannot effectively be used to assess and compare states' performance.
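The annual rate calculation described in appendix I (a state's average monthly rate for the year) can be sketched as follows. This is a minimal illustration, assuming each monthly rate is simply the number of families counted as engaged in work divided by the number of families included in the calculation after disregards; the figures used are hypothetical:

```python
def annual_work_participation_rate(monthly_engaged, monthly_included):
    """Annual rate as the average of monthly rates, where each monthly
    rate is families counted as engaged in work divided by families
    included in the calculation (child-only cases and other disregards
    are assumed to be excluded already). Returns a percentage."""
    monthly_rates = [engaged / included
                     for engaged, included in zip(monthly_engaged, monthly_included)]
    return 100 * sum(monthly_rates) / len(monthly_rates)

# Hypothetical state: 400 of 1,000 included families engaged each month.
engaged = [400] * 12
included = [1000] * 12
rate = annual_work_participation_rate(engaged, included)
assert abs(rate - 40.0) < 1e-9
```

Because each state decides which activities and which hours feed the numerator, two states with identical caseloads can produce very different rates from this same formula, which is the comparability problem the report describes.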
We have found that other countries are experiencing challenges in managing their human capital, and their experiences may prove valuable to federal agencies in the United States. For example, they are using their performance management systems to connect employee performance with organizational success to help foster a results-oriented culture. They are also implementing succession planning and management initiatives that are designed to protect and enhance organizational capacity. Collectively, these agencies’ initiatives demonstrated the following practices. Receive active support of top leadership. Top leadership actively participates in, regularly uses, and ensures the needed financial and staff resources for key succession planning and management initiatives. New Zealand’s State Services Commissioner, whose wide-ranging duties include the appointment and review of public service chief executives, formulated a new governmentwide senior leadership and management development strategy. Link to strategic planning. To focus on both current and future needs and to provide leaders with a broader perspective, the Royal Canadian Mounted Police’s succession planning and management initiative figures prominently in the agency’s multiyear human capital plan and provides top leaders with an agencywide perspective when making decisions. Identify talent from multiple organizational levels, early in their careers, or with critical skills. For example, the United Kingdom’s Fast Stream program targets high-potential individuals as well as recent college graduates, and aims to provide individuals with experiences and training linked to strengthening specific competencies required for admission to the Senior Civil Service. Emphasize developmental assignments in addition to formal training. Initiatives emphasize developmental assignments in addition to formal training to strengthen high-potential employees’ skills and broaden their experiences. 
For example, Canada’s Accelerated Executive Development Program temporarily assigns executives to work in unfamiliar roles or subject areas, and in different agencies. Address specific human capital challenges, such as diversity, leadership capacity, and retention. For example, the United Kingdom created a centralized development program that targets minorities with the potential to join the Senior Civil Service. Facilitate broader transformation efforts. The United Kingdom launched a wide-ranging reform program known as Modernising Government, which focused on improving the quality, coordination, and accessibility of the services government offered to its citizens, and restructured the content of its leadership and management development programs to reflect this new emphasis on service delivery. In Australia, to find individuals to champion recent changes in how it delivers services and interacts with stakeholders, the Family Court of Australia identifies and prepares future leaders who will have the skills and experiences to help the organization successfully adapt to agency transformation. We at GAO have also undertaken a variety of succession planning and management initiatives consistent with these leading practices to strengthen our own internal efforts. For example, we have constructed a detailed workforce planning model and analyzed it to ensure that we hire, retain, and contract for the appropriate number of staff with the needed competencies. In addition, we have developed certain “people measures” to assess our performance in human capital management, including measures for the attraction and retention of staff, staff utilization and development, and organizational leadership. Effective succession planning and management programs have the support and commitment of their organizations’ top leadership. Our past work has shown that demonstrated commitment of top leaders is perhaps the single most important element of successful management reform. 
We have reported that to demonstrate its support of succession planning and management efforts, top leadership actively participates in and regularly uses these initiatives to develop and promote individuals, and ensures that these programs receive sufficient resources. As a next step, federal agencies are to hold their senior executives accountable to address human capital issues, such as succession. We found that VHA has assigned responsibility for succession planning and management initiatives to a dedicated subcommittee, while DOL, the Census Bureau, and EPA have councils or boards that are responsible for human capital more broadly, including succession efforts. VHA has established a subcommittee and high-level positions that are directly responsible for succession planning and management. The Succession and Workforce Development Management Subcommittee reports to the Human Resources Committee of the National Leadership Board, as illustrated in figure 1. VHA’s Chief Executive Officer—the Department of Veterans Affairs’ Undersecretary for Health—chairs the board, which consists of VISN directors, chief officers, and heads of offices. In addition, VHA has established (1) a workforce planner position to help coordinate and manage VHA workforce planning activities, and (2) a nurse workforce planner position to help respond to its nursing shortage and consult with the workforce planner on certain issues, such as region-specific recruiting challenges and training. Also, this year, VHA seeks to establish a director of succession management, a senior executive-level position. According to a VHA human capital official, the new director’s duties will include overseeing national coordination of VHA’s succession activities. At DOL, the Management Review Board, chaired by the Assistant Secretary for Administration and Management, is responsible for a variety of business issues, including human capital. 
The board is composed of top senior leaders from each of the agencies within DOL. According to DOL, the board’s senior leaders helped garner support for departmentwide succession planning and management efforts. For example, the board recommended funding the development of departmentwide competencies required for mission-critical occupations. The Census Bureau’s Human Capital Management Council, consisting of representatives from each of the Census Bureau’s directorates, reports to the Deputy Director of Census. According to Census Bureau human resource officials, the Council plays a key role in involving and advising top leadership on human capital issues. For example, the Council developed and presented a succession management plan that recommended, among other things, piloting job rotations and assignments to address mission-critical priorities and resources. In addition, according to a Census Bureau human resource official, the Council assesses various succession-related issues, such as recruiting and competency development for the Bureau’s senior management. In turn, senior management recently tasked a Council representative to provide monthly updates on succession-related issues. EPA’s Human Resources Council, composed of senior leaders who are to advise the EPA Administrator on human capital issues, released EPA’s “Strategy for Human Capital,” a planning document outlining EPA’s long-term human capital goals. The strategy names the offices responsible for leading each of its goals. For example, the Office of Human Resources, the Executive Resources Board, and human resources officers are to implement a strategy to “Ensure the Continuity of Leadership, Critical Expertise, and Agency Values through Succession Planning and Management/Executive Development.” According to agency human capital officials, EPA’s assistant and regional administrators and their senior managers are responsible for executing succession planning initiatives. 
As a next step, federal agencies are to hold their senior executives accountable for human capital issues, thus explicitly aligning individual performance expectations with organizational goals. VHA and the Census Bureau specifically mention succession planning and management in their executives’ performance plans. DOL and EPA senior executive performance expectations also include aspects of succession planning and management as part of more general human capital management responsibilities. At VHA, in their FY 2005 performance plans, chief officers and program officials are to assure that the regional strategic plans address workforce development, including a succession plan that projects workforce needs. A VHA official also stated that VHA is considering including specific succession-related performance measures, such as turnover rates for selected priority occupations, in applicable executive performance plans. The Census Bureau’s FY 2005 executive performance plans state that each senior executive “effectively develops and executes plans to accomplish strategic goals and organizational objectives, setting clear priorities and acquiring, organizing, and leveraging available resources (human, financial, budget, etc.,) and succession planning to ensure timely delivery of high quality services and products in compliance with applicable laws, regulations and policies.” Senior executives are also to demonstrate a planned approach to workforce development for managers and staff. At DOL, executives are to ensure that “staff are appropriately selected, utilized, appraised, and developed…” Executives are also to develop the talents of the staff and qualified candidates for positions in the organization, according to DOL’s latest senior executive performance management plan, revised in 2004. EPA’s FY 2004 performance plan for senior executives states that executives should identify current and projected skill gaps and develop strategies for addressing these gaps. 
According to an EPA executive resource policy official, the FY 2005 senior executive performance plan is under revision, but the expectations concerning skill gaps will not change. We have also reported that to demonstrate its support of succession planning and management, top leadership ensures that these programs receive sufficient financial and staff resources and are maintained over time. DOL uses a centrally managed “crosscut fund” to supplement its succession planning and management initiatives. Component agencies within DOL submit project proposals, which DOL evaluates against established criteria, such as supporting initiatives in the department’s Human Capital Strategic Plan. According to DOL, from FY 2003-2004, the agency allocated about $6.1 million for 18 human capital projects, such as competency assessments for mission-critical occupations, and the Management Development Program, one of DOL’s major succession development programs. The Census Bureau, EPA, and VHA allocate money to various programs, including succession efforts, intended to contribute to human capital goals, but detailed funding information was not readily available from the agencies. Leading organizations use succession planning and management as a strategic planning tool that focuses on current and future needs and develops pools of high-potential staff in order to meet the organization’s mission over the long term. That is, succession planning and management is used to help the organization become what it needs to be, rather than simply to recreate the existing organization. We have previously reported on the importance of linking succession planning and management with the forward-looking process of strategic planning. 
Specifically, discussing how workforce knowledge, skills, and abilities will contribute to the achievement of strategic and annual performance goals, how significant gaps are identified, and what mitigating strategies are proposed (such as hiring and training) can show the connection between succession planning and strategic planning. All four agencies have begun to link their succession planning to their strategic goals. We previously reported that EPA’s human capital strategy lacked some key elements, including the linking of human capital objectives to strategic goals. Since then, EPA’s current strategic plan recognizes that human capital management spans its 5 strategic goals and identifies specific workforce knowledge, skills, and abilities to achieve each goal. For example, as illustrated in figure 2, to achieve its goal for “Clean Air and Global Climate Change,” EPA states that its workforce planning, hiring, and training activities will emphasize risk assessment, including environmental- risk modeling and monitoring, economic analysis, and standard setting, among other factors. Separately, the succession plan states that the agency faces a number of future challenges, such as global pollution, and identifies key drivers shaping the agency’s future work, such as science and technology advancements, budget constraints, administration priorities, agricultural practices, public expectations, and the media’s influences. To respond to these drivers, EPA states that its employees must have the capacity to build stronger working partnerships, increase on-site problem solving, and enhance internal and external communication practices. As a component of VA, VHA recognizes VA’s strategic objective to “recruit, develop and retain a competent, committed and diverse workforce that provides high quality service to veterans and their families” in its Workforce Succession Strategic Planning Guide. 
To achieve this objective, VHA identifies a number of strategic assumptions about the future of veterans’ health care. For example, it states that health care delivery will become more patient centered, that patients will be seen based on need instead of a predetermined schedule, and the use of in-home and interactive technology will increase, along with noninstitutional long-term care. Although VHA states that technological advances will improve access and quality of care for veterans, it does not anticipate significant impacts on the need for health care professionals over the next 5 years, and expects to continue to compete for scarce health care professionals in certain occupations. DOL states that to meet its strategic goal of ensuring a competitive 21st century workforce, it plans to identify skill gaps, assess training needs, and recruit new employees. For example, DOL plans to shift from a historical enforcement role to compliance assistance and consultation, requiring stronger skills in communication and analysis. DOL seeks to develop more skills in technology and project management as well as in strategic planning, quantitative analysis, and analytical thinking for a more “business-like” management approach. To attract and retain employees with such skills, DOL launched the MBA Fellows program in 2002, which it considers one of its major succession development programs. The 2-year developmental program includes rotational assignments, mentoring, and promotional opportunities for successful graduates. In FY 2004, DOL reported retaining 89 percent of its MBA Fellows after 2 years. Among the Census Bureau’s strategic goals is its unique requirement to conduct the Decennial Census. 
According to the agency strategic plan, the Bureau plans to reengineer the 2010 Census so that it “is cost-effective, provides more timely data, improves coverage accuracy, and reduces operational risk.” The agency will accomplish this by collecting information on a yearly basis, enhancing address databases, using local geographic information, and undertaking operational tests of these new sources and methods. In its human capital plan, the Bureau acknowledges that reengineering the 2010 Census requires new skills in project, contract, and financial management; advanced programming and technology; and statistics, mathematics, economics, quantitative analysis, marketing, demography, and geography. To help obtain these skills, the Bureau has established training programs and developed competency guides. For example, it has instituted a Project Management Master’s Certificate Program and an Information Technology Master’s Certificate Program. All program managers now are to receive project management training. Leading organizations use succession planning and management to identify the talent required to achieve their goals. We have also identified key principles for effective workforce planning, including: determining the critical skills and competencies that will be needed to achieve current and future programmatic results; developing strategies that are tailored to address gaps in number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies; and monitoring and evaluating the agency’s progress toward its human capital goals and the contribution that human capital results have made toward achieving programmatic results. VHA, EPA, and DOL have identified gaps in occupations or competencies in their mission-critical workforce to achieve their goals, have undertaken strategies to address these gaps, and plan to or are taking steps to monitor their progress. 
By doing so, they can make more informed planning decisions and help appropriately focus succession efforts. While the Census Bureau has identified and is recruiting for its mission-critical occupations, it could achieve similar benefits if it more closely monitors its mission-critical workforce as it plans for the 2010 Decennial Census. VHA has identified 13 occupations it deems national priorities for recruitment and retention, including registered nurses, physicians, and nuclear medicine technicians, among others. VHA uses a Web-based tool with a workforce strategic planning template to help project its needs in these mission-critical occupations. Each VISN completes a comprehensive and detailed regional workforce assessment that projects staffing needs for priority occupations for at least the next 5 years. These projections are based on anticipated resignations, retirements, other separations, and future mission needs. VHA’s workforce planner considers these data when projecting national staffing needs. For example, as illustrated in figure 2, VHA anticipates hiring 3,403 nurses in FY 2005 and 21,796 nurses from FY 2006 through FY 2011. This national projection includes, for example, the VISN 16 assessment that it will need from 220 to 238 nurses from FY 2005 to FY 2008. VHA also monitors and reports changes in its mission-critical workforce based on these data. For example, VHA reports that it increased the total number of nurses it had on board by 6.2 percent, or 2,184 nurses, from FY 1999 to FY 2004. VHA states that the succession programs implemented since 1999 have helped it to meet these mission-critical needs and, therefore, it does not plan to implement additional programs. We previously recommended that EPA comprehensively assess its workforce needs. Subsequently, EPA identified 18 priority occupations, including physical scientists, biologists, chemists, and attorneys. 
EPA projects each occupation’s retirement, attrition, and accession rates based on historical averages. For example, EPA estimates that approximately 20 percent of the managers and supervisors in 10 of the 18 priority occupations will leave by 2008, mostly due to retirements. In addition, human capital officials stated that the agency’s strategy has focused on strengthening mission-critical competencies among its priority occupations. For example, EPA has identified 12 technical competencies, such as information management and sciences and biological sciences, and 12 cross-occupational competencies, such as teamwork and oral communication, that are essential for the agency to acquire, retain, or develop to accomplish its future mission. EPA plans to address emerging mission-critical competencies and gaps in priority occupations through recruitment and development. EPA also plans to update its 2004 strategic workforce planning effort on a cyclical basis to monitor progress in closing any gaps, but the agency did not indicate specific time frames for these updates. DOL has identified 27 mission-critical occupations, such as investigators, workforce development specialists, and mining engineers, as well as the skills needed for each occupation, which it specifies in competency models. For example, for criminal investigators, DOL identified skills such as external awareness and interpersonal communication in addition to the knowledge and conduct of investigations. DOL has also inventoried the skills of its on-board mission-critical workers through the department’s mission-critical Skills Assessment Initiative. DOL reports that its component agencies are developing action plans to reduce or close skill gaps, which DOL is incorporating into its human capital planning and reporting process. In addition, DOL has developed performance measures that are designed to help it gauge its organizational capacity, as illustrated in figure 4. 
For example, for FY 2004 DOL reported a 5 percent turnover rate of its mission-critical employees during their first year, meeting its goal of less than 10 percent. Likewise, DOL reported a 19.5 percent turnover rate for these employees during their first 3 years, meeting its goal of less than 25 percent. In addition, DOL reported a 95.4 percent FTE utilization rate (the percentage of authorized full-time equivalent positions that were filled) for FY 2004, compared with a 98 percent goal. The Census Bureau has identified its mission-critical occupations and is recruiting for statisticians, mathematical statisticians, information technology specialists, cartographers, and geographers on its employment Web site. According to an agency human capital official, the Census Bureau does not monitor or assess gaps in numbers by mission-critical occupation, but focuses on “building infrastructure” by recruiting and developing competencies. The same official stated that the Bureau delegates decisions to line managers to fill vacancies, and thus there is no need to assess workers by mission-critical categories. To assist these managers, the Bureau reports that an electronic hiring system allows them to identify competencies for each vacancy, and that line managers engage in a continuing dialogue with senior managers, the Hiring Coordinators Group, and the Human Capital Management Council to address hiring needs. Nevertheless, while line managers are appropriately concerned with filling vacancies, as noted earlier, the Bureau has also acknowledged that reengineering the 2010 Decennial Census requires new competencies. By not monitoring its mission-critical occupations more closely and at a higher level, the Census Bureau may not know whether it is acquiring the skills it needs to be prepared to conduct the 2010 Decennial Census as efficiently or effectively as possible. Effective training and development programs can enhance the federal government’s ability to achieve results. 
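The turnover and FTE utilization measures reported above are simple ratios of headcounts. The sketch below uses hypothetical figures, not DOL's underlying data, to show how such capacity measures are computed and compared against their goals:

```python
# Hypothetical illustration of the workforce-capacity measures described
# above; the headcounts below are invented for the example, not DOL data.

def turnover_rate(separations, cohort_size):
    """Percentage of a hiring cohort that left within the tracking window."""
    return separations / cohort_size * 100

def fte_utilization(filled_fte, authorized_fte):
    """Percentage of authorized full-time equivalent positions that are filled."""
    return filled_fte / authorized_fte * 100

# A cohort of 400 mission-critical hires with 20 first-year departures
# yields a 5 percent turnover rate, meeting a less-than-10-percent goal.
first_year = turnover_rate(20, 400)

# 954 filled positions against 1,000 authorized yields 95.4 percent
# utilization, short of a 98 percent goal.
utilization = fte_utilization(954, 1000)

print(f"First-year turnover: {first_year:.1f}% (goal: under 10%)")
print(f"FTE utilization: {utilization:.1f}% (goal: 98%)")
```

Comparing each computed rate against its stated goal, as the printed output does, is what lets an agency report a measure as met or unmet.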
Further, effective succession planning and management efforts identify talent from multiple organizational levels, early in their careers, or with critical skills as well as provide both formal training and opportunities for rotational, developmental, or “stretch” assignments, to strengthen high-potential employees’ skills and to broaden their experience and perspective. While all four agencies offer core succession training and development programs, they each can seek opportunities to achieve efficiencies through more coordination and sharing of these programs. In addition, establishing valid measures to better evaluate how these programs affect organizational capacity can give agency decision makers credible information to justify training and development programs’ value. All four agencies offer programs to train and develop their entry-, middle-, and senior-level employees. These programs provide opportunities for formal training, and all but one program offers rotational or developmental assignments. Table 1 provides a summary of core succession training and development programs by agency. At the senior level, all four agencies have succession training and development programs intended to enhance leadership skills, primarily through SES candidate development programs. For example, EPA’s SES Candidate Development Program—designed to prepare a cadre of leaders to fill future vacant executive positions in the agency and to maintain valuable institutional knowledge—requires candidates to complete an executive development plan and work with an SES mentor and executive coach to help define career goals and provide guidance. The program also requires participants to complete at least 80 hours of formal leadership development training, as well as complete a 4-month developmental assignment. DOL and VHA have similar programs in place. The Census Bureau, as a component of DOC, participates in DOC’s SES Candidate Development Program. 
The four agencies also have programs intended to develop the leadership and supervisory skills for middle-level managers. For example, VHA’s program named “VISN LEAD” provides an opportunity for high-potential employees in field locations to receive coaching and mentoring, create a personal development plan, and join with special VISN-wide project task teams, while retaining their current responsibilities. EPA’s Mid-level Development Programs, DOL’s Management Development Program, and DOC’s Executive Leadership Development Program—in which the Census Bureau participates—all offer similar opportunities. At the entry level, all agencies have programs intended to develop employees and provide them with the foundation for future leadership. For example, DOL’s MBA Fellows program requires participants to take a minimum of four rotational assignments and core training classes, complete a personal development plan, and work with a senior-level mentor, among other activities. Targeting recent MBA graduates, DOL established its program not only to address increased departmentwide needs for business and project-management skills, but also to create a cadre of future department leaders. EPA’s Intern Program and Rotational Program, VHA’s Facility LEAD Program, and DOC’s Aspiring Leaders Development Program, in which the Census Bureau participates, are similar in nature. According to agency human capital officials, other programs also contribute to their succession efforts. For example, the Census Bureau has established certificate programs in project management and leadership for all employees to develop and enhance these specific skills. The Bureau also has a mathematical statisticians program, which, according to the Deputy Director, provides career enhancement opportunities designed to help develop and retain employees in this critical occupation. 
Similarly, DOL has a Career Assistance Program that provides employees at all levels with career planning advice and other development assistance. In addition, the agencies use formal mentoring or coaching programs to help guide employees throughout their careers. As agencies implement their core succession training and development programs, they must plan and prepare for the possibility of significant and recurring constraints on their resources, given the current fiscal and budgetary environment. Recognizing this, leading agencies look for opportunities to coordinate and share their efforts and create synergies through benchmarking with others, achieving economies of scale, limiting duplication of efforts, and enhancing the effectiveness of programs, among other things. An example of such a coordinated and shared training effort is the recent announcement of a new partnership by the Office of Federal Procurement Policy, the Department of Defense, and the General Services Administration. The initiative is geared toward the civilian and defense acquisition workforces, and is intended to provide similar training and development opportunities for acquisition personnel across all three agencies with the goal of sharing best practices, among other things. OPM has begun to serve as a bridge for agencies seeking opportunities to coordinate their succession training and development programs as it shifts its role away from that of a rule maker and enforcer and toward that of a strategic partner in leading and supporting agencies’ human capital management. For example, OPM established a governmentwide Federal Candidate Development Program (Fed CDP). OPM expects the 14-month program to help agencies meet their SES succession planning goals and contribute to the government’s efforts to create a high-quality SES leadership corps. Participating agencies may select, without further competition, people who have successfully completed the Fed CDP training program. 
In addition, we have testified that approaches to interagency collaboration, such as the CHCO Council, have emerged as an important central leadership strategy and that agency collaboration can serve to institutionalize many management policies governmentwide. The Leadership and Succession Planning Subcommittee of the CHCO Council is charged with reviewing leadership development, among other things, and is a possible mechanism to help agencies coordinate succession training and development programs. While some agencies’ human capital officials acknowledged the potential benefits of coordinating succession training and development programs with other agencies or departments, they all could do more to seek coordination and sharing opportunities. Cognizant human capital and training officials stated that they had not actively sought opportunities to coordinate core succession training and development programs. Although EPA plans to select one senior executive through the Fed CDP, human capital officials stated they had not extensively explored the idea of coordinating with other agencies for their core succession training and development. VHA human capital officials said they did not coordinate further because they have specialized skill needs. DOL and Census Bureau human capital managers also stated that they had not partnered with other outside agencies to coordinate their core succession training and development programs. By not actively seeking to coordinate and share core succession training and development programs, agencies may miss a potentially valuable opportunity to gain efficiency, which may be especially important in the current budget environment. Decision makers need credible information to justify training and development programs’ value. We have also reported that agencies need credible information to assess how their training and development programs affect organizational performance and enhance organizational capacity. 
We have observed in our guide for assessing strategic training and development that while not all training and development programs require, or are suitable for, higher levels of evaluation, establishing valid performance measures can ensure that agencies adequately address their development objectives. Moreover, our guide states that such measures should go beyond input and output data, and can include data on quality, costs, and time. We also recognize, however, that agencies need to scale their efforts depending on the program. Factors to consider when deciding on the appropriate level of evaluation include the estimated costs of training efforts, size of training audience, and program visibility, among other things. All four agencies are able to report on participation and cost related to their succession training and development programs. For example, 12 Census Bureau employees participated in DOC’s Aspiring Leaders Development Program in FY 2004, with an average cost of $6,267 per participant, according to the Bureau. In addition, the Census Bureau and DOL have also identified outcome measures related to the performance of some of their succession-related training and development programs. For example, the Census Bureau evaluates, among other things, the extent to which certified project managers are using the skills they have learned in the Project Management Master’s Certificate Program. Only DOL has identified measures intended to provide an understanding of core succession training and development programs’ effects on organizational capacity. Figure 5 illustrates a selection of these measures. For example, by considering the retention rate for MBA Fellows, DOL can make informed planning decisions about the potential availability of certain skill sets in the department as well as when to initiate a new program and how many students to include in it. 
DOL reported that in FY 2004, it retained 89 percent of its MBA Fellows after 2 years and has a goal of 75 percent after 3 years. DOL also tracks SES “bench strength,” a ratio of senior executives who are in training or have completed training to those projected to leave. DOL reported a 96 percent “bench strength” for its senior executives in FY 2004, exceeding its goal of 70 percent. The Census Bureau, VHA, and EPA could better demonstrate their programs’ value in providing future talent by identifying outcome-oriented measures and evaluating the extent to which these programs enhance their organizations’ capacity. Leading organizations recognize that diversity, the ways in which people in a workforce are similar to and different from one another, is an organizational strength and that succession planning is a leading diversity management practice. Given the retirement projections for the federal government that could create vacancies, agencies can use succession planning and management as a critical tool in their efforts to enhance diversity in their leadership positions. All of the selected agencies have recognized the importance of diversity to a successful workforce and use succession planning and management efforts to enhance their workforce diversity. VA requires all of its administrative staff offices to produce workforce and succession plans aligned with overall VA strategic planning. VHA states that although its overall workforce is fairly diverse, women and minorities are not well represented in leadership positions nor are they well represented in the pipeline to such positions. We have reported that VHA has integrated diversity planning into its succession efforts. As part of their regional succession plans, VISNs submit diversity information to VHA for national planning. 
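The "bench strength" measure described above is likewise a simple ratio. A minimal sketch, using hypothetical counts rather than DOL's actual figures, shows the computation:

```python
# Hypothetical illustration of an SES "bench strength" ratio: the succession
# pipeline (executives in training plus those who completed training)
# relative to projected departures. The counts are invented for the example.

def bench_strength(in_training, completed_training, projected_departures):
    """Percentage ratio of the SES succession pipeline to projected losses."""
    pipeline = in_training + completed_training
    return pipeline / projected_departures * 100

# 18 candidates in training plus 30 who completed training, against 50
# projected departures, gives a 96 percent bench strength, which would
# exceed a 70 percent goal.
ratio = bench_strength(18, 30, 50)
print(f"Bench strength: {ratio:.0f}% (goal: at least 70%)")
```

A ratio above 100 percent would indicate more candidates in the pipeline than executives projected to leave; values near or below the goal signal a need to enlarge future candidate classes.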
VHA then analyzes the diversity of its top-priority occupations, highlights underrepresentation of certain demographic groups in specific mission-critical occupations, and provides guidance to focus recruiting efforts to enhance diversity. For example, VHA states that White females and American Indian/Alaskan Native females are underrepresented in the nurse occupation and advises that recruitment efforts should focus on them. In addition, VHA tracks applicant diversity for the Executive Career Field Candidate Development Program, one of VHA’s core succession training and development programs, and reports that applicants to this program are drawn from a diverse pool. EPA has stated in its human capital plan that a diverse workforce makes the agency a more effective and healthy organization that is better able to relate to the American people and develop more creative and workable solutions. EPA credits its Intern Program, one of its core succession training and development programs, with attracting and retaining a diverse group of employees based on a 2003 assessment of the program. For example, the assessment found that EPA interns were more ethnically diverse than other comparable groups of hires. As part of its diversity action plan, EPA reports that it is expanding targeted recruitment initiatives to identify well-qualified candidates for mission-critical occupations. In addition, regional offices report succession-related efforts intended to enhance diversity initiatives, such as mentoring, leadership, and career development programs, and workforce demographic analyses, among other activities. DOL identifies a strategic initiative to enhance diversity in management and mission-critical occupations in its human capital plan. To help it achieve this initiative, DOL monitors and evaluates diversity information for its mission-critical occupations annually, and has identified “pockets of low participation” for certain minority groups, such as Hispanics. 
In addition, DOL has reported a higher percentage of women and Hispanics in its three core succession training and development programs than in its general workforce. The Census Bureau has established a diversity program office to manage the Bureau’s diversity efforts. Bureau officials stated that because of the highly specialized nature of the Bureau’s work, such as the use of statistics and mathematics, and the relatively small pool of people trained in these areas, it is difficult to enhance diversity in several critical occupation categories. As part of its combined diversity and recruiting initiative, the Bureau has established a specific recruiting team for mathematical statisticians, one of its highlighted mission-critical occupations. The Bureau also has various targeted recruiting efforts at academic institutions and community organizations with high Hispanic and other minority enrollment, and various Hispanic or Latino Chambers of Commerce. The Census Bureau, DOL, EPA, and VHA have all implemented succession planning and management efforts that collectively are intended to strengthen organizational capacity. Generally, these efforts receive top leadership support, link with strategic planning, identify critical skills gaps and strategies to fill them, offer training and development programs for high-potential employees, and enhance diversity. Nevertheless, given the nation’s large current budget deficit and long-range fiscal imbalance, Congress is likely to place increasing emphasis on agencies to exercise fiscal restraint. Given this environment, these agencies can look for opportunities to coordinate and share their succession training and development programs to achieve economies of scale, limit duplication of efforts, increase efficiency, and enhance the effectiveness of their programs. 
For example, all four agencies emphasize rotational or developmental assignments and formal training, and they may have opportunities to coordinate and share these assignments and training with each other or other federal agencies or departments. Agencies can also work with OPM and the CHCO Council to determine how they can better leverage other agencies’ succession training and development programs. Furthermore, it is increasingly important for agencies to evaluate their training and development programs to be able to demonstrate how these efforts enhance organizational capacity. While the Census Bureau, EPA, and VHA have some information on their succession training and development programs, such as participation and cost, they can take additional steps, such as enhanced evaluations, to justify these programs’ value. DOL has identified measures intended to provide an understanding of these programs’ effects on organizational capacity. Finally, although the Census Bureau has identified and is recruiting for its mission-critical occupations, it can better monitor its mission-critical workforce. By not monitoring more closely and at a higher level than line managers, the Bureau may not know how best to focus its succession planning efforts, and ultimately how well it is prepared for major tasks, such as the 2010 Decennial Census. To help agencies reinforce their succession planning and management efforts, and make well-informed planning decisions, we recommend a number of actions. The Secretary of Commerce should ensure that the Director of Census takes the following three actions: Strengthen the monitoring of its mission-critical workforce by identifying mission-critical workforce gaps, developing strategies to address gaps, evaluating progress toward closing gaps, and adjusting strategies accordingly. 
Seek appropriate opportunities to coordinate and share core succession training and development programs with other outside agencies to achieve economies of scale, limit duplication of efforts, benchmark with high-performing agencies, keep abreast of current practices, enhance efficiency, and increase the effectiveness of its programs. Evaluate core succession training and development programs to assess the extent to which programs contribute to enhancing organizational capacity. When deciding the appropriate analytical approach and level of evaluation, the Bureau should consider factors such as estimated costs of training efforts, size of training audience, and program visibility, among other things. The Administrator of EPA should take the following two actions: Seek appropriate opportunities to coordinate and share core succession training and development programs with other outside agencies to achieve economies of scale, limit duplication of efforts, benchmark with high-performing agencies, keep abreast of current practices, enhance efficiency, and increase the effectiveness of its programs. Evaluate core succession training and development programs to assess the extent to which programs contribute to enhancing organizational capacity. When deciding the appropriate analytical approach and level of evaluation, EPA should consider factors such as estimated costs of training efforts, size of training audience, and program visibility, among other things. The Secretary of Labor should take the following action: Seek appropriate opportunities to coordinate and share core succession training and development programs with other outside agencies to achieve economies of scale, limit duplication of efforts, benchmark with high-performing agencies, keep abreast of current practices, enhance efficiency, and increase the effectiveness of its programs. 
The Secretary of VA should take the following two actions: Seek appropriate opportunities to coordinate and share core succession training and development programs with other outside agencies to achieve economies of scale, limit duplication of efforts, benchmark with high-performing agencies, keep abreast of current practices, enhance efficiency, and increase the effectiveness of its programs. Evaluate core succession training and development programs to assess the extent to which programs contribute to enhancing organizational capacity. When deciding the appropriate analytical approach and level of evaluation, VHA should consider factors such as estimated costs of training efforts, size of training audience, and program visibility, among other things. We provided a draft of this report to the Secretaries of Commerce, Labor, and VA and the Administrator of EPA for their review and comment. In addition, we provided a draft of this report to the Acting Director of OPM and the CHCO Council’s Leadership and Succession Planning Subcommittee for their information. VA agreed with our findings and recommendations. In response to our recommendation to seek opportunities to coordinate and share core succession training and development programs, VA suggested that OPM could act as a “clearinghouse” by gathering and publishing curricula and other relevant training information from agencies, thus enabling agencies to identify existing training programs across the government. We present VA’s written comments in appendix II. DOC and the Census Bureau agreed with our findings and our recommendations to seek opportunities to coordinate core succession training and development programs and to evaluate the extent to which these programs enhance organizational capacity. In response to our recommendation to strengthen the monitoring of its mission-critical workforce, the Census Bureau stated that its existing approach is effective in meeting its needs. 
However, as we discussed earlier, the Census Bureau acknowledges that reengineering the 2010 Decennial Census requires new competencies. By not strengthening the monitoring of its mission-critical workforce, the Census Bureau is at increased risk that it will not have the skills it needs to conduct the 2010 Census as efficiently or effectively as possible. For example, a lesson from the 2000 Census was that while contracts for various projects supported decennial census operations, they did so in many instances at a higher cost than necessary because the Census Bureau did not have sufficient contracting and program staff with the training and experience to manage them. We present DOC’s and the Census Bureau’s written comments in appendix III. DOL did not take issue with our findings, stated that it will consider our recommendations, and provided technical comments, which we incorporated as appropriate. EPA did not comment on our recommendations, but provided a technical comment, which we incorporated. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will provide copies of this report to other interested congressional parties; the Secretaries of Commerce, Labor, and VA; the Administrator of EPA; the Director of Census; the Acting Director of OPM; and the CHCO Council’s Leadership and Succession Planning Subcommittee. We will also make this report available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-6806 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
To review how federal agencies are implementing succession planning and management efforts, we selected the Department of Labor (DOL), the Veterans Health Administration (VHA), the Environmental Protection Agency (EPA), and the Census Bureau for our review. We considered the nature of their succession challenges, agency missions, and prior GAO human capital work conducted at these agencies. These agencies represent an array of organizational structures, missions, and succession challenges. We analyzed strategic, human capital, workforce, succession, and training and development plans, performance contracts, human capital team charters, and diversity information from the selected agencies. In addition, we reviewed policies and guidance on succession-related issues from the Office of Personnel Management (OPM), the Equal Employment Opportunity Commission (EEOC), and the Merit Systems Protection Board (MSPB) because of their responsibilities for ensuring the fair application of personnel decisions, such as selection for training and development programs. We also interviewed agency, OPM, EEOC, and MSPB officials involved with strategic, human capital, and succession planning and management. The scope of our work did not include independent evaluation or verification of the effectiveness of the succession planning and management initiatives used in the four agencies, including any performance results that agencies attributed to specific practices or aspects of their programs. We assessed the reliability of staffing and projection data provided to us by the Census Bureau, DOL, EPA, VHA, and OPM to ensure the data we used in this report were complete and accurate by (1) interviewing agency officials knowledgeable about the data and (2) performing manual and electronic testing, when applicable. We determined that these data were sufficiently reliable for the purposes of this engagement. 
To get the varied perspectives of agencies’ staff located in headquarters and regional offices, we interviewed agency officials in Washington, D.C.; Charlotte, North Carolina; and Los Angeles and San Francisco, California. We conducted our study from June 2004 through April 2005. In addition to the contact named above, Lisa Shames, Naved Qureshi, Peter Rumble, Jennifer Cooke, Erin Murello, and Elena Lipson made key contributions to this report.
As the federal government confronts an array of challenges in the 21st century, it must employ strategic human capital management, including succession planning, to help meet those challenges. Leading organizations go beyond a succession planning approach that focuses on replacing individuals and engage in broad, integrated succession planning and management efforts that focus on strengthening current and future organizational capacity. GAO reviewed how the Census Bureau, the Department of Labor (DOL), the Environmental Protection Agency (EPA), and the Veterans Health Administration (VHA) are implementing succession planning and management efforts. The Census Bureau, DOL, EPA, and VHA have all implemented succession planning and management efforts that collectively are intended to strengthen organizational capacity. However, in light of governmentwide fiscal challenges, the agencies have opportunities to enhance some of their succession efforts. While all of the agencies have assigned responsibility for their succession planning and management efforts to councils or boards, VHA has established a subcommittee and high-level positions that are directly responsible for its succession efforts. Also, VHA and the Census Bureau specifically mention succession planning and management as performance expectations in their executives' performance plans. The four agencies have begun to link succession efforts to strategic planning. For example, DOL plans to shift from a historical enforcement role to a compliance assistance and consulting role, requiring stronger skills in communication and analysis. To attract and retain employees with such skills, DOL launched the Master of Business Administration (MBA) Fellows program in 2002, which it considers one of its major succession training and development programs. Monitoring mission-critical workforce needs helps agencies make informed planning decisions. 
DOL, EPA, and VHA have identified gaps in occupations or competencies, have undertaken strategies to address these gaps, and are planning or taking steps to monitor their progress in closing these gaps. The Census Bureau could monitor its mission-critical occupations more closely and at a higher level to ensure it is prepared for the 2010 Decennial Census. Effective training and development programs can enhance the federal government's ability to achieve results. All of the agencies' succession efforts include training and development programs at all organizational levels. However, in the current budget environment, there are opportunities to coordinate and share these programs and create synergies through benchmarking with others, achieving economies of scale, limiting duplication of efforts, and enhancing the effectiveness of programs, among other things. Performance measures for these programs can also help agencies evaluate these programs' effects on organizational capacity and justify their value. Finally, agencies have recognized the importance of diversity to a successful workforce and use succession planning and management to enhance their workforce diversity.